<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Oleksandr Mizov</title>
    <description>The latest articles on DEV Community by Oleksandr Mizov (@mizovoo).</description>
    <link>https://dev.to/mizovoo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F694049%2F43a7539f-8622-4e60-9a60-f30f3961afcf.jpeg</url>
      <title>DEV Community: Oleksandr Mizov</title>
      <link>https://dev.to/mizovoo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mizovoo"/>
    <language>en</language>
    <item>
      <title>Different ways to share the code</title>
      <dc:creator>Oleksandr Mizov</dc:creator>
      <pubDate>Fri, 02 Sep 2022 07:00:05 +0000</pubDate>
      <link>https://dev.to/mizovoo/different-ways-to-share-the-code-2o07</link>
      <guid>https://dev.to/mizovoo/different-ways-to-share-the-code-2o07</guid>
      <description>&lt;h2&gt;Why?&lt;/h2&gt;

&lt;p&gt;Once your project has grown enough, you will ask: "How do I reuse code across functionalities, modules, and applications?"&lt;/p&gt;

&lt;p&gt;We discussed how to make code reusable in &lt;a href="https://dev.to/mizovoo/how-to-reuse-the-code-to-speed-up-the-development-1kfd"&gt;the previous article&lt;/a&gt;. But what are the ways to distribute that code?&lt;/p&gt;

&lt;h2&gt;Code distribution&lt;/h2&gt;

&lt;p&gt;There are different approaches to code distribution. They all have pros and cons, which we should evaluate before adopting a particular strategy. &lt;/p&gt;

&lt;h3&gt;Local code distribution&lt;/h3&gt;

&lt;p&gt;Let's start where we left off in &lt;a href="https://dev.to/mizovoo/how-to-reuse-the-code-to-speed-up-the-development-1kfd"&gt;the previous article&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;TL;DR: we have our code organized as a set of small composable code units, and the functionalities are maintained as compositions of those reusable units.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When we spot that a new functionality needs code we have already written, it is the perfect time to extract those code units into a shared place in the project. That place is usually a dedicated directory containing all shared code, and the critical point is that we can reference it from anywhere in the application. It seems like a simple, straightforward solution for such a task, and I fully agree. What we all tend to miss are the caveats hidden behind that simplicity. To mitigate those pitfalls, first and foremost, we need to be aware of them:&lt;/p&gt;

&lt;p&gt;PROS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Straightforward&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast&lt;/strong&gt; to adopt&lt;/li&gt;
&lt;li&gt;Improved code &lt;strong&gt;quality&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No double&lt;/strong&gt; work - &lt;strong&gt;reusability&lt;/strong&gt; itself

&lt;ul&gt;
&lt;li&gt;Features sharing&lt;/li&gt;
&lt;li&gt;Bug fixes are done once and then available for all code users&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the pros lead us to the other side of the coin - the cons of that approach.&lt;/p&gt;

&lt;p&gt;CONS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single point of failure

&lt;ul&gt;
&lt;li&gt;Bug introduced once will get everywhere&lt;/li&gt;
&lt;li&gt;As a result, our code becomes more fragile&lt;/li&gt;
&lt;li&gt;That fragility increases maintenance costs&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Maintenance cost&lt;/li&gt;
&lt;li&gt;Additional abstractions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fascinating thing here is that you never know whether a given behavior is a bug or whether someone relies on it as a feature. As a result, you can fix the bug and break other functionality that relies on that buggy behavior.&lt;/p&gt;

&lt;p&gt;Fortunately, we can leverage many patterns, principles, and best practices to mitigate those limitations, e.g., SOLID, GRASP, the GoF patterns, etc. In particular, automated testing of the functionality will help avoid such regressions. The neat bonus of the approach described in &lt;a href="https://dev.to/mizovoo/how-to-reuse-the-code-to-speed-up-the-development-1kfd"&gt;the previous article&lt;/a&gt; is that you have already mitigated those issues by applying the principles and covering the code with unit tests. And if those tests and principles initially guarded a single functionality, now they guard several, and with the project's growth they will protect even more. It looks good so far, except for one thing: one day we will hit the hidden limitation of this approach, namely that we can't share the code between several applications. At that point, we either have to accept code duplication or search for other solutions.&lt;/p&gt;

&lt;h3&gt;Repository code distribution&lt;/h3&gt;

&lt;p&gt;The application we develop has grown, and we must decide how to distribute our code to other applications. To achieve that, we can pack the code into a deliverable unit, usually called a package. We already have everything we need: the shared source code lives in a dedicated directory of our application, so we only need to pack it and publish it somewhere. Again, the initial simplicity hides the actual caveats of the approach, so let's glance at them first and discuss the details afterward.&lt;/p&gt;

&lt;p&gt;PROS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code &lt;strong&gt;quality&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;No &lt;strong&gt;double work&lt;/strong&gt; - &lt;strong&gt;reusability&lt;/strong&gt; itself&lt;/li&gt;
&lt;li&gt;Code is &lt;strong&gt;distributed&lt;/strong&gt; across several apps&lt;/li&gt;
&lt;li&gt;Improved &lt;strong&gt;transparency&lt;/strong&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CONS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single point of failure&lt;/li&gt;
&lt;li&gt;Additional abstractions&lt;/li&gt;
&lt;li&gt;Increased maintenance cost&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Additional complexity&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Harder to change&lt;/strong&gt; a released public code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As expected, we inherited all the pros and cons of the previous solution and added several new ones. That is fine and expected.&lt;/p&gt;

&lt;p&gt;Let's take a look at the pros. The characteristics of code distribution have evolved slightly: the code can now be used in several applications. But to make other developers use the library, you need to work on the library's &lt;strong&gt;transparency&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;While the same developers both create and use the library, transparency is not so crucial. Ideally, the development team is familiar with the library codebase and aware of the recent and upcoming changes. And if developers introduce changes to the shared code, they migrate all users of that code along the way.&lt;/p&gt;

&lt;p&gt;As the library grows, such migrations become impossible or undesirable: we need to concentrate on the library's development, not on code migrations (assuming code migration is not your business, of course). So at that point, we arrive at the need for improved transparency. Several approaches can help library developers communicate with library users:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;code encapsulation to avoid exposing the package internals&lt;/li&gt;
&lt;li&gt;versioning&lt;/li&gt;
&lt;li&gt;changelogs&lt;/li&gt;
&lt;li&gt;migration guides&lt;/li&gt;
&lt;li&gt;migration utilities&lt;/li&gt;
&lt;li&gt;roadmaps&lt;/li&gt;
&lt;li&gt;documentation&lt;/li&gt;
&lt;li&gt;etc.&lt;/li&gt;
&lt;/ul&gt;
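&lt;p&gt;As an illustration of the versioning item above, here is a minimal sketch of semantic versioning (the bumpVersion helper is hypothetical): the part of the version you bump tells library users how risky the upgrade is.&lt;/p&gt;

```typescript
type Change = "breaking" | "feature" | "fix";

// Semantic versioning: MAJOR.MINOR.PATCH, where the bumped part
// signals how risky the upgrade is for library users.
export function bumpVersion(version: string, change: Change): string {
  const [major, minor, patch] = version.split(".").map(Number);
  if (change === "breaking") return (major + 1) + ".0.0";
  if (change === "feature") return major + "." + (minor + 1) + ".0";
  return major + "." + minor + "." + (patch + 1);
}
```

&lt;p&gt;Combined with a changelog entry per release, this gives users a cheap, reliable signal of what a new version may break.&lt;/p&gt;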

&lt;p&gt;That transparency greatly benefits library developers, not to mention library users. Unfortunately, all those benefits come at the price of increased maintenance cost, additional abstractions, and, as a result, increased complexity of the overall solution. On top of that, it becomes harder for library developers to change behavior that is already available to package users, so breaking changes become more costly for both library developers and users.&lt;/p&gt;

&lt;p&gt;Library transparency requires a defined process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CI/CD process&lt;/li&gt;
&lt;li&gt;versioning strategy&lt;/li&gt;
&lt;li&gt;release process&lt;/li&gt;
&lt;li&gt;documentation process&lt;/li&gt;
&lt;li&gt;testing process&lt;/li&gt;
&lt;li&gt;etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have those things already defined, it could ease the adoption. Transparency is a vital thing for publicly shared libraries, and it helps build trust. Trust is one of the factors that determine the library's success. &lt;/p&gt;

&lt;p&gt;Trust is still essential for private libraries used inside an organization, but it is less crucial there, mostly because you provide a unique product that cannot be easily substituted, and you start with some initial trust credit. Try not to overuse it: the internal users of your package could duplicate the desired functionality to avoid the changes your library could bring. So trust is vital in both cases; the difference is the initial trust credit you have.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In summary&lt;/strong&gt;, I emphasize the &lt;strong&gt;increased complexity&lt;/strong&gt; of the solution compared to the locally distributed code and encourage you to evaluate the benefits of such migration first.&lt;/p&gt;

&lt;h3&gt;Monorepo code distribution&lt;/h3&gt;

&lt;p&gt;Let's take a step back and discuss an intermediate solution that applies to some cases. If you only need to &lt;strong&gt;share the code between internal applications&lt;/strong&gt;, there is another option: use a monorepo to organize the applications and the shared code. That will allow other teams to work on their apps with the shared code available, and it will ease a later migration to repository-distributed code and its further development. In addition, it will decrease the upfront development and organizational cost of adoption compared with repository-distributed code.&lt;/p&gt;

&lt;p&gt;With a monorepo, you won't need to publish your libraries. Instead, you will be able to compile the applications from the locally shared code. As the teams grow, you will gradually adopt the approaches that increase the transparency of the code. So eventually, publishing the libraries to a dedicated repository will be a matter of setting up CI/CD and a package repository.&lt;/p&gt;

&lt;h3&gt;Distributed code as a service&lt;/h3&gt;

&lt;p&gt;We have adopted an approach for distributing code between several applications. But there is one limitation that I intentionally omitted from the cons section: the &lt;strong&gt;technology limitation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We created the library gradually, through an evolutionary process, from the sources of our initial application, and that resulted in a technology lock-in for our reusable code. We are locked into our chosen programming language, paradigms, framework, and libraries. With frameworks and libraries, we can always refactor the code and extract the coupling into another layer. But it is a different story when we need to share functionality between applications written in different programming languages.&lt;/p&gt;

&lt;p&gt;Here is the perfect spot for the other code distribution technique that can mitigate technological limitations - &lt;strong&gt;distributed code as a service&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We won't eradicate limitations because we still need to depend on some lower-level abstractions. (E.g., transport level abstractions, communication protocols, etc.) It won't be an issue for us until all our clients support them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Distributed code as a service is an approach where the shared code moves to a server and is executed there. There are plenty of ways to implement it. For example, it could be a REST API with endpoints that serve the dedicated functionalities (reusable composed flows), lambdas for our small pure code units, or an event-driven service based on persistent WebSocket connections, etc. You should carefully evaluate the pros and cons of each of those sub-approaches and select the one that serves your application.&lt;/p&gt;
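&lt;p&gt;Here is a minimal sketch of the REST flavor, assuming a Node.js runtime (the applyDiscount unit and the endpoint shape are made up for illustration): the shared logic stays a plain function, and the service is only a thin transport layer around it, so any client that speaks HTTP can reuse it.&lt;/p&gt;

```typescript
import { createServer } from "node:http";

// The shared logic stays a plain, testable function...
export function applyDiscount(amountCents: number, percent: number): number {
  return Math.round((amountCents * (100 - percent)) / 100);
}

// ...and the service is only a thin transport layer around it, e.g.
// GET /?amount=2000&percent=25 responds with {"total":1500}.
export const server = createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const amount = Number(url.searchParams.get("amount"));
  const percent = Number(url.searchParams.get("percent"));
  res.setHeader("content-type", "application/json");
  res.end(JSON.stringify({ total: applyDiscount(amount, percent) }));
});

// server.listen(8080); // any client technology can now call the shared code
```

&lt;p&gt;Keeping the unit separate from the transport means you can still unit-test it directly, and swap HTTP for another protocol without touching the logic.&lt;/p&gt;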

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;CAUTION.&lt;/strong&gt; Don't chase the trends. If your back-end is organized as a monolith, there is a reason for that. Pursuing the microservice trend by adding a new service for your shiny shared code could be a considerable overhead for your organization: you either set up microservice infrastructure to manage everything together or host the service separately, and both options spawn additional maintenance costs. On the other hand, if you see clear benefits in migrating to microservices, this could be an excellent opportunity to initiate that process.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's review the strong and weak sides of that approach. All of them apply to the sub-approaches as well; each sub-approach only adjusts that generic list with its own strengths, weaknesses, and mitigations of the base approach's issues.&lt;/p&gt;

&lt;p&gt;PROS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code &lt;strong&gt;quality&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No double&lt;/strong&gt; work - reusability itself&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast updates&lt;/strong&gt; of the application

&lt;ul&gt;
&lt;li&gt;Sometimes, you can update only server implementation without needing an application release to fix/change the functionality.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Code is &lt;strong&gt;distributed&lt;/strong&gt; across applications written with different technologies&lt;/li&gt;
&lt;li&gt;Reduced &lt;strong&gt;bundle size&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Improved &lt;strong&gt;transparency&lt;/strong&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Regarding the pros, we see the same picture as before, but we have reduced the &lt;strong&gt;technology limitations&lt;/strong&gt; to the desired level. And as a neat bonus, we get the possibility of changing the application's behavior by changing only the server implementation. That can be precious for applications with long release review cycles that are out of your control (e.g., mobile and desktop applications reviewed in app stores).&lt;/p&gt;

&lt;p&gt;CONS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single point of failure&lt;/li&gt;
&lt;li&gt;Additional abstractions&lt;/li&gt;
&lt;li&gt;Increased maintenance cost&lt;/li&gt;
&lt;li&gt;Additional &lt;strong&gt;complexity&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Additional &lt;strong&gt;computational costs&lt;/strong&gt;*&lt;/li&gt;
&lt;li&gt;Increased &lt;strong&gt;latency&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Additional &lt;strong&gt;context switching&lt;/strong&gt;*

&lt;ul&gt;
&lt;li&gt;Code is split between several places and potentially written using different technologies.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Technologies limitations

&lt;ul&gt;
&lt;li&gt;Bound to the selected protocol/ technologies&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With each evolution of the approach, we add new cons faster than pros. That is expected, however, because the &lt;strong&gt;complexity of the application&lt;/strong&gt; and of the overall solution grows rapidly.&lt;/p&gt;

&lt;p&gt;To the maintenance cost, we have now &lt;strong&gt;added computational costs&lt;/strong&gt;. Previously, we delegated those costs to our clients as minimal hardware requirements, spread gradually among all users. We can mitigate this by mixing the current approach with the repository distribution approach: execute the shared code on the client side for the applications that can run it, and on the server side for the others.&lt;/p&gt;

&lt;p&gt;We will also have &lt;strong&gt;increased latency&lt;/strong&gt; because of the added client-server communication latency. However, there are ways to mitigate it. For example, we could optimize the hot paths and bottlenecks, adopting code duplication with all its pros and cons.&lt;/p&gt;
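&lt;p&gt;One way to sketch such a latency mitigation (withCache is a hypothetical helper, and the remote call is simulated synchronously for brevity): memoize the results of pure, deterministic remote operations so repeated calls on a hot path skip the network round trip.&lt;/p&gt;

```typescript
type RemoteCall = (input: string) => string; // stand-in for a network call

// Wraps a remote call with a memoizing cache. This is safe only because
// the shared code units are pure: the same input always yields the same
// output, so a cached result never goes stale.
export function withCache(call: RemoteCall): RemoteCall {
  const cache = new Map();
  return (input) => {
    if (cache.has(input)) return cache.get(input);
    const result = call(input);
    cache.set(input, result);
    return result;
  };
}
```

&lt;p&gt;For impure operations, you would need invalidation rules instead, which is exactly the kind of complexity this article keeps warning about.&lt;/p&gt;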

&lt;p&gt;For some clients' teams, we introduce additional &lt;strong&gt;context switching&lt;/strong&gt; required to contribute to the distributed server code.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;technology limitations&lt;/strong&gt; are still with us, and that is ok until we hit their limits again; we will work on solving the problem when we need to. As you have probably noticed, removing limitations is a costly operation, and you should go for it only if there are clear and prevailing benefits for the project.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;CAUTION.&lt;/strong&gt; A considerable number of projects have drowned in complexity intended to solve issues that were not even on the horizon. That is why I advocate an evolutionary approach: grow complexity as you need it, and apply the techniques that benefit your application now. With the project's growth, their positive impact will grow exponentially and pay great interest.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;This is not an exhaustive list of possible approaches to code distribution. I would be happy if you described other noteworthy approaches in the comments or started a discussion about the ones above.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Let's sum it up&lt;/strong&gt; and highlight the main points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There is an excellent palette of available approaches to distribute the shared code. They have pros and cons. First and foremost, you need to know your application and use case to evaluate the approaches and select one that suits your case well enough, satisfying all requirements and leaving a space for further growth.&lt;/li&gt;
&lt;li&gt;Solving more advanced tasks usually results in increased complexity of the overall solution. Be aware and prepared for that increase. Make sure that the benefits cover the expenses. &lt;/li&gt;
&lt;li&gt;Follow the evolutionary approach. Don't try to solve problems that are not even on the horizon. Acknowledge them and leave a space for the solution.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>architecture</category>
      <category>beginners</category>
      <category>librarydev</category>
    </item>
    <item>
      <title>How to reuse the code to speed up the development</title>
      <dc:creator>Oleksandr Mizov</dc:creator>
      <pubDate>Mon, 22 Aug 2022 08:02:42 +0000</pubDate>
      <link>https://dev.to/mizovoo/how-to-reuse-the-code-to-speed-up-the-development-1kfd</link>
      <guid>https://dev.to/mizovoo/how-to-reuse-the-code-to-speed-up-the-development-1kfd</guid>
      <description>&lt;p&gt;When you start reading about some topic or adopting a practice, it is crucial to ask yourself, &lt;strong&gt;"Why?"&lt;/strong&gt; Why do I need it? Why is it essential for me? So let's start with why. Why do we need to speed up the development? The answer is pretty simple: to bring value to our users faster. Until our application gets to the users and starts serving their needs, it brings zero value and is literally useless.&lt;/p&gt;

&lt;p&gt;When you hear about reusability, you will probably remember the &lt;strong&gt;DRY&lt;/strong&gt; principle. DRY stands for "don't repeat yourself": we should avoid code duplication and try to reuse as much code as possible. We must not write the same code twice; instead, we reuse what was already written. DRY is a pretty straightforward principle, it sounds logical, and it seems to apply to all cases. Despite its simplicity, though, you should use it carefully in your application's codebase. It hides implicit drawbacks behind that simplicity, and it takes time to reveal them. Those drawbacks can result in significant development time losses, which will slow down our development. Quite the opposite of our initial goal, right?&lt;/p&gt;

&lt;p&gt;First and foremost, &lt;strong&gt;be careful applying the DRY principle&lt;/strong&gt; to significant chunks of functionality reused across different features. The code of those features could be similar or even identical right now, and extracting the reusable part could seem like a great idea: it results in less code and, therefore, fewer bugs. But you will create an implicit coupling between those two features, even though everything looks good and fine at the moment.&lt;/p&gt;

&lt;p&gt;The further it goes, the more interesting it gets. After several iterations, our features could start evolving in entirely different directions. It will be an incremental evolution, for sure. Slowly, our reusable code acquires newer and newer 'if' statements, and along with them, the features' specific abstractions start leaking into the reusable code. Remember the coupling we created? After a while, we could find ourselves desperately trying to glue two completely different functionalities together. That code will have tons of branching statements ('if', 'switch-case', etc.) and unrelated abstractions holding all that mess on their shoulders. One day they will shrug, but we don't see it now. We need just one more bug fix, and all will be perfect.&lt;/p&gt;

&lt;p&gt;Meanwhile, the code is becoming a single point of failure for both features, and with each new line, it gets more fragile and more sensitive to any change. Unfortunately, such code is hard to read, maintain, and extend. Additionally, it spawns an enormous amount of communication between the feature owners, required to devise how to squeeze another piece of code inside our Frankenstein, let alone refactoring and bug fixes. You need to be familiar with both feature codebases to change anything safely. After some time, we end up in a situation where a small change requires tremendous effort to implement and even more to test. And that is the dead end: the code needs to be completely refactored.&lt;/p&gt;
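&lt;p&gt;A deliberately simplified sketch of how that coupling tends to look in code (the function and its flags are hypothetical): a "reusable" unit that grows one flag and one branch per feature iteration.&lt;/p&gt;

```typescript
// Illustrative anti-pattern: one "shared" function gluing two features
// together, accumulating a flag and a branch per iteration.
export function formatTitle(
  title: string,
  opts: { forSearch?: boolean; forEmail?: boolean }
): string {
  let result = title.trim();
  if (opts.forSearch) {
    result = result.toLowerCase(); // the search feature's rule
  }
  if (opts.forEmail) {
    result = result.toUpperCase(); // the email feature's rule
  }
  // ...every new requirement adds another flag and another branch here,
  // and every caller pays for branches it never uses.
  return result;
}
```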

&lt;p&gt;I described an exaggerated case, for sure; the intention was to make the issues more prominent. The right question is, "How do we avoid such situations? Should we duplicate all the code and avoid code reuse like the plague?" No, certainly not. The part I intended to highlight in the example was the size of the code we were trying to reuse: a significant chunk of common functionality that evolved differently for the different features.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;If the functionalities seem similar now, they could evolve in entirely different directions after several iterations.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Being aware of the evolutionary nature of modern applications is crucial. That awareness will make you more sensitive to such situations, and you will spot them at an early stage.&lt;br&gt;
The general recommendation at any stage would be to extract and reuse the &lt;strong&gt;smallest possible blocks&lt;/strong&gt; - your application's units.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A code unit could be a class, function, method, etc.; the smaller the code block is, the easier it is to reuse.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Having numerous small building blocks, we can compose them in various ways, crafting a unique processing flow for each particular use case without creating tight coupling between any of them. We need to keep the building blocks simple and sweet.&lt;/p&gt;
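&lt;p&gt;The idea can be sketched as follows (the units and the composeSteps helper are illustrative, not a prescribed API): each unit does one small thing, a flow is just a composition of units, and a new use case gets a new composition instead of new branches inside old code.&lt;/p&gt;

```typescript
type Step = (s: string) => string;

// Small pure units: each knows nothing about the others.
const trim: Step = (s) => s.trim();
const lower: Step = (s) => s.toLowerCase();
const dashes: Step = (s) => s.replace(/ +/g, "-");

// A flow is just a composition of units.
export function composeSteps(...steps: Step[]): Step {
  return (input) => steps.reduce((value, step) => step(value), input);
}

// Two use cases, zero branching: each gets its own composition.
export const toSlug = composeSteps(trim, lower, dashes);
export const toSearchKey = composeSteps(trim, lower);
```

&lt;p&gt;Compare this with the flag-driven alternative: here, adding a use case means composing a new flow, not threading another boolean through shared code.&lt;/p&gt;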

&lt;blockquote&gt;
&lt;p&gt;Numerous software development principles can help us keep that simplicity and sweetness: SOLID, GRASP, OOP, FP, etc. The neatest thing is that those principles are much easier to apply to small pieces of code than to big chunks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Is that idea a silver bullet? No, not at all. That approach has several pros and cons, and you should carefully evaluate them before applying them to your project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PROS:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improves code reusability&lt;/li&gt;
&lt;li&gt;Improves code testability (unit tests are the easiest to adopt in general, and especially for small code units)&lt;/li&gt;
&lt;li&gt;Improves separation of concerns&lt;/li&gt;
&lt;li&gt;Improves code locality&lt;/li&gt;
&lt;li&gt;Improves code readability, making the code easier to reason about&lt;/li&gt;
&lt;li&gt;Makes it easier to identify violations of software development principles and best practices, as well as "bad code smells"&lt;/li&gt;
&lt;li&gt;Improves code composability; by composing small building blocks in various ways, we achieve unique code execution flows for each use case&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Adopting the automation testing practices and software development principles will help build a robust set of reusable code blocks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CONS:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More abstractions in the code&lt;/li&gt;
&lt;li&gt;Longer onboarding time&lt;/li&gt;
&lt;li&gt;Higher upfront development cost&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More granular decomposition leads to a larger number of small building blocks and, in turn, to an increased demand for various abstractions. In general, it increases the time required for development, but only while you are adopting the approach. Once you get used to it, you will have a defined set of abstractions, and the decomposition will take the same amount of time as writing the code the way you did before. To ease the reuse of those units, you can collect them into modules based on their responsibilities (use GRASP as a reference). That will help mitigate the cons of the approach as well.&lt;/p&gt;

&lt;p&gt;Another critical point is the &lt;strong&gt;purity&lt;/strong&gt; of the code units. To achieve the easiest reusability experience, keep the code units pure, i.e., free of side effects. Side effects are any I/O operations or operations that change state outside of the function. Pure code units produce the same output for the same input, no matter when and how many times in a row they are executed. (To better understand purity, read more about pure functions and immutability.)&lt;/p&gt;
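&lt;p&gt;A minimal before-and-after sketch of purity (the tax example is hypothetical): the impure version reads mutable outside state, while the pure version receives everything as arguments and therefore always returns the same output for the same input.&lt;/p&gt;

```typescript
// Impure: reads mutable state outside the function, so the same input
// can produce different outputs over time.
let taxRatePercent = 20;
export function totalImpure(amountCents: number): number {
  return amountCents + (amountCents * taxRatePercent) / 100;
}

// Pure: everything it needs comes in as arguments, which makes the unit
// trivially reusable and testable.
export function totalPure(amountCents: number, ratePercent: number): number {
  return amountCents + (amountCents * ratePercent) / 100;
}
```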

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Divide and &lt;del&gt;conquer&lt;/del&gt; compose&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One of the cons I highlighted is the increased upfront development time, which seems to contradict our goal of decreasing it. That is only partially valid, and only for the upfront cost: to gain interest, you need to invest at the beginning, and the approach's benefits become much more visible as the project grows. Still, high upfront costs can be an unaffordable luxury at the start of a project, for a startup, for a proof of concept, and in many other cases. As we stated at the beginning of the article, the software is useless until it is delivered to its user and serves the user's needs. Here is the place for the last piece of the puzzle, which will reveal the whole picture.&lt;/p&gt;

&lt;p&gt;In the beginning, we discussed the evolutionary nature of the requirements for software. We should not resist that nature by trying to prevent changes; instead, we should embrace them, literally aligning the nature of the application with the nature of the requirements. After all, the requirements and the application are just different representations of the same thing: the requirements are the mental model, and the application is its physical representation. So we must follow an evolutionary approach while building applications.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;evolutionary approach&lt;/strong&gt; is about expecting changes and leaving room for them. I want to emphasise that we are not trying to predict the changes or bring in some redundant flexibility (which you probably won't need). No, it is about leaving space for new functionality and the ability to adapt to new requirements.&lt;/p&gt;

&lt;p&gt;How does it work? As I said, our application is a set of small reusable units composed in different ways. So what do we do if we receive a new, utterly different set of requirements? We recompose our robust, reusable units in a new way. They are like LEGO blocks: with a sufficient quantity, we can build anything from them.&lt;/p&gt;

&lt;p&gt;It sounds excellent but too generic, and it is unclear how it applies to particular cases. So let's imagine we just started working on a project; we don't need to reuse any code so far, and that is great. You develop the features, and when you encounter the same functionality that needs to be implemented again, you start the code extraction. You don't need to extract it beforehand or create any overhead yet. Don't try to solve issues that do not exist and probably never will*.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;*&lt;/strong&gt; At least in my experience, the probability of hitting the desired extensibility point was pretty low, and usually that redundant flexibility created such a dramatic maintenance overhead that it slowed down the evolution of our code. Literally speaking, it is like premature** optimisation: you sacrifice the code's readability and maintainability, most likely optimising code that is outside of the hot path or the bottleneck.&lt;br&gt;
&lt;strong&gt;**&lt;/strong&gt; By premature, I mean optimisation that is not backed by data. Ideally, it should be production data from the user-facing system; otherwise, it is an optimisation of assumptions that will increase the time to market for your product.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With the current approach, we are still not embracing the changes. We are just solving today's tasks. So how can we embrace them?&lt;br&gt;
We need two key things for that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We need to have &lt;strong&gt;low-cost extensibility points&lt;/strong&gt; in our code.&lt;/li&gt;
&lt;li&gt;We need to be &lt;strong&gt;confident&lt;/strong&gt; that new changes don't break existing functionality.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's start with &lt;strong&gt;confidence&lt;/strong&gt;. To be sure that nothing is broken in our solution, we need to have our code covered with automated tests. Tests from all walks of life - unit tests, integration, e2e, etc.&lt;/p&gt;

&lt;p&gt;It is clear with confidence, so what about &lt;strong&gt;extensibility points&lt;/strong&gt; in our code? Here we return to the previously stated idea of dividing and composing things: split your code into small units from the beginning. The smaller the pieces are, the more potential extensibility points you get, i.e., more places where your application can adjust its behaviour to new requirements.&lt;/p&gt;

&lt;p&gt;As a bonus, we get improved &lt;strong&gt;testability&lt;/strong&gt; of the overall solution: smaller building blocks are much easier to cover with tests. Another plus is that we can minimise code branching: we can &lt;strong&gt;compose a new flow&lt;/strong&gt; by reusing old building blocks instead of trying to squeeze new functionality into the existing one. The fewer branches the code has, the more likely it is to have a single responsibility, and as a result, it is much easier to reason about. The neatest thing is that you will get used to such granular development quickly, and it won't bring much overhead. Even if it does, you will save a lot of time on writing tests, reading and reasoning about the code, and debugging it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One more thing&lt;/strong&gt;: making code reusable and available to other parts of the application, or to other applications, becomes as easy as copying&amp;amp;pasting the code units from one file to another. That is what I mean by &lt;strong&gt;evolutionary development&lt;/strong&gt;: you write the code you need, which is beneficial here and now, and you approach it in a way that embraces future changes. It is like putting $100 into your bank account and immediately getting $90 or even $101 back as cashback.&lt;/p&gt;




&lt;p&gt;Let's sum it up and highlight the main points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code duplication is ok&lt;/strong&gt; if you can describe its purpose. E.g., We are duplicating the composition of our code units with slight adjustments. &lt;em&gt;Why?&lt;/em&gt; To have two flows that have single responsibility each and avoid branching. &lt;em&gt;Why is it beneficial for us?&lt;/em&gt; That will improve code readability, testability, and maintainability and ease its future refactoring.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code reuse can speed up and slow down&lt;/strong&gt; the development process depending on the chosen approach.&lt;/li&gt;
&lt;li&gt;Divide the code into &lt;strong&gt;smaller pieces&lt;/strong&gt;, automate their testing and compose them in various manners to achieve different behaviours.&lt;/li&gt;
&lt;li&gt;Follow the &lt;strong&gt;evolutionary approach&lt;/strong&gt;. You don't need to predict the future, but you need to prepare for it.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>codequality</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
