<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aymane Harmaz</title>
    <description>The latest articles on DEV Community by Aymane Harmaz (@aharmaz).</description>
    <link>https://dev.to/aharmaz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F851827%2Fdd8b6735-1f51-4066-817e-abe65b7ada35.jpeg</url>
      <title>DEV Community: Aymane Harmaz</title>
      <link>https://dev.to/aharmaz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aharmaz"/>
    <language>en</language>
    <item>
      <title>Software Architecture Styles : Monolith, Modulith, Micro-services, which option is better for you</title>
      <dc:creator>Aymane Harmaz</dc:creator>
      <pubDate>Wed, 25 Jun 2025 20:53:17 +0000</pubDate>
      <link>https://dev.to/aharmaz/software-architecture-styles-monolith-modulith-micro-services-which-option-is-better-for-you-fad</link>
      <guid>https://dev.to/aharmaz/software-architecture-styles-monolith-modulith-micro-services-which-option-is-better-for-you-fad</guid>
      <description>&lt;p&gt;As software developers we have multiple tools at our disposal so that we can build softwares, if we take the example of softwares built with Java, we have methods, and once we have a bunch of methods that are related we can group them together into classes, and these classes can be grouped in packages, and these packages can be externalized in modules.&lt;/p&gt;

&lt;p&gt;Software architecture is all about how these building blocks are linked and how they relate to each other.&lt;/p&gt;

&lt;p&gt;In this article we are going to cover the most commonly used software architecture styles, monolith and micro-services, then introduce another style that is considered a middle ground between the two, called the modulith (modular monolith).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a monolithic application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rl5baqqstapudx9mwwo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rl5baqqstapudx9mwwo.png" alt="monolith" width="800" height="671"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the monolithic style, there is a single code base and everything is packaged and deployed as a single unit.&lt;/p&gt;

&lt;p&gt;The data used by a monolithic application is stored inside a single database.&lt;/p&gt;

&lt;p&gt;For small to medium-sized applications that handle moderate data volumes, this architecture style remains very practical and efficient. Here are some of its advantages : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ease of development : developers can easily understand the flow of the application's processes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Simple to refactor and debug : with modern IDEs, these 2 tasks became easy and straightforward&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Low latency : calls happen between functions belonging to the same process, and there is no network overhead&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problems with monoliths start when the application grows in terms of the responsibilities, libraries, and data volumes it should handle. In these scenarios we have to deal with : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Inefficient scaling : if a single part of the application gets heavy traffic, the only option is to scale the whole application with all its parts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Complicated team parallelization : with this architecture style, it is very easy for business domains to become coupled to each other, and teams working on the same code base may run into merge conflicts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Poor resiliency : If one part of the application crashes, the whole application will be down.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What is a micro-services application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu20sfgm18b74078pshdp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu20sfgm18b74078pshdp.png" alt="micro-services" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This style is all about having small, independently deployed services with narrow responsibilities, running in separate processes and communicating with each other through network calls.&lt;/p&gt;

&lt;p&gt;Each service is supposed to have its own database.&lt;/p&gt;

&lt;p&gt;This architecture style was introduced to overcome the challenges monolithic applications face when they get bigger. Here are some of its advantages : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Selective scalability : It is possible to scale only the service that gets hot in terms of load and not the entire system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Clear context boundaries : each service should focus on a specific business domain and have a clear responsibility; this separation makes maintenance much easier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fault tolerance and enhanced resiliency : if one service goes down, the other services can continue serving requests.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Micro-services do not offer these benefits for free; they also have downsides : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Increased latency : communication between services involves network calls, and networks bring potential unreliability, latency, and timeouts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Additional development complexity : with this style, developers have to put in extra effort to deal with service discovery, data consistency, and distributed logging / tracing / monitoring&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Is micro-services style better than monolithic style ?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most of the time we have a negative view of the monolith and a very good impression of micro-services, generally because micro-services have been pushed by companies that dominate the tech industry. However, this perspective is quite subjective.&lt;/p&gt;

&lt;p&gt;Monoliths should be seen as very useful at a smaller scale, where a small team can produce simple applications and be very productive without any of the overhead of micro-services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Modular Monolith (Modulith) : A middle ground&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdi0cx664c5sppsgj3u47.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdi0cx664c5sppsgj3u47.png" alt="modular monolith" width="800" height="671"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is another architecture style besides monolith and micro-services, called the modular monolith (modulith). It focuses on breaking an application into business modules. Those modules all live in the same code base, so it is still a monolith, but their source code is not tangled together: the modules are isolated from each other, and communication between them happens through APIs or events.&lt;/p&gt;

&lt;p&gt;When talking about modules in a modular monolith, we are referring to business modules, not Java modules or build tool modules; those are completely different concepts.&lt;/p&gt;

&lt;p&gt;What is interesting about this style is that you can benefit from the advantages of both monoliths and micro-services : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The structure of the application is clear making it easier for developers to understand the flows.&lt;/li&gt;
&lt;li&gt;The source code is in one place, which lets you benefit from IDE refactoring and debugging features&lt;/li&gt;
&lt;li&gt;There is no network overhead, meaning low latency.&lt;/li&gt;
&lt;li&gt;There are clear context boundaries between the modules&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The micro-services benefits that you do not get are : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fault Tolerance (Resiliency)&lt;/li&gt;
&lt;li&gt;Selective Scalability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How to do moduliths in Java&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Java, there are three main approaches for implementing moduliths :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Package per business module&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build tool module per business module (Using Maven or Gradle)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Java module per business module (JPMS)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Modularization using packages&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the simplest and most common approach. The application remains a single traditional Java module, and each business module is placed in a dedicated Java package (e.g., com.myapp.orders, com.myapp.billing, etc.).&lt;/p&gt;

&lt;p&gt;This approach is simple to implement; however, it enforces strict boundaries between the business modules neither at compile time nor at runtime. To ensure low coupling between the modules, effort must be put into code reviews; without discipline, the project can easily turn into a tangled monolith.&lt;/p&gt;
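&lt;p&gt;One common mitigation is to expose each business module through a small public API type and keep its implementation hidden. Here is a minimal sketch of that idea, with all class names hypothetical (in a real project each class would live in its module's package, e.g. com.myapp.billing or com.myapp.orders, with the internals package-private) : &lt;/p&gt;

```java
// Sketch of package-based modularization (all names hypothetical):
// the billing module exposes only BillingApi; the orders module
// depends on that API, never on the billing internals.

interface BillingApi {
    long priceWithTaxCents(long netCents);
}

// Internal implementation of the billing module; callers only see BillingApi.
class DefaultBillingService implements BillingApi {
    public long priceWithTaxCents(long netCents) {
        return netCents + netCents / 5; // flat 20% tax, for illustration only
    }
}

// The orders module calls the billing module through its public API.
class OrderService {
    private final BillingApi billing;
    OrderService(BillingApi billing) { this.billing = billing; }
    long totalCents(long netCents) { return billing.priceWithTaxCents(netCents); }
}

public class ModulithDemo {
    public static void main(String[] args) {
        OrderService orders = new OrderService(new DefaultBillingService());
        System.out.println(orders.totalCents(1000)); // prints 1200
    }
}
```

&lt;p&gt;Nothing in the language stops OrderService from instantiating billing internals directly; keeping that rule is exactly what code reviews have to protect in this approach.&lt;/p&gt;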

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Modularization with Maven/Gradle multi-module projects&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this approach, each business module is split into its own Maven or Gradle project. Each module compiles to a separate JAR, and module dependencies are declared explicitly in the build configuration.&lt;/p&gt;

&lt;p&gt;The good point of this approach is that compile-time isolation is enforced by the build tool: a module can only access the modules it declares as dependencies, which reduces the risk of ending up with a tangled monolith. At runtime, however, all classes from all projects end up on the same classpath, so any class can still reach any other, for example through reflection.&lt;/p&gt;
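&lt;p&gt;As a sketch, the parent pom.xml of such a project lists the business modules, and each module only sees the modules it declares as dependencies (module and group names hypothetical) : &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- parent pom.xml --&amp;gt;
&amp;lt;modules&amp;gt;
    &amp;lt;module&amp;gt;billing&amp;lt;/module&amp;gt;
    &amp;lt;module&amp;gt;orders&amp;lt;/module&amp;gt;
&amp;lt;/modules&amp;gt;

&amp;lt;!-- orders/pom.xml : orders may call billing, never the other way around --&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;com.myapp&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;billing&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;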

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Modularization with JPMS&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With JPMS (introduced in Java 9), each business module is placed in its own Java module, defined with a module-info.java file. &lt;/p&gt;

&lt;p&gt;JPMS enforces isolation between contexts both at compile time and at runtime; however, it is a bit complex to integrate with modern frameworks like Spring Boot.&lt;/p&gt;
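&lt;p&gt;As a sketch, the descriptor of a billing business module could look like this (module and package names hypothetical) : &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// billing/src/main/java/module-info.java
module com.myapp.billing {
    exports com.myapp.billing.api;   // only the API package is visible to others
    requires com.myapp.shared;       // explicit dependency on another module
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;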

&lt;p&gt;In the next blog post I will introduce Spring Modulith, a project that was created to help us build modular monoliths in a simple and robust way using package modularization.&lt;/p&gt;

</description>
      <category>java</category>
      <category>architecture</category>
      <category>microservices</category>
      <category>development</category>
    </item>
    <item>
      <title>Database Transactions : Concurrency Control</title>
      <dc:creator>Aymane Harmaz</dc:creator>
      <pubDate>Wed, 26 Jun 2024 18:47:41 +0000</pubDate>
      <link>https://dev.to/aharmaz/database-transactions-concurrency-control-1h6i</link>
      <guid>https://dev.to/aharmaz/database-transactions-concurrency-control-1h6i</guid>
      <description>&lt;p&gt;The number of users who can use the database engine concurrently is a significant criteria for classifying the database management systems. A DBMS is single-user if at most one user at a time can use the engine and it is multiuser if many users can use it concurrently. Most of the DBMSs need to be multiuser, for example databases used in banks, insurance agencies, supermarkets should be multiuser, hundreds of thousands of users will be operating on the database by submitting transactions concurrently to the engine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Concurrency Control is needed&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When concurrent transactions are executed in an uncontrolled way, many issues can arise, such as dirty reads, non-repeatable reads, phantom reads, lost updates, and dirty writes. We will go through each of these issues.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Dirty Read&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Is a situation in which a transaction T1 reads the update of a transaction T2 that has not committed yet; if T2 then fails, T1 will have read and worked with a value that does not exist and is incorrect.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv556n5ytlfscffz54mh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv556n5ytlfscffz54mh.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Non-repeatable Read&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Is a situation in which a transaction T1 reads a given value from a table; if another transaction T2 later updates that value and T1 reads it again, it will see a value different from the one it got initially.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zbcpe949c0b6vw6714k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zbcpe949c0b6vw6714k.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Phantom Read&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Is a situation in which a transaction T1 reads a set of rows from a table based on some condition specified in the query's where clause; a transaction T2 then inserts a new row that also satisfies that condition, so if T1 performs the same query again it will this time also get the newly added row.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1lf1zjybrgy3niilcu54.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1lf1zjybrgy3niilcu54.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Lost Update&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Is a situation in which two transactions T1 and T2 read the same value and then both update it; after T1 commits, the change made by T2 is lost, considered as if it was never done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop4i8xdvj8ha1k8oyswt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop4i8xdvj8ha1k8oyswt.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Dirty Write&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Is a situation in which a transaction takes a value written by another transaction that has not yet committed, modifies it, and saves it, overwriting the uncommitted change.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fri7i77n6fpwix8cpduib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fri7i77n6fpwix8cpduib.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strategies for dealing with concurrent transactions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We have 2 options for controlling the execution of concurrent transactions : we either run the access operations on a specific data item sequentially across transactions, or we let those accesses execute in parallel. There are multiple isolation levels that can be used to implement these choices, and each of them prevents some of the concurrency-related issues and allows others.&lt;/p&gt;

&lt;p&gt;In the SQL standard, there are 4 isolation levels : &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ra8rue7a7tu1eh5hu40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ra8rue7a7tu1eh5hu40.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Oracle, there are only 2 isolation levels : &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4v10jvcxus465687esf9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4v10jvcxus465687esf9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Dirty reads are not allowed, since read committed is the lowest isolation level supported in Oracle.&lt;/p&gt;

&lt;p&gt;In PostgreSQL, there are 4 isolation levels, and read committed and read uncommitted behave in the same way : &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vp9ou0v1rwc4gwdei9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vp9ou0v1rwc4gwdei9w.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;read uncommitted and read committed behave in the same way, and neither of them allows dirty read or dirty write issues to happen.&lt;/p&gt;

&lt;p&gt;In MySQL, there are 4 isolation levels; read uncommitted prevents dirty writes but allows dirty reads : &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdv5h5dyg0b6t4u67lv42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdv5h5dyg0b6t4u67lv42.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>database</category>
      <category>transaction</category>
      <category>concurrency</category>
      <category>performance</category>
    </item>
    <item>
      <title>Database Transactions : Basic Concepts</title>
      <dc:creator>Aymane Harmaz</dc:creator>
      <pubDate>Wed, 26 Jun 2024 11:18:01 +0000</pubDate>
      <link>https://dev.to/aharmaz/database-transactions-basic-concepts-2gl2</link>
      <guid>https://dev.to/aharmaz/database-transactions-basic-concepts-2gl2</guid>
      <description>&lt;p&gt;The concept of transaction provides a mechanism for describing logical units of database processing, there are a lot of systems with large databases and hundreds of concurrent users executing database transactions, examples of such systems include airline reservation, banking, supermarket checkout and many others.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a Transaction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A transaction includes one or more database access operations forming a single unit of business work that should either be completed in its entirety or not done at all.&lt;/p&gt;

&lt;p&gt;If all the operations of a transaction are executed successfully, the transaction is committed and the changes made by its operations are kept and persisted in the target database; on the other hand, if any operation fails, the database is rolled back to its initial state, as if the transaction had never been executed.&lt;/p&gt;

&lt;p&gt;If the database operations in a transaction do not update the database but only retrieve data, the transaction is called a read-only transaction; otherwise it is known as a read-write transaction.&lt;/p&gt;

&lt;p&gt;You may wonder what a transaction looks like in real life; in fact a transaction can either be specified in a higher-level language like SQL or be embedded within an application program.&lt;/p&gt;

&lt;p&gt;When specifying a transaction in SQL, whether it must be started explicitly with a BEGIN or START TRANSACTION statement depends on the autocommit option of the underlying database management system. However, whenever we want to commit or roll back the transaction, we should use explicit COMMIT and ROLLBACK statements, setting clear boundaries for the transaction and preventing any confusion. Here is an example : &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ydcye2n358nb544sbpx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ydcye2n358nb544sbpx.png" alt="Image description" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When choosing to embed transactions within application programs, in most cases we benefit from abstracted management of the transaction lifecycle, including commit and rollback. Here is an example of a transaction in a Spring Boot application : &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw7mk9p377y06zf8dq95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw7mk9p377y06zf8dq95.png" alt="Image description" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Desirable Properties of a Transaction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For safe transaction processing, transactions should have 4 properties, called the ACID properties : &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Atomicity&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A transaction is unbreakable: it is an atomic unit of processing. Either all its operations are reflected properly in the database, or none are; it is the responsibility of the recovery subsystem of a DBMS to ensure atomicity. If a transaction fails to complete for some reason, such as a system crash in the midst of execution, the recovery technique must undo any effects of the transaction on the database. On the other hand, the write operations of a committed transaction must eventually be written to disk.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Consistency&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A transaction should be consistency preserving, meaning it should take the database from one consistent state to another state that respects the integrity constraints. This property is largely the responsibility of the programmer, who should not perform wrong operations in the transaction; nothing will prevent them from deleting the entire database in their transaction.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Isolation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A transaction should appear as though it is being executed in isolation from other transactions, even though many transactions are executing concurrently; this property is enforced by the concurrency control subsystem of a DBMS.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Durability&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The changes applied to the database by a committed transaction must persist in the database, and must not be lost because of any failure. This is the responsibility of the recovery subsystem.&lt;/p&gt;

</description>
      <category>database</category>
      <category>concurrency</category>
      <category>performance</category>
      <category>backend</category>
    </item>
    <item>
      <title>Database Migrations : Flyway for Spring Boot projects</title>
      <dc:creator>Aymane Harmaz</dc:creator>
      <pubDate>Thu, 06 Jun 2024 18:18:50 +0000</pubDate>
      <link>https://dev.to/aharmaz/database-migrations-flyway-for-spring-boot-projects-2coi</link>
      <guid>https://dev.to/aharmaz/database-migrations-flyway-for-spring-boot-projects-2coi</guid>
      <description>&lt;p&gt;Like Liquibase, Flyway can be used in 2 main manners for database migrations in Spring Boot projects, either on application startup or in an independent way.&lt;/p&gt;

&lt;p&gt;In this post we will see how to configure Flyway and use it in the context of a Spring Boot project for both development and release phases. You can find examples in the repository available at : &lt;a href="https://github.com/Aharmaz/flyway-demo"&gt;Github Repository&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fundamental Concepts of Flyway&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Migration and Migration Script&lt;/em&gt; :&lt;/p&gt;

&lt;p&gt;A migration is the smallest countable unit of change that Flyway can perform and register against a target database; it can involve one or more operations located in a file called a migration script.&lt;/p&gt;

&lt;p&gt;A migration in Flyway is equivalent to a changeset in Liquibase; however, a migration script in Flyway is intended to host only a single migration, whereas a migration script in Liquibase can host multiple changesets.&lt;/p&gt;

&lt;p&gt;In Flyway, executing a migration is equivalent to executing the migration script hosting that migration.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;flyway_schema_history Table&lt;/em&gt; : &lt;/p&gt;

&lt;p&gt;This is the table Flyway creates on a database and uses to keep track of which migrations have already been executed against that target database, so that it does not run the same migration multiple times and knows which migrations it should run at a given point in time (those that haven't been registered in the history table).&lt;/p&gt;

&lt;p&gt;Each migration is identified by the filepath of the migration file where it is located. When Flyway runs a migration, it calculates a checksum for the content of that migration and stores it inside the flyway_schema_history table, in order to detect whether an already applied migration is later modified.&lt;/p&gt;

&lt;p&gt;If Flyway notices, by comparing checksums, that a migration has been modified after it was applied, it will throw an error during the migration process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common Configuration&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;This consists of creating a folder in the resources directory of the project and populating it with the migration files; in the example I have named it db/migrations : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;V1__creating_persons_table.sql&lt;/li&gt;
&lt;li&gt;V2__inserting_rows_into_persons_table.sql&lt;/li&gt;
&lt;/ul&gt;
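&lt;p&gt;The contents of these scripts are plain SQL; as a sketch, the first one could look like this (column definitions hypothetical) : &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE persons (
    id         BIGINT PRIMARY KEY,
    first_name VARCHAR(100) NOT NULL,
    last_name  VARCHAR(100) NOT NULL
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;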

&lt;p&gt;&lt;strong&gt;Configuring Flyway to run migrations on application startup&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;This kind of behavior is commonly used at the development phase when a developer needs to update the state of the database he has on his local machine. Spring Boot offers some auto-configurations for launching the Flyway migration when the application is started.&lt;/p&gt;

&lt;p&gt;For those auto-configurations to work, we need to add some information to the configuration file of the Spring Boot app for local environments (application-local.yml), such as the database URL, the connection username and password, and the location of the migration files within the classpath of the application :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/flyway_demo
    username: postgres
    password: changemeinproduction
    driver-class-name: org.postgresql.Driver
  jpa:
    hibernate:
      ddl-auto: none
  flyway:
    locations: classpath:db/migrations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we should add the Flyway dependency to the pom.xml file of the project (or the build.gradle file if you are using Gradle) :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.flywaydb&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;flyway-core&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, when starting the application, Spring Boot will notice the presence of the Flyway dependency on the classpath and will use the information in the configuration file to trigger the auto-configuration responsible for launching the migration process automatically. Here is an example of what you could see in the logs when the migration process starts :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2024-06-06T19:01:09.407+01:00  INFO 3229 --- [           main] org.flywaydb.core.FlywayExecutor         : Database: jdbc:postgresql://localhost:5432/flyway_demo (PostgreSQL 16.0)
2024-06-06T19:01:09.428+01:00  WARN 3229 --- [           main] o.f.c.internal.database.base.Database    : Flyway upgrade recommended: PostgreSQL 16.0 is newer than this version of Flyway and support has not been tested. The latest supported version of PostgreSQL is 15.
2024-06-06T19:01:09.450+01:00  INFO 3229 --- [           main] o.f.c.i.s.JdbcTableSchemaHistory         : Schema history table "public"."flyway_schema_history" does not exist yet
2024-06-06T19:01:09.453+01:00  INFO 3229 --- [           main] o.f.core.internal.command.DbValidate     : Successfully validated 2 migrations (execution time 00:00.015s)
2024-06-06T19:01:09.486+01:00  INFO 3229 --- [           main] o.f.c.i.s.JdbcTableSchemaHistory         : Creating Schema History table "public"."flyway_schema_history" ...
2024-06-06T19:01:09.538+01:00  INFO 3229 --- [           main] o.f.core.internal.command.DbMigrate      : Current version of schema "public": &amp;lt;&amp;lt; Empty Schema &amp;gt;&amp;gt;
2024-06-06T19:01:09.547+01:00  INFO 3229 --- [           main] o.f.core.internal.command.DbMigrate      : Migrating schema "public" to version "1 - creating persons table"
2024-06-06T19:01:09.606+01:00  INFO 3229 --- [           main] o.f.core.internal.command.DbMigrate      : Migrating schema "public" to version "2 - inserting rows into persons table"
2024-06-06T19:01:09.647+01:00  INFO 3229 --- [           main] o.f.core.internal.command.DbMigrate      : Successfully applied 2 migrations to schema "public", now at version v2 (execution time 00:00.042s)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
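&lt;p&gt;The two migrations named in the logs above could look like the following (the file contents here are hypothetical; by default Flyway picks up versioned scripts like these from the db/migration folder on the classpath):&lt;/p&gt;

```sql
-- src/main/resources/db/migration/V1__creating_persons_table.sql
CREATE TABLE persons (
    id         BIGSERIAL PRIMARY KEY,
    first_name VARCHAR(100) NOT NULL,
    last_name  VARCHAR(100) NOT NULL
);

-- src/main/resources/db/migration/V2__inserting_rows_into_persons_table.sql
INSERT INTO persons (first_name, last_name) VALUES ('Jane', 'Doe');
```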



&lt;p&gt;&lt;strong&gt;Configuring Flyway to run migrations independently from running the application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This behavior is used when a new release of the software using a database is ready to be deployed, and a number of migrations must be applied to that database.&lt;/p&gt;

&lt;p&gt;Maven provides a plugin for running Flyway migrations. This plugin needs to know the database location, the connection username and password, and the location of the migration files in the project folder. This can be configured by adding a file named flyway.conf to the resources folder of the Spring Boot project, then referencing the location of that file in the pom.xml when adding the Flyway Maven plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flyway.user=postgres
flyway.password=changemeinproduction
flyway.schemas=demo_flyway
flyway.url=jdbc:postgresql://localhost:5432/demo_flyway
flyway.locations=src/main/resources/db/migrations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;build&amp;gt;
  &amp;lt;plugins&amp;gt;
      &amp;lt;plugin&amp;gt;
         &amp;lt;groupId&amp;gt;org.flywaydb&amp;lt;/groupId&amp;gt;
         &amp;lt;artifactId&amp;gt;flyway-maven-plugin&amp;lt;/artifactId&amp;gt;
         &amp;lt;version&amp;gt;6.5.7&amp;lt;/version&amp;gt;
         &amp;lt;configuration&amp;gt;
           &amp;lt;configFiles&amp;gt;
             &amp;lt;configFile&amp;gt;
               src/main/resources/flyway.conf
             &amp;lt;/configFile&amp;gt;
           &amp;lt;/configFiles&amp;gt;
         &amp;lt;/configuration&amp;gt;
      &amp;lt;/plugin&amp;gt;
  &amp;lt;/plugins&amp;gt;
&amp;lt;/build&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we want to launch the migration process using the Flyway Maven plugin, we must set the property spring.flyway.enabled to false so that no migration is executed when the application starts.&lt;/p&gt;
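&lt;p&gt;For example, in the properties file of the profile used for deployments (the YAML equivalent works as well):&lt;/p&gt;

```properties
# Keep the auto-configured Flyway migration from running at application startup;
# migrations are then triggered only through the Maven plugin.
spring.flyway.enabled=false
```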

&lt;p&gt;Finally, we should run the migrations using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./mvnw flyway:migrate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will then see the following logs, indicating that the migrations have been executed successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[INFO] Scanning for projects...
[INFO] 
[INFO] ---------------------&amp;lt; ma.demo.flyway:flyway-demo &amp;gt;---------------------
[INFO] Building flyway-demo 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- flyway-maven-plugin:6.5.7:migrate (default-cli) @ flyway-demo ---
[INFO] Flyway Community Edition 6.5.7 by Redgate
[INFO] Database: jdbc:postgresql://localhost:5432/flyway_demo (PostgreSQL 16.0)
[WARNING] Flyway upgrade recommended: PostgreSQL 16.0 is newer than this version of Flyway and support has not been tested. The latest supported version of PostgreSQL is 12.
[INFO] Creating schema "flyway_demo" ...
[INFO] Creating Schema History table "flyway_demo"."flyway_schema_history" ...
[INFO] Current version of schema "flyway_demo": null
[INFO] Migrating schema "flyway_demo" to version 1 - creating persons table
[INFO] Migrating schema "flyway_demo" to version 2 - inserting rows into persons table
[INFO] Successfully applied 2 migrations to schema "flyway_demo" (execution time 00:00.066s)
[WARNING] Flyway upgrade recommended: PostgreSQL 16.0 is newer than this version of Flyway and support has not been tested. The latest supported version of PostgreSQL is 12.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  0.797 s
[INFO] Finished at: 2024-06-06T19:04:59+01:00
[INFO] ------------------------------------------------------------------------
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Flyway simplifies database migrations by ensuring that changes are applied consistently. In the context of Spring Boot, it can be integrated easily to launch the migration process at application startup, helping developers focus on building new features without worrying about the state of their databases.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Database Migrations : Liquibase for Spring Boot Projects</title>
      <dc:creator>Aymane Harmaz</dc:creator>
      <pubDate>Sun, 26 May 2024 13:01:21 +0000</pubDate>
      <link>https://dev.to/aharmaz/liquibase-for-spring-boot-projects-5hf6</link>
      <guid>https://dev.to/aharmaz/liquibase-for-spring-boot-projects-5hf6</guid>
      <description>&lt;p&gt;There are 2 ways with which Liquibase can be used in the context of a Spring Boot Project to perform database migrations, the first one is during the development phase where each developer will want add his own modifications and the modifications added by his collegues to his local database, the second one is during the deployment phase where we will want to gather all the modifications and run them against a database used by a released version of the project&lt;/p&gt;

&lt;p&gt;In this post we will cover how Liquibase can be integrated with a Spring Boot project to help us perform database migrations in both phases, using examples from the repository that you can check at: &lt;a href="https://github.com/Aharmaz/liquibase-demo"&gt;Github Repository&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fundamental Concepts of Liquibase&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Changeset and Migration File&lt;/em&gt; :&lt;/p&gt;

&lt;p&gt;A changeset is the smallest unit of change that Liquibase can perform and register on a target database, and it can contain one or more operations.&lt;/p&gt;

&lt;p&gt;A migration file is a file that hosts one or more changesets.&lt;/p&gt;

&lt;p&gt;In Liquibase, executing a changeset does not necessarily mean executing the whole migration file that contains it, because that file may contain other changesets that were not executed.&lt;/p&gt;
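&lt;p&gt;As an illustration, here is what a migration file in Liquibase's SQL format could look like, hosting two changesets identified by an author and an id (the table and column names are made up for the example):&lt;/p&gt;

```sql
--liquibase formatted sql

--changeset aymane:1
CREATE TABLE books (
    id    BIGINT PRIMARY KEY,
    title VARCHAR(255) NOT NULL
);

--changeset aymane:2
ALTER TABLE books ADD COLUMN category VARCHAR(100);
```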

&lt;p&gt;&lt;em&gt;Changelog&lt;/em&gt; : &lt;/p&gt;

&lt;p&gt;A changelog is a file containing an ordered list of changesets, or references to migration files containing changesets.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;DATABASECHANGELOG Table&lt;/em&gt; : &lt;/p&gt;

&lt;p&gt;This is the table Liquibase creates on the target database and uses to keep track of which changesets have already been applied.&lt;/p&gt;

&lt;p&gt;A changeset won't be executed a second time if it has already been executed and registered in the DATABASECHANGELOG table.&lt;/p&gt;

&lt;p&gt;Each changeset is identified in Liquibase by 3 properties: id, author, and filepath. When Liquibase executes a changeset, it calculates a checksum of its content and stores it in the DATABASECHANGELOG table along with these 3 properties, in order to make sure that the changeset has not been changed over time.&lt;/p&gt;

&lt;p&gt;If, through these checksum calculations, Liquibase notices that a changeset has been modified after it was applied, it will throw an error or a warning.&lt;/p&gt;
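&lt;p&gt;The idea behind this validation can be sketched in a few lines (this is an illustration of the mechanism, not Liquibase's actual checksum algorithm; the changeset content and identifiers are made up):&lt;/p&gt;

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Sketch: detect whether a changeset's content was modified after it was
// applied, by comparing a checksum of the current content against the one
// stored at execution time (as in the DATABASECHANGELOG table).
public class ChecksumCheck {

    static String checksum(String changesetContent) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(changesetContent.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, digest).toString(16);
    }

    public static void main(String[] args) throws Exception {
        // Simulated history table: (id::author::filepath) -> stored checksum
        Map<String, String> changelogTable = new HashMap<>();

        String content = "CREATE TABLE books (id BIGINT PRIMARY KEY);";
        String key = "1::aymane::db/migrations/V1__creating_schema.sql";
        changelogTable.put(key, checksum(content));

        // Unmodified changeset: checksums match, so it is skipped, not re-run
        System.out.println(changelogTable.get(key).equals(checksum(content)));

        // Modified changeset: mismatch, Liquibase would raise an error/warning
        String tampered = content + " -- edited after being applied";
        System.out.println(changelogTable.get(key).equals(checksum(tampered)));
    }
}
```

&lt;p&gt;The first comparison prints true (the changeset is untouched and will simply be skipped), the second prints false (the content changed after it was applied).&lt;/p&gt;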

&lt;p&gt;&lt;strong&gt;Common Configuration&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;The first thing to do is to add the migration scripts containing the changesets intended to be applied to the target database:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;V1__creating_schema.sql&lt;/li&gt;
&lt;li&gt;V2__add_category_column_to_books_table.sql&lt;/li&gt;
&lt;li&gt;V3__adding_authors_table.sql&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then we need to add a changelog file referencing the migration scripts in a specific order:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="UTF-8" ?&amp;gt;
&amp;lt;databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                      http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd"&amp;gt;
    &amp;lt;include file="db/migrations/V1__creating_schema.sql" /&amp;gt;
    &amp;lt;include file="db/migrations/V2__add_category_column_to_books_table.sql" /&amp;gt;
    &amp;lt;include file="db/migrations/V3__adding_authors_table.sql" /&amp;gt;
&amp;lt;/databaseChangeLog&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Configuring Liquibase to run migrations on application startup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most of the time, this behavior of running migrations at application startup is used locally (when the Spring Boot application is executed with the local profile). This is why we should add information about the database and the location of the changelog file to the local configuration file (application-local.yml in the example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/demo_liquibase
    username: postgres
    password: changemeinproduction
    driver-class-name: org.postgresql.Driver
  jpa:
    hibernate:
      ddl-auto: none
  liquibase:
    change-log: classpath:/db/migrations/changelog.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to add the Liquibase dependency to the pom.xml file of the project (or the build.gradle file if you are using Gradle):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
  &amp;lt;groupId&amp;gt;org.liquibase&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;liquibase-core&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When starting the application, Spring Boot will notice the presence of the Liquibase dependency on the runtime classpath and will trigger the auto-configuration classes related to Liquibase, and an automatic migration process will be started against the configured database. Here is an example of the logging we should get when starting the app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2024-05-26T13:04:49.188+01:00  INFO 16644 --- [           main] liquibase.database                       : Set default schema name to public
2024-05-26T13:04:49.372+01:00  INFO 16644 --- [           main] liquibase.changelog                      : Creating database history table with name: public.databasechangelog
2024-05-26T13:04:49.418+01:00  INFO 16644 --- [           main] liquibase.changelog                      : Reading from public.databasechangelog
2024-05-26T13:04:49.476+01:00  INFO 16644 --- [           main] liquibase.lockservice                    : Successfully acquired change log lock
2024-05-26T13:04:49.478+01:00  INFO 16644 --- [           main] liquibase.command                        : Using deploymentId: 6725089478
2024-05-26T13:04:49.480+01:00  INFO 16644 --- [           main] liquibase.changelog                      : Reading from public.databasechangelog
Running Changeset: db/migrations/V1__creating_schema.sql::1::aymane
2024-05-26T13:04:49.507+01:00  INFO 16644 --- [           main] liquibase.changelog                      : Custom SQL executed
2024-05-26T13:04:49.510+01:00  INFO 16644 --- [           main] liquibase.changelog                      : ChangeSet db/migrations/V1__creating_schema.sql::1::aymane ran successfully in 18ms
Running Changeset: db/migrations/V2__add_category_column_to_books_table.sql::1::aymane
2024-05-26T13:04:49.523+01:00  INFO 16644 --- [           main] liquibase.changelog                      : Custom SQL executed
2024-05-26T13:04:49.525+01:00  INFO 16644 --- [           main] liquibase.changelog                      : ChangeSet db/migrations/V2__add_category_column_to_books_table.sql::1::aymane ran successfully in 5ms
Running Changeset: db/migrations/V3__adding_authors_table.sql::1::aymane
2024-05-26T13:04:49.540+01:00  INFO 16644 --- [           main] liquibase.changelog                      : Custom SQL executed
2024-05-26T13:04:49.542+01:00  INFO 16644 --- [           main] liquibase.changelog                      : ChangeSet db/migrations/V3__adding_authors_table.sql::1::aymane ran successfully in 12ms
2024-05-26T13:04:49.547+01:00  INFO 16644 --- [           main] liquibase.util                           : UPDATE SUMMARY
2024-05-26T13:04:49.547+01:00  INFO 16644 --- [           main] liquibase.util                           : Run:                          3
2024-05-26T13:04:49.547+01:00  INFO 16644 --- [           main] liquibase.util                           : Previously run:               0
2024-05-26T13:04:49.547+01:00  INFO 16644 --- [           main] liquibase.util                           : Filtered out:                 0
2024-05-26T13:04:49.548+01:00  INFO 16644 --- [           main] liquibase.util                           : -------------------------------
2024-05-26T13:04:49.548+01:00  INFO 16644 --- [           main] liquibase.util                           : Total change sets:            3
2024-05-26T13:04:49.548+01:00  INFO 16644 --- [           main] liquibase.util                           : Update summary generated
2024-05-26T13:04:49.549+01:00  INFO 16644 --- [           main] liquibase.command                        : Update command completed successfully.
Liquibase: Update has been successful. Rows affected: 3
2024-05-26T13:04:49.555+01:00  INFO 16644 --- [           main] liquibase.lockservice                    : Successfully released change log lock
2024-05-26T13:04:49.557+01:00  INFO 16644 --- [           main] liquibase.command                        : Command execution complete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Configuring Liquibase to run migrations independently from running the application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This behavior is used in the deployment phase, when we want to grab all the migration scripts added since the last release and execute them against a database deployed on a dev, staging or production environment.&lt;/p&gt;

&lt;p&gt;For that, there is a Maven plugin for Liquibase that can be used, but before adding it we should add a configuration file, liquibase.yml, containing information about the target database and the location of the changelog file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;url: jdbc:postgresql://localhost:5432/demo_liquibase
username: postgres
password: changemeinproduction
driver: org.postgresql.Driver
changeLogFile: src/main/resources/db/migrations/changelog.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we should add the plugin to the build section of the pom file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;build&amp;gt;
        &amp;lt;plugins&amp;gt;
            &amp;lt;plugin&amp;gt;
                &amp;lt;groupId&amp;gt;org.liquibase&amp;lt;/groupId&amp;gt;
                &amp;lt;artifactId&amp;gt;liquibase-maven-plugin&amp;lt;/artifactId&amp;gt;
                &amp;lt;version&amp;gt;4.5.0&amp;lt;/version&amp;gt;
                &amp;lt;configuration&amp;gt;
                    &amp;lt;propertyFile&amp;gt;
                        src/main/resources/liquibase.yml
                    &amp;lt;/propertyFile&amp;gt;
                &amp;lt;/configuration&amp;gt;
            &amp;lt;/plugin&amp;gt;
        &amp;lt;/plugins&amp;gt;
&amp;lt;/build&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An important thing to remember is to disable the Liquibase migration process at application startup when the application is started with a profile like integration or production (i.e., not the local profile). Here is an example of the application-prod.yml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/demo_liquibase
    username: postgres
    password: changemeinproduction
    driver-class-name: org.postgresql.Driver
  jpa:
    hibernate:
      ddl-auto: none
  liquibase:
    enabled: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we can use the following command to trigger the migration process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./mvnw liquibase:update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Using Liquibase with Spring Boot offers a robust solution for managing database changes in a controlled and efficient manner. It enables developers to focus on delivering features without worrying about the complexities of database migrations, making it an essential tool for any Spring Boot-based project.&lt;/p&gt;

</description>
      <category>java</category>
      <category>springboot</category>
      <category>liquibase</category>
      <category>database</category>
    </item>
    <item>
      <title>Database Migrations : From Manual to Automated Management.</title>
      <dc:creator>Aymane Harmaz</dc:creator>
      <pubDate>Fri, 24 May 2024 20:31:00 +0000</pubDate>
      <link>https://dev.to/aharmaz/database-migrations-from-manual-to-automated-management-5ffj</link>
      <guid>https://dev.to/aharmaz/database-migrations-from-manual-to-automated-management-5ffj</guid>
      <description>&lt;p&gt;A database schema refers to the structure of its tables, their relationships, views, indexes, triggers as well as other objects, And as developers, we often want to perform modifications on that structure to keep it in sync with the new features of the software using it, these modification are called database migrations because they migrate the database from one state into another.&lt;br&gt;
There are various approaches to execute those migrations, and in this post we will cover both the old-school and modern approaches for running them.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-World Database Migration Context&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In a real-world project, each developer will typically have a local database on his local machine for testing the features he will implement, and there are other databases deployed on environments like development, staging and production.&lt;/p&gt;

&lt;p&gt;During the development phase, each developer will need to run migrations on his local database, and when preparing the release of a new version of a software, the approved migrations since the last release must be gathered and executed against the concerned database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0av8kewa1atjz7fu42hl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0av8kewa1atjz7fu42hl.png" alt="Real-world project with muliple database environments"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Old-School Approach of Running Database Migrations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Back in the day, developers wrote SQL scripts and ran them manually against their local databases. These scripts were shared in the remote repository so that other team members could use them, and during a release, the person in charge would manually execute them on the target database.&lt;/p&gt;

&lt;p&gt;This approach was challenging. Keeping track of which migration scripts had already been executed against which database was really hard, because there was no way to know the actual state of the target database; as a result it was common to run a script more than once against the same database, or to forget to execute a specific script. There was also no framework for specifying the order in which the migration scripts must be executed. These situations created a fear of migrations among developers and the people responsible for deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managed Migration to the rescue&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tools for managing database migrations have gained popularity across different programming ecosystems. These tools were introduced to manage the migration process and reduce the effort required from developers.&lt;/p&gt;

&lt;p&gt;The concept behind these tools is to provide a clean and robust way to structure migrations across multiple databases by enforcing ordering and database versioning. Developers no longer need to run migrations directly against databases; instead they configure these tools to do the work for them in an intelligent manner, ensuring that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A migration script is executed only once against a database&lt;/li&gt;
&lt;li&gt;The migration scripts are executed in the configured order&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Under the hood these tools create a dedicated table on the target database, and each time a migration is executed, a record is added to this table with details about that migration, so that when the migration process is initiated again there will be a check on that table to determine which migrations have already been applied and which are pending.&lt;/p&gt;
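&lt;p&gt;The logic described above can be sketched in a few lines (a simplified illustration of the mechanism, not the actual implementation of any of these tools; the script names are made up):&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of what managed migration tools do under the hood: consult a
// history table, run only the pending migrations, in the configured order,
// and record each executed migration so it is never run twice.
public class MigrationRunner {

    static List<String> runPending(List<String> orderedScripts,
                                   Set<String> historyTable) {
        List<String> executed = new ArrayList<>();
        for (String script : orderedScripts) {   // enforced order
            if (historyTable.contains(script)) {
                continue;                        // already applied: skip
            }
            // ...the real tool would execute the SQL against the database here...
            historyTable.add(script);            // record in the history table
            executed.add(script);
        }
        return executed;
    }

    public static void main(String[] args) {
        // V1 was applied in a previous run; only V2 is pending
        Set<String> history = new HashSet<>(List.of("V1__init.sql"));
        List<String> scripts = List.of("V1__init.sql", "V2__add_column.sql");

        System.out.println(runPending(scripts, history)); // [V2__add_column.sql]
        System.out.println(runPending(scripts, history)); // [] : nothing pending
    }
}
```

&lt;p&gt;The second invocation finds an up-to-date history table and executes nothing, which is exactly why re-running the migration process is safe with these tools.&lt;/p&gt;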

&lt;p&gt;For the Java ecosystem, the most commonly used ones are Flyway and Liquibase; for the Python ecosystem there is Alembic, and in the JavaScript world there are Knex.js and TypeORM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajdscp3huzgtkmu4avkw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajdscp3huzgtkmu4avkw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration of Managed Migration Tools at the Development and the Deployment phase&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Managed database migration tools can be configured to launch the migration process either at application startup or independently from the software they are related to.&lt;/p&gt;

&lt;p&gt;During the development phase, developers frequently need to edit the schema of their local databases. Therefore, it is recommended to configure these managed tools to start the migration as part of the application startup process, so developers don’t have to launch it manually repeatedly.&lt;/p&gt;

&lt;p&gt;During the deployment phase, migrations need to be launched only once. Thus, running the migration manually using a single command is recommended.&lt;/p&gt;

&lt;p&gt;In real-world projects, both approaches are used together. You can configure how you want the migration to be launched (at application startup or independently) based on the profile with which the application is executed and the environment on which the software is running. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Database migrations are essential for keeping your database schema in sync with evolving application features. Transitioning from manual to automated migrations simplifies this process and reduces errors.&lt;/p&gt;

&lt;p&gt;By using these automated migration tools, you can integrate migrations into your development and deployment workflows, making database management more efficient. This allows you to focus on developing new features without worrying about migration issues.&lt;/p&gt;

</description>
      <category>database</category>
      <category>java</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Spring Boot Testing : Basic Concepts and Categories of Tests</title>
      <dc:creator>Aymane Harmaz</dc:creator>
      <pubDate>Wed, 01 May 2024 18:27:12 +0000</pubDate>
      <link>https://dev.to/aharmaz/introduction-to-spring-boot-testing-m0</link>
      <guid>https://dev.to/aharmaz/introduction-to-spring-boot-testing-m0</guid>
      <description>&lt;p&gt;Spring Boot provides many features for testing purpose, and with this large panel of choices, Developers might get confused about each one’s intend, leading them to use features that are not suitable for specific situations, and as a result they end up with tests taking longer to run than intended. This blog aims to shed lights on the different ways with which we can test a modern Spring Boot application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spring Boot Starter for Testing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most of the dependencies used for testing are provided by the Spring Boot team in the form of a starter; in Maven this starter is identified by the following attributes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;  
  &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;  
  &amp;lt;artifactId&amp;gt;spring-boot-starter-test&amp;lt;/artifactId&amp;gt;  
  &amp;lt;version&amp;gt;{spring.boot.starter.test.version}&amp;lt;/version&amp;gt;  
  &amp;lt;scope&amp;gt;test&amp;lt;/scope&amp;gt;  
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Categories of Tests in Spring Boot&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Unit Test&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A unit test is about testing the behaviors or methods of a class in isolation from the classes on which it depends.&lt;/p&gt;

&lt;p&gt;In the context of Spring Boot, a unit test makes sense for only some classes of the application, such as business logic or mapping classes.&lt;/p&gt;

&lt;p&gt;Writing and running unit tests does not require Spring Boot; it only needs tools like JUnit, Mockito, and AssertJ.&lt;/p&gt;
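&lt;p&gt;To illustrate the idea without pulling in any framework, here is a plain-Java sketch of what a unit test expresses: the class under test contains only business logic, and its dependency is replaced by a hand-written stub (the role Mockito usually plays). All class and method names here are hypothetical:&lt;/p&gt;

```java
import java.util.List;

// Sketch of a unit test: PriceService is tested in isolation; its
// BookRepository dependency is replaced by a stub, so no database,
// no Spring context, and no framework is involved.
public class PriceServiceTest {

    interface BookRepository {                 // dependency of the class under test
        List<Double> findPrices();
    }

    static class PriceService {                // class under test: pure business logic
        private final BookRepository repository;
        PriceService(BookRepository repository) { this.repository = repository; }

        double averagePrice() {
            List<Double> prices = repository.findPrices();
            return prices.isEmpty() ? 0.0
                    : prices.stream().mapToDouble(Double::doubleValue)
                            .average().orElse(0.0);
        }
    }

    public static void main(String[] args) {
        // Stub standing in for the real repository
        BookRepository stub = () -> List.of(10.0, 20.0);
        PriceService service = new PriceService(stub);

        // The assertion a JUnit/AssertJ test would express
        System.out.println(service.averagePrice()); // prints 15.0
    }
}
```

&lt;p&gt;In a real project the same check would be a @Test method asserting on the result, with Mockito generating the stub; the structure of the test stays the same.&lt;/p&gt;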

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Integration Test&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unit tests are not enough; sometimes we need to test behaviors that involve more than one class. Examples could be testing the web layer to verify that the endpoints respond correctly to requests, or testing the data layer to verify that data is stored correctly in the database. That is where integration tests come into play.&lt;/p&gt;

&lt;p&gt;An integration test requires Spring Boot to start the application, trigger the auto-configurations and load the application context; that is why Spring Boot is a must for integration tests, in addition to tools such as JUnit, Mockito and AssertJ.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Categories of Integration Tests in Spring Boot&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Broad Integration Test (End To End Test)&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A broad integration test is about testing a behavior that involves classes across almost all the layers of the application used to serve a client request, from controllers to repositories.&lt;/p&gt;

&lt;p&gt;Spring Boot provides the @SpringBootTest annotation for creating the application context after scanning the bean definitions from the user configuration and auto-configuration classes.&lt;/p&gt;

&lt;p&gt;A broad integration test can be executed either with a mock web environment or with a real web server; the difference is that with the second approach we have to send real HTTP requests in our tests.&lt;br&gt;
We generally don't have to connect to external services or external databases: a web service should be mocked, and the database is usually replaced with an in-memory database, or even with a real instance running inside a Docker container using Testcontainers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Narrow Integration Test (Component Test)&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A narrow integration test is about testing a behavior involving only classes from a specific layer of the application, such as the web layer only or the data layer only. The good thing about these tests is that Spring Boot makes the application context scan only parts of the user configuration and parts of the auto-configurations; as a result only a slice of the configuration is loaded into the application context, which makes the test much faster than loading the whole configuration as a broad integration test does.&lt;/p&gt;

&lt;p&gt;The most commonly used annotations provided by Spring Boot for narrow integration testing are :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;@WebMvcTest for testing the web layer&lt;/li&gt;
&lt;li&gt;@DataJpaTest for testing the data layer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are other annotations for other slices, such as @JsonTest, @WebFluxTest, @DataJdbcTest, @DataMongoTest and @RestClientTest.&lt;/p&gt;

</description>
      <category>java</category>
      <category>springboot</category>
      <category>spring</category>
      <category>testing</category>
    </item>
  </channel>
</rss>
