<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nikolay Stanchev</title>
    <description>The latest articles on DEV Community by Nikolay Stanchev (@nikolay_stanchev).</description>
    <link>https://dev.to/nikolay_stanchev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F884746%2Fb3b3922e-6db0-4eb5-bac4-01c7679ec357.jpg</url>
      <title>DEV Community: Nikolay Stanchev</title>
      <link>https://dev.to/nikolay_stanchev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nikolay_stanchev"/>
    <language>en</language>
    <item>
      <title>Integrating a Java REST API With a Database</title>
      <dc:creator>Nikolay Stanchev</dc:creator>
      <pubDate>Sun, 31 Jul 2022 15:35:25 +0000</pubDate>
      <link>https://dev.to/nikolay_stanchev/integrating-a-java-rest-api-with-a-database-17e1</link>
      <guid>https://dev.to/nikolay_stanchev/integrating-a-java-rest-api-with-a-database-17e1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This article is a follow-up to my previous &lt;a href="https://dev.to/nikolay_stanchev/step-by-step-tutorial-for-building-a-rest-api-in-java-2fna"&gt;tutorial&lt;/a&gt; on building a fully functional Java REST API for managing TODO tasks. For the sake of simplicity, last time we used an in-memory database as the implementation of the storage interface defined by our business logic. This solution is clearly not feasible for a production-ready API, mainly because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;we will lose the stored tasks every time we stop the application&lt;/li&gt;
&lt;li&gt;different application instances will not be able to share data - in other words, if we horizontally scale our service across two servers, A and B, then tasks created through server A will not be visible through server B.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is illustrated in the diagram below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lp6vw97g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1659095308914/IXVI2Eq3t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lp6vw97g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1659095308914/IXVI2Eq3t.png" alt="Screenshot 2022-07-29 at 14.47.59.png" width="880" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this tutorial I want to demonstrate how we can integrate the API we built as part of the previous &lt;a href="https://dev.to/nikolay_stanchev/step-by-step-tutorial-for-building-a-rest-api-in-java-2fna"&gt;article&lt;/a&gt; with a popular open-source non-relational database: MongoDB. This means changing the diagram above to the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rjNMLJ4u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1659095508226/BJRMFnNot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rjNMLJ4u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1659095508226/BJRMFnNot.png" alt="Screenshot 2022-07-29 at 14.51.16.png" width="880" height="711"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I chose MongoDB for simplicity, but technically we could have also chosen a relational database such as PostgreSQL. To choose between the two properly, we would need to know more about the requirements of our service - things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;scalability - how much the application is expected to grow in the future&lt;/li&gt;
&lt;li&gt;expected traffic patterns - how many clients are going to use the service, at what request rate, with what read-to-write ratio, etc.&lt;/li&gt;
&lt;li&gt;access patterns - do we have well-defined access patterns requested by our business stakeholders, or do we need an approach that allows for general search queries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The API specification is not enough on its own to make this decision, but for the purpose of this tutorial, we can go ahead and use MongoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Current State
&lt;/h2&gt;

&lt;p&gt;A quick reminder of what we built before is given below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a repository interface defining the functionality our business logic requires for storing and retrieving tasks
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface TaskManagementRepository {

    void save(Task task);

    List&amp;lt;Task&amp;gt; getAll();

    Optional&amp;lt;Task&amp;gt; get(String taskID);

    void delete(String taskID);
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;an in-memory DB implementation of the repository interface that uses a hash map to store and retrieve tasks
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class InMemoryTaskManagementRepository implements TaskManagementRepository {

    private Map&amp;lt;String, Task&amp;gt; tasks = new HashMap&amp;lt;&amp;gt;();

    @Override
    public void save(Task task) {
        tasks.put(task.getIdentifier(), task);
    }

    @Override
    public List&amp;lt;Task&amp;gt; getAll() {
        return tasks.values().stream()
                .collect(Collectors.toUnmodifiableList());
    }

    @Override
    public Optional&amp;lt;Task&amp;gt; get(String taskID) {
        return Optional.ofNullable(tasks.get(taskID));
    }

    @Override
    public void delete(String taskID) {
        tasks.remove(taskID);
    }

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;a Guice binding that ensures every piece of business logic that needs to use a &lt;strong&gt;TaskManagementRepository&lt;/strong&gt; instance will use the same &lt;strong&gt;InMemoryTaskManagementRepository&lt;/strong&gt; instance
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class ApplicationModule extends AbstractModule {

    @Override
    public void configure() {
        bind(TaskManagementRepository.class).to(InMemoryTaskManagementRepository.class).in(Singleton.class);
    }

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Target State
&lt;/h2&gt;

&lt;p&gt;The goal of this tutorial is to implement a new class called &lt;strong&gt;MongoDBTaskManagementRepository&lt;/strong&gt; which will be an implementation of the repository interface that uses MongoDB for persistent storage.&lt;/p&gt;

&lt;p&gt;We are going to run the DB as a separate Docker container. Therefore, we will now have an application made up of two containers - the service container and the DB container. We will use &lt;a href="https://docs.docker.com/compose/"&gt;Docker Compose&lt;/a&gt; to manage the multi-container application as a single entity. Keep in mind that this is not a Docker-focused tutorial, so I won't be covering best practices for setting up a database with Docker or how to use Docker and Docker Compose in general. Feel free to leave a comment if you would like to see a separate article on this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database Integration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  MongoDB Container Setup
&lt;/h3&gt;

&lt;p&gt;We start by pulling a MongoDB Docker image and setting up our Docker Compose file to start two containers - one for the service itself and one for a MongoDB instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull mongo:5.0.9

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Docker Compose file is given below. Notice that we mount a persistent volume to the MongoDB container, since we don't want the data to be wiped out whenever the container crashes or is rebuilt. This means that any data stored in &lt;code&gt;/data/db&lt;/code&gt; inside the MongoDB container will be persisted under &lt;code&gt;.data/mongo&lt;/code&gt; (relative to the project root folder) on the host machine. Keep in mind that &lt;code&gt;/data/db&lt;/code&gt; is where the MongoDB container stores data by default.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3.9"
services:
  mongo:
    image: "mongo:5.0.9"
    volumes:
      - .data/mongo:/data/db
  webapp:
    build: .
    depends_on:
      - "mongo"
    ports:
      - "8080:8080"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To start the application, we run &lt;code&gt;docker-compose up --build -d&lt;/code&gt; which effectively does three things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;creates a container running the MongoDB image - using the default MongoDB port 27017 for connections &lt;/li&gt;
&lt;li&gt;re-builds the image for our REST API and creates a container running the service - the container will have port 8080 exposed so that you can connect from your laptop to the API &lt;/li&gt;
&lt;li&gt;creates a virtual network connecting the two containers &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To stop the application and clean up the resources (containers and network), run &lt;code&gt;docker-compose down&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The full commit for this step can be found &lt;a href="https://github.com/nikist97/TaskManagementService/commit/587c6fb71c6ddb30f066bfd91aa345a9c7dd6a3e"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Repository Interface Implementation
&lt;/h3&gt;

&lt;p&gt;Now that we have a MongoDB instance running, we need to create a new class - &lt;strong&gt;MongoDBTaskManagementRepository&lt;/strong&gt; - that will implement the repository interface.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class MongoDBTaskManagementRepository implements TaskManagementRepository {

    private final MongoCollection&amp;lt;MongoDBTask&amp;gt; tasksCollection;

    @Inject
    public MongoDBTaskManagementRepository(MongoCollection&amp;lt;MongoDBTask&amp;gt; tasksCollection) {
        this.tasksCollection = tasksCollection;
    }

    @Override
    public void save(Task task) {
        MongoDBTask mongoDBTask = toMongoDBTask(task);
        ReplaceOptions replaceOptions = new ReplaceOptions()
                .upsert(true);
        tasksCollection.replaceOne(eq("_id", task.getIdentifier()), mongoDBTask, replaceOptions);
    }

    @Override
    public List&amp;lt;Task&amp;gt; getAll() {
        FindIterable&amp;lt;MongoDBTask&amp;gt; mongoDBTasks = tasksCollection.find();

        List&amp;lt;Task&amp;gt; tasks = new ArrayList&amp;lt;&amp;gt;();
        for (MongoDBTask mongoDBTask: mongoDBTasks) {
            tasks.add(fromMongoDBTask(mongoDBTask));
        }

        return tasks;
    }

    @Override
    public Optional&amp;lt;Task&amp;gt; get(String taskID) {
        Optional&amp;lt;MongoDBTask&amp;gt; mongoDBTask = Optional.ofNullable(
                tasksCollection.find(eq("_id", taskID)).first());

        return mongoDBTask.map(this::fromMongoDBTask);
    }

    @Override
    public void delete(String taskID) {
        Document taskIDFilter = new Document("_id", taskID);
        tasksCollection.deleteOne(taskIDFilter);
    }

    private Task fromMongoDBTask(MongoDBTask mongoDBTask) {
        return Task.builder(mongoDBTask.getTitle(), mongoDBTask.getDescription())
                .withIdentifier(mongoDBTask.getIdentifier())
                .withCompleted(mongoDBTask.isCompleted())
                .withCreatedAt(Instant.ofEpochMilli(mongoDBTask.getCreatedAt()))
                .build();
    }

    private MongoDBTask toMongoDBTask(Task task) {
        MongoDBTask mongoDBTask = new MongoDBTask();
        mongoDBTask.setIdentifier(task.getIdentifier());
        mongoDBTask.setTitle(task.getTitle());
        mongoDBTask.setDescription(task.getDescription());
        mongoDBTask.setCreatedAt(task.getCreatedAt().toEpochMilli());
        mongoDBTask.setCompleted(task.isCompleted());

        return mongoDBTask;
    }

    private static class MongoDBTask {
        @BsonProperty("_id")
        private String identifier;

        private String title;

        private String description;

        @BsonProperty("created_at")
        private long createdAt;

        private boolean completed;
        ...
    }
...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Given that this is not a MongoDB-specific tutorial, I skipped a few implementation details around the MongoDB Java driver, but in essence, we are using a very basic implementation that maps the internal &lt;code&gt;MongoDBTask&lt;/code&gt; POJO to a MongoDB document. For more details, see the official &lt;a href="https://www.mongodb.com/developer/languages/java/java-mapping-pojos/"&gt;tutorial&lt;/a&gt;.&lt;/p&gt;
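&lt;p&gt;One detail worth calling out in the mapping above is that &lt;code&gt;createdAt&lt;/code&gt; is converted from an &lt;code&gt;Instant&lt;/code&gt; to epoch milliseconds before being stored, and converted back when reading. The small self-contained sketch below (independent of the driver; the class name is made up for illustration) shows that millisecond-precision instants survive this round trip exactly, while any finer precision - such as the micro/nanoseconds &lt;code&gt;Instant.now()&lt;/code&gt; may carry - is silently truncated, which is harmless for our use case:&lt;/p&gt;

```java
import java.time.Instant;

public class EpochMilliRoundTrip {
    public static void main(String[] args) {
        // Millisecond-precision instants survive the round trip exactly...
        Instant exact = Instant.parse("2022-07-29T09:20:25.410Z");
        System.out.println(Instant.ofEpochMilli(exact.toEpochMilli()).equals(exact)); // true

        // ...but anything finer than a millisecond is truncated.
        Instant fine = Instant.parse("2022-07-29T09:20:25.410123Z");
        System.out.println(Instant.ofEpochMilli(fine.toEpochMilli())); // 2022-07-29T09:20:25.410Z
    }
}
```

&lt;p&gt;Storing the timestamp as a plain BSON long is just one option; the driver can also map &lt;code&gt;java.util.Date&lt;/code&gt; fields to native BSON dates, which makes date-based queries more convenient.&lt;/p&gt;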

&lt;p&gt;The full commit for this step can be found &lt;a href="https://github.com/nikist97/TaskManagementService/commit/b79ccab2341c4463c18a2905d5b58a87c13e989f"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Binding Everything Together
&lt;/h3&gt;

&lt;p&gt;You might have noticed that the MongoDB repository implementation relies on a &lt;code&gt;MongoCollection&amp;lt;MongoDBTask&amp;gt;&lt;/code&gt; instance being injected. This class is the driver's abstraction for interacting with an actual MongoDB collection. In this section, we will write the code that connects to the database and binds an instance of this class. Since we are using Guice for dependency injection, we will encapsulate this logic in its own Guice module and then install the new module in the main application module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class MongoDBModule extends AbstractModule {

    @Provides
    private MongoCollection&amp;lt;MongoDBTaskManagementRepository.MongoDBTask&amp;gt; provideMongoCollection() {
        ConnectionString connectionString = new ConnectionString(System.getenv("MongoDB_URI"));

        CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());
        CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);

        MongoClientSettings mongoClientSettings = MongoClientSettings.builder()
                .applyConnectionString(connectionString)
                .codecRegistry(codecRegistry)
                .build();

        MongoClient mongoClient = MongoClients.create(mongoClientSettings);
        MongoDatabase mongoDatabase = mongoClient.getDatabase("tasks_management_db");
        return mongoDatabase.getCollection("tasks", MongoDBTaskManagementRepository.MongoDBTask.class);
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To use this new module, we need to install it in our main application module and change the binding to use the MongoDB repository implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class ApplicationModule extends AbstractModule {

    @Override
    public void configure() {
        bind(TaskManagementRepository.class).to(MongoDBTaskManagementRepository.class).in(Singleton.class);

        install(new MongoDBModule());
    }

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see in the MongoDB module that we are using a database named &lt;strong&gt;tasks_management_db&lt;/strong&gt; and a collection named &lt;strong&gt;tasks&lt;/strong&gt;, but we never actually created either of them. The reason is that MongoDB automatically creates both the database and the collection as soon as we insert the first document (i.e. when we create the first task through our API).&lt;/p&gt;

&lt;p&gt;The final piece for binding everything together is to configure the &lt;strong&gt;MongoDB_URI&lt;/strong&gt; environment variable used for connecting to the database in the task management service container. This can be done through the Docker Compose file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3.9"
services:
  webapp:
    ...
    environment:
      - MongoDB_URI=mongodb://taskmanagementservice_mongo_1:27017
    ...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The URI is built from the default port the MongoDB container listens on (27017) and the default hostname convention Docker Compose uses when naming containers - the project name, the service name and the instance index.&lt;/p&gt;
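&lt;p&gt;To make that convention concrete, the hostname can be derived as sketched below. Note that this is the Compose v1 CLI naming scheme with underscores; newer Compose versions join the parts with hyphens instead (e.g. &lt;code&gt;taskmanagementservice-mongo-1&lt;/code&gt;), so the exact hostname depends on your Compose version:&lt;/p&gt;

```shell
# Compose v1 names containers "project_service_index", where the project
# name defaults to the lowercased directory name of the project.
project="taskmanagementservice"   # directory: TaskManagementService
service="mongo"                   # service name from docker-compose.yml
index="1"                         # first (and only) instance of the service

MONGODB_HOST="${project}_${service}_${index}"
MONGODB_URI="mongodb://${MONGODB_HOST}:27017"   # 27017 = default MongoDB port
echo "${MONGODB_URI}"
```

&lt;p&gt;Relying on generated container names is somewhat fragile; an alternative is to set an explicit &lt;code&gt;container_name&lt;/code&gt; for the mongo service, or simply use the service name &lt;code&gt;mongo&lt;/code&gt; as the hostname, which Compose also resolves on the shared network.&lt;/p&gt;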

&lt;p&gt;The full commit for this step can be found &lt;a href="https://github.com/nikist97/TaskManagementService/commit/9e963b130976c29ef4f1b537e891d2a451d67cba"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing The Service
&lt;/h2&gt;

&lt;p&gt;Before we start testing, let's re-build the full stack from scratch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose down
docker-compose up --build -d

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, both the service and MongoDB should be up and running. We will once again use curl to test the CRUD API (ideally, these manual tests should be automated as integration tests, but we will leave that for a future article focused on testing):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;creating a few tasks
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -i -X POST -H "Content-Type:application/json" -d "{\"title\": \"test-title\", \"description\":\"description\"}" "http://localhost:8080/api/tasks" 

HTTP/1.1 201 
Location: http://localhost:8080/api/tasks/cb06d6a1-960b-47eb-b44b-de0b01303020
Content-Length: 0
Date: Fri, 29 Jul 2022 09:20:25 GMT

curl -i -X POST -H "Content-Type:application/json" -d "{\"title\": \"test-title\", \"description\":\"description\"}" "http://localhost:8080/api/tasks" 

HTTP/1.1 201 
Location: http://localhost:8080/api/tasks/3c4c7a7d-b680-4be5-9dd4-51d02225a700
Content-Length: 0
Date: Fri, 29 Jul 2022 09:20:32 GMT

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;retrieving a task
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -i -X GET "http://localhost:8080/api/tasks/cb06d6a1-960b-47eb-b44b-de0b01303020"

HTTP/1.1 200 
Content-Type: application/json
Content-Length: 159
Date: Fri, 29 Jul 2022 09:21:04 GMT

{"identifier":"cb06d6a1-960b-47eb-b44b-de0b01303020","title":"test-title","description":"description","createdAt":"2022-07-29T09:20:25.410Z","completed":false}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;retrieving a non-existing task
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -i -X GET "http://localhost:8080/api/tasks/random-task-id-123"

HTTP/1.1 404 
Content-Type: application/json
Content-Length: 81
Date: Fri, 29 Jul 2022 09:21:19 GMT

{"message":"Task with the given identifier cannot be found - random-task-id-123"}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;retrieving all tasks
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -i -X GET "http://localhost:8080/api/tasks" 

HTTP/1.1 200 
Content-Type: application/json
Content-Length: 321
Date: Fri, 29 Jul 2022 09:21:30 GMT

[{"identifier":"cb06d6a1-960b-47eb-b44b-de0b01303020","title":"test-title","description":"description","createdAt":"2022-07-29T09:20:25.410Z","completed":false},{"identifier":"3c4c7a7d-b680-4be5-9dd4-51d02225a700","title":"test-title","description":"description","createdAt":"2022-07-29T09:20:32.851Z","completed":false}]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;deleting a task
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -i -X DELETE "http://localhost:8080/api/tasks/3c4c7a7d-b680-4be5-9dd4-51d02225a700"                             

HTTP/1.1 204 
Date: Fri, 29 Jul 2022 09:22:11 GMT

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;patching a task
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -i -X PATCH -H "Content-Type:application/json" -d "{\"completed\": true, \"title\": \"new-title\", \"description\":\"new-description\"}" "http://localhost:8080/api/tasks/cb06d6a1-960b-47eb-b44b-de0b01303020"

HTTP/1.1 200 
Content-Length: 0
Date: Fri, 29 Jul 2022 09:22:34 GMT

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
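&lt;p&gt;As noted earlier, these manual curl calls are good candidates for automation. As a hint of what that could look like, here is a hedged sketch (the class and method names are made up for illustration) that builds the same create-task request using the JDK's built-in &lt;code&gt;java.net.http&lt;/code&gt; client shipped since Java 11. Actually sending the request with &lt;code&gt;HttpClient.send&lt;/code&gt; requires the Docker Compose stack to be running, so only the request construction is shown:&lt;/p&gt;

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class CreateTaskRequestSketch {

    // Builds the equivalent of:
    // curl -X POST -H "Content-Type:application/json" -d '{...}' http://localhost:8080/api/tasks
    static HttpRequest createTaskRequest(String baseUrl) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/tasks"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"title\": \"test-title\", \"description\": \"description\"}"))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = createTaskRequest("http://localhost:8080");
        // An integration test would send this via HttpClient and assert on
        // the 201 status code and the Location header.
        System.out.println(request.method() + " " + request.uri());
    }
}
```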



&lt;p&gt;So far, all the tests we executed look pretty much the same as the ones from the previous &lt;a href="https://nsnotes.hashnode.dev/step-by-step-tutorial-for-building-a-rest-api-in-java#heading-testing-the-service"&gt;article&lt;/a&gt;. However, previously restarting the service meant losing the stored data. Let's verify the new behaviour by rebuilding the full stack again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose down
docker-compose up --build -d


curl -i -X GET "http://localhost:8080/api/tasks" 

HTTP/1.1 200 
Content-Type: application/json
Content-Length: 163
Date: Fri, 29 Jul 2022 09:27:25 GMT

[{"identifier":"cb06d6a1-960b-47eb-b44b-de0b01303020","title":"new-title","description":"new-description","createdAt":"2022-07-29T09:20:25.410Z","completed":true}]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we can see, the data is now persistently stored - we still have the original task we initially created and then updated as part of testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In conclusion, what we've done as part of this tutorial is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;set up a MongoDB container&lt;/li&gt;
&lt;li&gt;use Docker Compose to manage a multi-container application&lt;/li&gt;
&lt;li&gt;integrate a Java REST API with a MongoDB database&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key point I wanted to show in this article is that switching from an in-memory DB implementation to a proper DB technology only required three things: creating a new implementation of the repository interface, configuring the DB connection inside a new Guice module, and changing one line of existing code so that our business logic uses the new repository implementation.&lt;/p&gt;

&lt;p&gt;We changed the Guice binding from:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bind(TaskManagementRepository.class).to(InMemoryTaskManagementRepository.class).in(Singleton.class);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bind(TaskManagementRepository.class).to(MongoDBTaskManagementRepository.class).in(Singleton.class);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We never touched any of the core business logic of our API, and this is the beauty of clean architecture - we don't couple our code to the fact that we chose to persist our data in a database rather than in memory. The storage mechanism is an implementation detail, and changing it should not affect the rest of our code.&lt;/p&gt;

</description>
      <category>mongodb</category>
    </item>
    <item>
      <title>Step-By-Step Tutorial for Building a REST API in Java</title>
      <dc:creator>Nikolay Stanchev</dc:creator>
      <pubDate>Tue, 28 Jun 2022 17:27:37 +0000</pubDate>
      <link>https://dev.to/nikolay_stanchev/step-by-step-tutorial-for-building-a-rest-api-in-java-2fna</link>
      <guid>https://dev.to/nikolay_stanchev/step-by-step-tutorial-for-building-a-rest-api-in-java-2fna</guid>
      <description>&lt;h2&gt;
  
  
  Motivation
&lt;/h2&gt;

&lt;p&gt;Having seen many tutorials on how to build REST APIs in Java using various combinations of frameworks and libraries, I decided to build my own API using the software suite that I have the most experience with. In particular, I wanted to use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://maven.apache.org/"&gt;Maven&lt;/a&gt; as the build and dependency management tool&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://eclipse-ee4j.github.io/jersey/"&gt;Jersey&lt;/a&gt; as the framework that provides implementation of the &lt;a href="https://jakarta.ee/specifications/restful-ws/"&gt;JAX-RS&lt;/a&gt; specification&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://tomcat.apache.org/"&gt;Tomcat&lt;/a&gt; as the application server

&lt;ul&gt;
&lt;li&gt;in particular, I wanted to run Tomcat in embedded mode so that I would end up with a simple executable jar file&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/google/guice"&gt;Guice&lt;/a&gt; as the dependency injection framework&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problem I faced was that I couldn't find any tutorials combining the software choices above, so I had to go through the process of combining the pieces myself. This didn't turn out to be a particularly straightforward task, which is why I decided to document the process on my blog and share it with others who might be facing similar problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Summary
&lt;/h2&gt;

&lt;p&gt;For the purpose of this tutorial, we are going to build the standard API for managing TODO items - i.e. a CRUD API that supports &lt;strong&gt;C&lt;/strong&gt;reating, &lt;strong&gt;R&lt;/strong&gt;etrieving, &lt;strong&gt;U&lt;/strong&gt;pdating and &lt;strong&gt;D&lt;/strong&gt;eleting tasks.&lt;/p&gt;

&lt;p&gt;The API specification is given below: &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ILi2AVAU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1655969913570/j5ms9vDkX.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ILi2AVAU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1655969913570/j5ms9vDkX.png" alt="Screenshot 2022-06-23 at 10.37.36.png" width="880" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The full specification can be viewed in the Appendix.&lt;/p&gt;

&lt;p&gt;To implement this API, we will use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Java 11 (OpenJDK)&lt;/li&gt;
&lt;li&gt;Apache Maven v3.8.6 &lt;/li&gt;
&lt;li&gt;Eclipse Jersey v2.35&lt;/li&gt;
&lt;li&gt;Apache Tomcat v9.0.62&lt;/li&gt;
&lt;li&gt;Guice v4.2.3&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the purpose of simplicity, I will avoid the use of any databases as part of this tutorial and instead use a pseudo in-memory DB. However, we will see how easy it is to switch from an in-memory testing DB to an actual database when following a &lt;a href="https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html"&gt;clean architecture&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The goal is to end up with an executable jar file generated by Maven that will include the Tomcat application server and our API implementation. We will then dockerize the entire process of generating the file and executing it, and finally run the service as a Docker container.&lt;/p&gt;

&lt;p&gt;The following coding steps will only outline the most relevant pieces of code for the purpose of this tutorial, but you can find the full code in the &lt;a href="https://github.com/nikist97/TaskManagementService"&gt;GitHub repository&lt;/a&gt;. For most steps, we will add unit tests that won't be shown here but are included in the corresponding code changes. To run the tests at any given point in time, you can use &lt;code&gt;mvn clean test&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Coding Steps
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1 - Project Setup
&lt;/h3&gt;

&lt;p&gt;As with every Maven project, we need a POM file (the file representing the &lt;a href="https://maven.apache.org/pom.html#What_is_the_POM"&gt;&lt;strong&gt;P&lt;/strong&gt;roject &lt;strong&gt;O&lt;/strong&gt;bject &lt;strong&gt;M&lt;/strong&gt;odel&lt;/a&gt;). We start with a very basic POM which describes the project information and sets the compiler source and target versions to 11. This means that the project can use Java 11 language features (but no features from later versions) and will require a JRE of version 11 or later to run. To avoid registering a domain name for this example project, I am using a group ID that corresponds to my GitHub username, where this project will be hosted - &lt;code&gt;com.github.nikist97&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;

&amp;lt;project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"&amp;gt;
    &amp;lt;modelVersion&amp;gt;4.0.0&amp;lt;/modelVersion&amp;gt;

    &amp;lt;!-- Project Information --&amp;gt;
    &amp;lt;groupId&amp;gt;com.github.nikist97&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;TaskManagementService&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;1.0-SNAPSHOT&amp;lt;/version&amp;gt;
    &amp;lt;packaging&amp;gt;jar&amp;lt;/packaging&amp;gt;

    &amp;lt;name&amp;gt;TaskManagementService&amp;lt;/name&amp;gt;

    &amp;lt;properties&amp;gt;
        &amp;lt;!-- Maven-related properties used during the build process --&amp;gt;
        &amp;lt;project.build.sourceEncoding&amp;gt;UTF-8&amp;lt;/project.build.sourceEncoding&amp;gt;
        &amp;lt;maven.compiler.source&amp;gt;11&amp;lt;/maven.compiler.source&amp;gt;
        &amp;lt;maven.compiler.target&amp;gt;11&amp;lt;/maven.compiler.target&amp;gt;
    &amp;lt;/properties&amp;gt;

    &amp;lt;dependencies&amp;gt;
        &amp;lt;!-- This is where we will declare libraries our project depends on --&amp;gt;
    &amp;lt;/dependencies&amp;gt;

    &amp;lt;build&amp;gt;
        &amp;lt;plugins&amp;gt;
            &amp;lt;!-- This is where we will declare plugins our project needs for the build process --&amp;gt;
        &amp;lt;/plugins&amp;gt;
    &amp;lt;/build&amp;gt;
&amp;lt;/project&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full commit for this step can be found &lt;a href="https://github.com/nikist97/TaskManagementService/commit/36c4d1136d69cd0d2cb0ecee16b504d2f80b43d5"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2 - Implementing the Business Logic
&lt;/h3&gt;

&lt;p&gt;We start with the most critical piece of software, our business logic. Ideally, this layer should be agnostic to any DB technologies or API protocols. Whether we implement an HTTP API backed by MongoDB or a command-line tool backed by PostgreSQL should not affect the code of our business logic. In other words, the business logic should &lt;strong&gt;not depend&lt;/strong&gt; on the persistence layer (the code interacting with the database) or the API layer (the code defining the HTTP API endpoints).&lt;/p&gt;

&lt;p&gt;The first thing to implement is our main entity class - &lt;strong&gt;Task&lt;/strong&gt;. This class follows the builder pattern and provides argument validation. The required attributes are the task's &lt;strong&gt;title&lt;/strong&gt; and &lt;strong&gt;description&lt;/strong&gt;. The remaining attributes default to sensible values when not explicitly provided:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;identifier&lt;/strong&gt; is set to a random UUID&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;createdAt&lt;/strong&gt; is set to the current date time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;completed&lt;/strong&gt; is set to false
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class Task {

    private final String identifier;
    private final String title;
    private final String description;
    private final Instant createdAt;
    private final boolean completed;

    ...

    public static class TaskBuilder {

        ...

        private TaskBuilder(String title, String description) {
            validateArgNotNullOrBlank(title, "title");
            validateArgNotNullOrBlank(description, "description");

            this.title = title;
            this.description = description;
            this.identifier = UUID.randomUUID().toString();
            this.createdAt = Instant.now();
            this.completed = false;
        }

        ...

    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
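&lt;p&gt;The validation helper itself is elided above. One plausible implementation is sketched below (the helper name comes from the &lt;strong&gt;TaskBuilder&lt;/strong&gt; snippet; the body and the demo class are assumptions for illustration):&lt;/p&gt;

```java
// Sketch of the elided validateArgNotNullOrBlank helper. The name is taken
// from the TaskBuilder snippet; the body below is an assumption.
public class ValidationDemo {

    static void validateArgNotNullOrBlank(String value, String argName) {
        if (value == null || value.isBlank()) {
            throw new IllegalArgumentException(argName + " must not be null or blank");
        }
    }

    public static void main(String[] args) {
        validateArgNotNullOrBlank("Buy groceries", "title");   // valid: no exception

        boolean rejected = false;
        try {
            validateArgNotNullOrBlank("   ", "description");   // blank value is rejected
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println("blank description rejected: " + rejected);
    }
}
```

&lt;p&gt;Note that &lt;code&gt;String.isBlank()&lt;/code&gt; is available from Java 11, which matches the compiler source/target declared in our POM.&lt;/p&gt;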



&lt;p&gt;Then, we define the interface we need for interacting with a persistence layer (i.e. a database or another storage mechanism). Notice that this interface belongs to the business layer because, ultimately, it is the business logic that decides what storage functionality we will need. The actual implementation of this interface, though (a MongoDB implementation or an in-memory DB or something else) will belong to the persistence layer, which we will implement in a subsequent step.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface TaskManagementRepository {

    void save(Task task);

    List&amp;lt;Task&amp;gt; getAll();

    Optional&amp;lt;Task&amp;gt; get(String taskID);

    void delete(String taskID);
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we implement the service class, which has the CRUD logic. The critical piece here is that this class doesn't rely on a concrete implementation of the repository interface - it is agnostic to what DB technology we decide to use later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class TaskManagementService {

    private final TaskManagementRepository repository;

    ...

    public Task create(String title, String description) {
        Task task = Task.builder(title, description).build();

        repository.save(task);

        return task;
    }

    public Task update(String taskID, TaskUpdate taskUpdate) {
        Task oldTask = retrieve(taskID);

        Task newTask = oldTask.update(taskUpdate);
        repository.save(newTask);

        return newTask;
    }

    public List&amp;lt;Task&amp;gt; retrieveAll() {
        return repository.getAll();
    }

    public Task retrieve(String taskID) {
        return repository.get(taskID).orElseThrow(() -&amp;gt;
                new TaskNotFoundException("Task with the given identifier cannot be found - " + taskID));
    }

    public void delete(String taskID) {
        repository.delete(taskID);
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Writing the code this way lets us easily unit test the business logic in isolation by mocking the behavior of the repository interface. To achieve this, we add two test dependencies to the POM file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        ...
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;junit&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;junit&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;4.11&amp;lt;/version&amp;gt;
            &amp;lt;scope&amp;gt;test&amp;lt;/scope&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.mockito&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;mockito-core&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;3.5.13&amp;lt;/version&amp;gt;
            &amp;lt;scope&amp;gt;test&amp;lt;/scope&amp;gt;
        &amp;lt;/dependency&amp;gt;
        ...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
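&lt;p&gt;To make this concrete, here is a sketch of such a unit test. The article's tests use Mockito; in this self-contained sketch the repository is stubbed by hand (with a lambda) so the example runs on its own, and the classes are trimmed-down versions of the snippets above:&lt;/p&gt;

```java
import java.util.Optional;

// Trimmed-down stand-ins for the classes shown earlier in this article.
class Task {
    final String identifier;
    Task(String identifier) { this.identifier = identifier; }
}

class TaskNotFoundException extends RuntimeException {
    TaskNotFoundException(String message) { super(message); }
}

interface TaskManagementRepository {
    Optional<Task> get(String taskID);
}

class TaskManagementService {
    private final TaskManagementRepository repository;
    TaskManagementService(TaskManagementRepository repository) { this.repository = repository; }

    Task retrieve(String taskID) {
        return repository.get(taskID).orElseThrow(() ->
                new TaskNotFoundException("Task with the given identifier cannot be found - " + taskID));
    }
}

public class TaskManagementServiceTest {
    public static void main(String[] args) {
        // Stub repository that never finds a task - the scenario under test.
        TaskManagementService service = new TaskManagementService(taskID -> Optional.empty());

        boolean threw = false;
        try {
            service.retrieve("missing-id");
        } catch (TaskNotFoundException e) {
            threw = true;
        }
        System.out.println("retrieve threw TaskNotFoundException: " + threw);
    }
}
```

&lt;p&gt;With Mockito, the hand-rolled stub would instead be a mocked interface configured with &lt;code&gt;when(repository.get("missing-id")).thenReturn(Optional.empty())&lt;/code&gt;.&lt;/p&gt;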



&lt;p&gt;The full commit for this step can be found &lt;a href="https://github.com/nikist97/TaskManagementService/commit/830233cb0b6cae95f66156e1794dee1c7cbdec7e"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3 - Creating Stub API Endpoints
&lt;/h3&gt;

&lt;p&gt;The next step is to implement the API layer. For this project, we are implementing an HTTP REST API using Jersey. Therefore, we start by adding the dependency in the POM file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        ...
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.glassfish.jersey.containers&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;jersey-container-servlet&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;2.35&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.glassfish.jersey.inject&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;jersey-hk2&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;2.35&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;
        ...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second dependency is needed as of Jersey 2.26 (see the &lt;a href="https://eclipse-ee4j.github.io/jersey.github.io/release-notes/2.26.html"&gt;release notes&lt;/a&gt;): from that version on, users must explicitly declare the dependency injection framework Jersey should use. Here we go with HK2, which is what previous releases used.&lt;/p&gt;

&lt;p&gt;Then we implement the resource class, which at this point contains only stub methods, each returning an HTTP 200 response with no body.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Path("/tasks")
public class TaskManagementResource {

    @POST
    public Response createTask() {
        return Response.ok().build();
    }

    @GET
    public Response getTasks() {
        return Response.ok().build();
    }

    @PATCH
    @Path("/{taskID}")
    public Response updateTask(@PathParam("taskID") String taskID) {
        return Response.ok().build();
    }

    @GET
    @Path("/{taskID}")
    public Response getTask(@PathParam("taskID") String taskID) {
        return Response.ok().build();
    }

    @DELETE
    @Path("/{taskID}")
    public Response deleteTask(@PathParam("taskID") String taskID) {

        return Response.ok().build();
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will also need an application config class to define the base URI for our API and to inform the framework about the task management resource class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@ApplicationPath("/api")
public class ApplicationConfig extends ResourceConfig {

    public ApplicationConfig() {
        register(TaskManagementResource.class);
    }

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full commit for this step can be found &lt;a href="https://github.com/nikist97/TaskManagementService/commit/993b443e9821af374250d0eff34ccb9be81d0307"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4 - Implementing the API Layer
&lt;/h3&gt;

&lt;p&gt;For this project, we will use JSON as the serialization format for HTTP requests and responses.&lt;/p&gt;

&lt;p&gt;In order to produce and consume JSON in our API, we need a library responsible for the JSON serialization and deserialization of POJOs. We are going to use Jackson. The library we need in order to integrate Jersey with Jackson is given below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        ...
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.glassfish.jersey.media&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;jersey-media-json-jackson&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;2.35&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;
        ...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we customize the behavior of the JSON object mapper used to serialize and deserialize the request and response POJOs. In this case, we disable ALLOW_COERCION_OF_SCALARS, which means the service won't attempt to coerce strings into numbers or booleans (e.g. &lt;code&gt;{"boolean_field":"true"}&lt;/code&gt; will be rejected).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Provider
public class JsonObjectMapperProvider implements ContextResolver&amp;lt;ObjectMapper&amp;gt; {

    private final ObjectMapper jsonObjectMapper;

    /**
     * Create a custom JSON object mapper provider.
     */
    public JsonObjectMapperProvider() {
        jsonObjectMapper = new ObjectMapper();
        jsonObjectMapper.disable(ALLOW_COERCION_OF_SCALARS);
    }

    @Override
    public ObjectMapper getContext(Class&amp;lt;?&amp;gt; type) {
        return jsonObjectMapper;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once again, we need to make Jersey aware of this provider class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@ApplicationPath("/api")
public class ApplicationConfig extends ResourceConfig {

    public ApplicationConfig() {
        register(TaskManagementResource.class);
        register(JsonObjectMapperProvider.class);
    }

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we define the request and response POJOs. I will skip the code for these classes, but in summary, we need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;TaskCreateRequest&lt;/strong&gt; - represents the JSON request body sent to the service when creating a new task&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TaskUpdateRequest&lt;/strong&gt; - represents the JSON request body sent to the service when updating an existing task &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TaskResponse&lt;/strong&gt; - represents the JSON response body sent to the client when retrieving task(s)&lt;/li&gt;
&lt;/ul&gt;
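&lt;p&gt;As a concrete illustration, a minimal &lt;strong&gt;TaskCreateRequest&lt;/strong&gt; might look like the sketch below. The field names are taken from the JSON bodies used in the curl examples later in this article; the exact shape in the repository may differ. Jackson's default configuration needs a no-args constructor and setters (or annotated constructor parameters) to deserialize the request body:&lt;/p&gt;

```java
// Sketch of the TaskCreateRequest POJO (assumed shape - field names taken
// from the JSON request bodies used in the curl examples).
public class TaskCreateRequest {

    private String title;
    private String description;

    // Jackson requires a no-args constructor for deserialization
    public TaskCreateRequest() { }

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }

    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }
}
```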

&lt;p&gt;The last part of this step is to replace the stub logic in the resource class with the actual API implementation that relies on the business logic encapsulated in the service class from step 2.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Path("/tasks")
public class TaskManagementResource {

    private final TaskManagementService service;

    public TaskManagementResource(TaskManagementService service) {
        this.service = service;
    }

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response createTask(TaskCreateRequest taskCreateRequest) {
        validateArgNotNull(taskCreateRequest, "task-create-request-body");

        Task task = service.create(taskCreateRequest.getTitle(), taskCreateRequest.getDescription());

        String taskID = task.getIdentifier();

        URI taskRelativeURI = URI.create("tasks/" + taskID);
        return Response.created(taskRelativeURI).build();
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List&amp;lt;TaskResponse&amp;gt; getTasks() {
        return service.retrieveAll().stream()
                .map(TaskResponse::new)
                .collect(Collectors.toUnmodifiableList());
    }

    @PATCH
    @Path("/{taskID}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response updateTask(@PathParam("taskID") String taskID, TaskUpdateRequest taskUpdateRequest) {
        validateArgNotNull(taskUpdateRequest, "task-update-request-body");

        TaskUpdate update = new TaskUpdate(taskUpdateRequest.getTitle(), taskUpdateRequest.getDescription(),
                taskUpdateRequest.isCompleted());

        service.update(taskID, update);

        return Response.ok().build();
    }

    @GET
    @Path("/{taskID}")
    @Produces(MediaType.APPLICATION_JSON)
    public TaskResponse getTask(@PathParam("taskID") String taskID) {
        Task task = service.retrieve(taskID);
        return new TaskResponse(task);
    }

    @DELETE
    @Path("/{taskID}")
    public Response deleteTask(@PathParam("taskID") String taskID) {
        service.delete(taskID);
        return Response.noContent().build();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full commit for this step can be found &lt;a href="https://github.com/nikist97/TaskManagementService/commit/cda2ff4ec39cdcba46272ed4eb35890bcf8933ff"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5 - Implementing the Storage Mechanism
&lt;/h3&gt;

&lt;p&gt;For simplicity, we are going to implement an in-memory version of the repository interface rather than relying on a database technology. The implementation stores all tasks inside a map, where the key is the task identifier and the value is the task itself. This is just enough for simple CRUD functionality.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class InMemoryTaskManagementRepository implements TaskManagementRepository {

    // use a thread-safe map - the servlet container handles requests concurrently
    private final Map&amp;lt;String, Task&amp;gt; tasks = new ConcurrentHashMap&amp;lt;&amp;gt;();

    @Override
    public void save(Task task) {
        tasks.put(task.getIdentifier(), task);
    }

    @Override
    public List&amp;lt;Task&amp;gt; getAll() {
        return tasks.values().stream()
                .collect(Collectors.toUnmodifiableList());
    }

    @Override
    public Optional&amp;lt;Task&amp;gt; get(String taskID) {
        return Optional.ofNullable(tasks.get(taskID));
    }

    @Override
    public void delete(String taskID) {
        tasks.remove(taskID);
    }

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full commit for this step can be found &lt;a href="https://github.com/nikist97/TaskManagementService/commit/d491f55f36e00f8fead99a2c1ed9a46685147e3d"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6 - Binding Everything Together
&lt;/h3&gt;

&lt;p&gt;Now that we have all the layers implemented, we need to bind them together with a dependency injection framework - in this case, we will use Guice to achieve that.&lt;/p&gt;

&lt;p&gt;We start by adding Guice as a dependency in the POM file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;com.google.inject&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;guice&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;4.2.3&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we create a simple Guice module to bind the in-memory DB implementation to the repository interface. This means that for all classes that depend on the repository interface, Guice will inject the in-memory DB class. We use the &lt;strong&gt;Singleton&lt;/strong&gt; scope because we want all classes that depend on the repository to reuse the same in-memory DB instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class ApplicationModule extends AbstractModule {

    @Override
    public void configure() {
        bind(TaskManagementRepository.class).to(InMemoryTaskManagementRepository.class).in(Singleton.class);
    }

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that if we decide to use an actual database, the code change is as simple as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;implementing the wrapper class for the DB we choose - e.g. &lt;strong&gt;MongoDBTaskManagementRepository&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;changing the binding above to point to the new implementation of the repository interface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we have the module implemented, we can add the &lt;strong&gt;@Inject&lt;/strong&gt; annotation to all classes whose constructor has a dependency that needs to be injected by Guice. These are the &lt;strong&gt;TaskManagementResource&lt;/strong&gt; and &lt;strong&gt;TaskManagementService&lt;/strong&gt; classes. The magic of Guice (and dependency injection in general) is that the module above is enough to build the entire tree of dependencies in our code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TaskManagementResource&lt;/strong&gt; depends on &lt;strong&gt;TaskManagementService&lt;/strong&gt; which depends on &lt;strong&gt;TaskManagementRepository&lt;/strong&gt;. Guice knows how to get an instance of the &lt;strong&gt;TaskManagementRepository&lt;/strong&gt; interface so following this chain it also knows how to get an instance of the &lt;strong&gt;TaskManagementService&lt;/strong&gt; and &lt;strong&gt;TaskManagementResource&lt;/strong&gt; classes.&lt;/p&gt;
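&lt;p&gt;This chain is exactly what we would otherwise wire by hand. The sketch below (trimmed-down stub classes, for illustration only) shows the manual wiring that Guice automates:&lt;/p&gt;

```java
// Stub versions of the three layers, just to show the dependency chain.
interface TaskManagementRepository { }
class InMemoryTaskManagementRepository implements TaskManagementRepository { }
class TaskManagementService {
    final TaskManagementRepository repository;
    TaskManagementService(TaskManagementRepository repository) { this.repository = repository; }
}
class TaskManagementResource {
    final TaskManagementService service;
    TaskManagementResource(TaskManagementService service) { this.service = service; }
}

public class ManualWiringDemo {
    public static void main(String[] args) {
        // Without DI, we would build the chain by hand, innermost first:
        TaskManagementRepository repository = new InMemoryTaskManagementRepository();
        TaskManagementService service = new TaskManagementService(repository);
        TaskManagementResource resource = new TaskManagementResource(service);
        System.out.println("wired: " + (resource.service.repository == repository));
    }
}
```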

&lt;p&gt;The final piece of work is to make Jersey aware of the Guice injector. Remember that Jersey uses HK2 as its dependency injection framework, so it relies on HK2 to build a &lt;strong&gt;TaskManagementResource&lt;/strong&gt; instance. For HK2 to do that, it needs to know about Guice's dependency injection container. To connect the two, we use the &lt;a href="https://javaee.github.io/hk2/guice-bridge.html"&gt;Guice/HK2 Bridge&lt;/a&gt;, which bridges the Guice container (the Injector class) into the HK2 container (the ServiceLocator class).&lt;/p&gt;

&lt;p&gt;So we declare a dependency on the Guice/HK2 bridge library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        ...
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.glassfish.hk2&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;guice-bridge&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;2.6.1&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;
        ...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we change the &lt;strong&gt;ApplicationConfig&lt;/strong&gt; class to create the bridge between Guice and HK2. Notice that since the &lt;strong&gt;ApplicationConfig&lt;/strong&gt; class is used by Jersey (and thus managed by HK2) we can easily inject the &lt;strong&gt;ServiceLocator&lt;/strong&gt; instance (the HK2 container itself) into it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        @Inject
        public ApplicationConfig(ServiceLocator serviceLocator) {
            register(TaskManagementResource.class);
            register(JsonObjectMapperProvider.class);

            // bridge the Guice container (Injector) into the HK2 container (ServiceLocator)
            Injector injector = Guice.createInjector(new ApplicationModule());
            GuiceBridge.getGuiceBridge().initializeGuiceBridge(serviceLocator);
            GuiceIntoHK2Bridge guiceBridge = serviceLocator.getService(GuiceIntoHK2Bridge.class);
            guiceBridge.bridgeGuiceInjector(injector);
        }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full commit for this step can be found &lt;a href="https://github.com/nikist97/TaskManagementService/commit/fdb7e4bf8f6d15fd35e7cf8a746ac68141d59ed3"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7 - Creating the Application Launcher
&lt;/h3&gt;

&lt;p&gt;The final critical step is configuring and starting the application server through a launcher class, which will serve as our main class for the executable jar file we are targeting.&lt;/p&gt;

&lt;p&gt;We start with the code for starting an embedded Tomcat server. The dependency we need is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ...
    &amp;lt;dependency&amp;gt;
        &amp;lt;groupId&amp;gt;org.apache.tomcat.embed&amp;lt;/groupId&amp;gt;
        &amp;lt;artifactId&amp;gt;tomcat-embed-core&amp;lt;/artifactId&amp;gt;
        &amp;lt;version&amp;gt;9.0.62&amp;lt;/version&amp;gt;
    &amp;lt;/dependency&amp;gt;
    ...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we need a launcher class. This class is responsible for starting the embedded Tomcat server and registering a servlet container for the resource config we defined earlier (when we registered the resource class).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class Launcher {

    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();

        // configure server port number
        tomcat.setPort(8080);

        // remove defaulted JSP configs
        tomcat.setAddDefaultWebXmlToWebapp(false);

        // add the web app
        StandardContext ctx = (StandardContext) tomcat.addWebapp("/", new File(".").getAbsolutePath());
        ResourceConfig resourceConfig = new ResourceConfig(ApplicationConfig.class);
        Tomcat.addServlet(ctx, "jersey-container-servlet", new ServletContainer(resourceConfig));
        ctx.addServletMappingDecoded("/*", "jersey-container-servlet");

        // start the server
        tomcat.start();
        System.out.println("Server listening on " + tomcat.getHost().getName() + ":" + tomcat.getConnector().getPort());
        tomcat.getServer().await();
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are using IntelliJ for this project, you should ideally be able to run the main method of the Launcher class. There is one caveat here - since JDK 9 (and the introduction of the Java Platform Module System), reflective access is only allowed to publicly exported packages. This means that Guice will fail at runtime because it uses reflection to access JDK internals. See this StackOverflow &lt;a href="https://stackoverflow.com/questions/41265266/how-to-solve-inaccessibleobjectexception-unable-to-make-member-accessible-m"&gt;post&lt;/a&gt; for more information.&lt;/p&gt;

&lt;p&gt;The only workaround I have found so far is to add &lt;code&gt;--add-opens java.base/java.lang=ALL-UNNAMED&lt;/code&gt; as a JVM option to the run configuration of the main method, as suggested in the StackOverflow post linked above. This allows Guice to keep using reflection as it did in pre-JDK 9 releases.&lt;/p&gt;

&lt;p&gt;With the workaround in place and the launcher tested, we can generate an executable distribution that can be used to start the service. To achieve this, we use the appassembler plugin, which produces a start script alongside the application's JAR files. Note that we still need to pass the &lt;code&gt;--add-opens java.base/java.lang=ALL-UNNAMED&lt;/code&gt; JVM argument for the generated launcher to work.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;         ...
         &amp;lt;plugins&amp;gt;
            &amp;lt;plugin&amp;gt;
                &amp;lt;groupId&amp;gt;org.codehaus.mojo&amp;lt;/groupId&amp;gt;
                &amp;lt;artifactId&amp;gt;appassembler-maven-plugin&amp;lt;/artifactId&amp;gt;
                &amp;lt;version&amp;gt;2.0.0&amp;lt;/version&amp;gt;
                &amp;lt;configuration&amp;gt;
                    &amp;lt;assembleDirectory&amp;gt;target&amp;lt;/assembleDirectory&amp;gt;
                    &amp;lt;extraJvmArguments&amp;gt;--add-opens java.base/java.lang=ALL-UNNAMED&amp;lt;/extraJvmArguments&amp;gt;
                    &amp;lt;programs&amp;gt;
                        &amp;lt;program&amp;gt;
                            &amp;lt;mainClass&amp;gt;taskmanagement.Launcher&amp;lt;/mainClass&amp;gt;
                            &amp;lt;name&amp;gt;taskmanagement_webapp&amp;lt;/name&amp;gt;
                        &amp;lt;/program&amp;gt;
                    &amp;lt;/programs&amp;gt;
                &amp;lt;/configuration&amp;gt;
                &amp;lt;executions&amp;gt;
                    &amp;lt;execution&amp;gt;
                        &amp;lt;phase&amp;gt;package&amp;lt;/phase&amp;gt;
                        &amp;lt;goals&amp;gt;
                            &amp;lt;goal&amp;gt;assemble&amp;lt;/goal&amp;gt;
                        &amp;lt;/goals&amp;gt;
                    &amp;lt;/execution&amp;gt;
                &amp;lt;/executions&amp;gt;
            &amp;lt;/plugin&amp;gt;
        &amp;lt;/plugins&amp;gt;
        ...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this plugin, we can finally generate an executable file and then use it to start the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mvn clean package
./target/bin/taskmanagement_webapp

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full commit for this step can be found &lt;a href="https://github.com/nikist97/TaskManagementService/commit/179c1ea722089eb62046fe18202908c1f152abc1"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 8 - Adding Exception Mappers
&lt;/h3&gt;

&lt;p&gt;You might have noticed that so far we have defined two custom exceptions that are thrown when the service receives input data it cannot handle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;TaskNotFoundException&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;InvalidTaskDataException&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If these exceptions aren't handled properly, the embedded Tomcat server will wrap them inside an internal server error (status code 500), which is not very user-friendly. As per the API specification we defined in the beginning (see Appendix), we want clients to receive a 404 status code if, for example, they use a task ID that doesn't exist.&lt;/p&gt;

&lt;p&gt;To achieve this, we use exception mappers. When we register those mappers, Jersey will use them to transform instances of these exceptions to proper HTTP &lt;strong&gt;Response&lt;/strong&gt; objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class TaskNotFoundExceptionMapper implements ExceptionMapper&amp;lt;TaskNotFoundException&amp;gt; {

    @Override
    public Response toResponse(TaskNotFoundException exception) {
        return Response
                .status(Response.Status.NOT_FOUND)
                .entity(new ExceptionMessage(exception.getMessage()))
                .type(MediaType.APPLICATION_JSON)
                .build();
    }

}


public class InvalidTaskDataExceptionMapper implements ExceptionMapper&amp;lt;InvalidTaskDataException&amp;gt; {

    @Override
    public Response toResponse(InvalidTaskDataException exception) {
        return Response
                .status(Response.Status.BAD_REQUEST)
                .entity(new ExceptionMessage(exception.getMessage()))
                .type(MediaType.APPLICATION_JSON)
                .build();
    }

}


    @Inject
    public ApplicationConfig(ServiceLocator serviceLocator) {
        ...
        register(InvalidTaskDataExceptionMapper.class);
        register(TaskNotFoundExceptionMapper.class);
        ...
    }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the use of a new POJO - &lt;strong&gt;ExceptionMessage&lt;/strong&gt; - which is used to convey the exception message as a JSON response. Now, whenever the business logic throws any of these exceptions, we will get a proper JSON response with the appropriate status code.&lt;/p&gt;
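&lt;p&gt;A minimal sketch of &lt;strong&gt;ExceptionMessage&lt;/strong&gt; could look like this (assumed shape - a single field that Jackson serializes into the &lt;code&gt;{"message": "..."}&lt;/code&gt; responses shown in the curl examples below):&lt;/p&gt;

```java
// Sketch of the ExceptionMessage POJO (assumed shape). Jackson serializes
// the getter into a {"message": "..."} JSON body.
public class ExceptionMessage {

    private final String message;

    public ExceptionMessage(String message) { this.message = message; }

    public String getMessage() { return message; }
}
```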

&lt;p&gt;The full commit for this step can be found &lt;a href="https://github.com/nikist97/TaskManagementService/commit/498169e38589013c8a9c6f5651f57a5df14e4c62"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dockerizing the Application
&lt;/h2&gt;

&lt;p&gt;There are lots of benefits to using &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;, but given that this article is not about containers, I won't spend time on them here. I will only mention that I always prefer to run applications in a Docker container because it makes the build process much more predictable (think application portability, well-defined build behavior, an improved deployment process, etc.).&lt;/p&gt;

&lt;p&gt;The Dockerfile for our service is relatively simple and based on the Maven OpenJDK image. It automates what we did in step 7 - packaging the application and running the generated executable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM maven:3.8.5-openjdk-11-slim
WORKDIR /application

COPY . .

RUN mvn clean package

CMD ["./target/bin/taskmanagement_webapp"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, we can build the container image and start our service as a Docker container. The commands below assume you have the Docker daemon running on your local machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build --tag task-management-service .
docker run -d -p 127.0.0.1:8080:8080 --name test-task-management-service task-management-service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the service should be running in the background and accessible on port 8080 of your local machine. To stop it and start it again later, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker start/stop test-task-management-service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing the Service
&lt;/h2&gt;

&lt;p&gt;Now that we have the service running, we can use &lt;a href="https://curl.se/"&gt;curl&lt;/a&gt; to send some test requests.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;creating a few tasks
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -i -X POST -H "Content-Type:application/json" -d "{\"title\": \"test-title\", \"description\":\"description\"}" "http://localhost:8080/api/tasks" 

HTTP/1.1 201 
Location: http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f
Content-Length: 0
Date: Tue, 28 Jun 2022 07:52:46 GMT

curl -i -X POST -H "Content-Type:application/json" -d "{\"title\": \"test-title\", \"description\":\"description\"}" "http://localhost:8080/api/tasks"

HTTP/1.1 201 
Location: http://localhost:8080/api/tasks/64d85db4-905b-4c62-ba10-13fcb19a2546
Content-Length: 0
Date: Tue, 28 Jun 2022 07:52:47 GMT

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;retrieving a task
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -i -X GET "http://localhost:8080/api/tasks/64d85db4-905b-4c62-ba10-13fcb19a2546"

HTTP/1.1 200 
Content-Type: application/json
Content-Length: 162
Date: Tue, 28 Jun 2022 07:54:21 GMT

{"identifier":"64d85db4-905b-4c62-ba10-13fcb19a2546","title":"test-title","description":"description","createdAt":"2022-06-28T07:52:47.872859Z","completed":false}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;retrieving a non-existing task
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -i -X GET "http://localhost:8080/api/tasks/random-task-id-123"                                                       

HTTP/1.1 404 
Content-Type: application/json
Content-Length: 81
Date: Tue, 28 Jun 2022 09:44:53 GMT

{"message":"Task with the given identifier cannot be found - random-task-id-123"}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;retrieving all tasks
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -i -X GET "http://localhost:8080/api/tasks"     

HTTP/1.1 200 
Content-Type: application/json
Content-Length: 490
Date: Tue, 28 Jun 2022 07:55:08 GMT

[{"identifier":"64d85db4-905b-4c62-ba10-13fcb19a2546","title":"test-title","description":"description","createdAt":"2022-06-28T07:52:47.872859Z","completed":false},{"identifier":"d2c4ed20-2538-44e5-bf19-150db9f6d83f","title":"test-title","description":"description","createdAt":"2022-06-28T07:52:46.444179Z","completed":false}]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;deleting a task
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -i -X DELETE "http://localhost:8080/api/tasks/64d85db4-905b-4c62-ba10-13fcb19a2546"

HTTP/1.1 204 
Date: Tue, 28 Jun 2022 07:56:55 GMT

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;patching a task
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -i -X PATCH -H "Content-Type:application/json" -d "{\"completed\": true, \"title\": \"new-title\", \"description\":\"new-description\"}" "http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f"

HTTP/1.1 200 
Content-Length: 0
Date: Tue, 28 Jun 2022 08:00:37 GMT

curl -i -X GET "http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f"   
HTTP/1.1 200 
Content-Type: application/json
Content-Length: 164
Date: Tue, 28 Jun 2022 08:01:07 GMT

{"identifier":"d2c4ed20-2538-44e5-bf19-150db9f6d83f","title":"new-title","description":"new-description","createdAt":"2022-06-28T07:52:46.444179Z","completed":true}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;patching a task with empty title
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -i -X PATCH -H "Content-Type:application/json" -d "{\"title\": \"\"}" "http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f"

HTTP/1.1 400 
Content-Type: application/json
Content-Length: 43
Date: Tue, 28 Jun 2022 09:47:09 GMT
Connection: close

{"message":"title cannot be null or blank"}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Future Improvements
&lt;/h2&gt;

&lt;p&gt;What we have built so far is obviously not a production-ready API, but it demonstrates how to get started with the software suite I mentioned at the beginning of this article when building a REST API. Here are some future improvements that could be made:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;using a database for persistent storage&lt;/li&gt;
&lt;li&gt;adding user authentication and authorization - tasks should be scoped per user rather than being available globally&lt;/li&gt;
&lt;li&gt;adding logging&lt;/li&gt;
&lt;li&gt;adding KPI (Key Performance Indicator) metrics - things like total request count, latency, failure count, etc.&lt;/li&gt;
&lt;li&gt;adding a mapper for unexpected exceptions - we don't want to expose a stack trace if the service encounters an unexpected null pointer exception; instead, we want a JSON response with status code 500&lt;/li&gt;
&lt;li&gt;adding automated integration tests&lt;/li&gt;
&lt;li&gt;adding a more verbose response to the patch endpoint - e.g. indicating whether the request resulted in a change or not&lt;/li&gt;
&lt;li&gt;scanning packages and automatically registering provider and resource classes instead of manually registering them one-by-one&lt;/li&gt;
&lt;li&gt;adding CORS (Cross-Origin-Resource-Sharing) support if we intend to call the API from a browser application hosted under a different domain&lt;/li&gt;
&lt;li&gt;adding SSL support&lt;/li&gt;
&lt;li&gt;adding rate limiting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you found this article helpful and would like to see a follow-up on the topics above, please comment or message me with the topic you would most like to learn about.&lt;/p&gt;

&lt;h2&gt;
  
  
  Appendix
&lt;/h2&gt;

&lt;p&gt;The full API specification in the OpenAPI description format can be found below. You can use the &lt;a href="https://editor.swagger.io/"&gt;Swagger Editor&lt;/a&gt; to display the specification in a friendlier format.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;swagger: '2.0'

info:
  description: This is a RESTful task management API specification.
  version: 1.0.0
  title: Task Management API
  license:
    name: Apache 2.0
    url: 'http://www.apache.org/licenses/LICENSE-2.0.html'

host: 'localhost:8080'
basePath: /api

schemes:
  - http

paths:

  /tasks:
    post:
      summary: Create a new task
      operationId: createTask
      consumes:
        - application/json
      parameters:
        - in: body
          name: taskCreateRequest
          description: new task object that needs to be added to the list of tasks
          required: true
          schema:
            $ref: '#/definitions/TaskCreateRequest'
      responses:
        '201':
          description: successfully created new task
        '400':
          description: task create request failed validation
    get:
      summary: Retrieve all existing tasks
      operationId: retrieveTasks
      produces:
        - application/json
      responses:
        '200':
          description: successfully retrieved all tasks
          schema:
            type: array
            items:
              $ref: '#/definitions/TaskResponse'

  '/tasks/{taskID}':
    get:
      summary: Retrieve task
      operationId: retrieveTask
      produces:
        - application/json
      parameters:
        - name: taskID
          in: path
          description: task identifier
          required: true
          type: string
      responses:
        '200':
          description: successfully retrieved task
          schema:
            $ref: '#/definitions/TaskResponse'
        '404':
          description: task not found
    patch:
      summary: Update task
      operationId: updateTask
      consumes:
        - application/json
      parameters:
        - name: taskID
          in: path
          description: task identifier
          required: true
          type: string
        - name: taskUpdateRequest
          in: body
          description: task update request
          required: true
          schema:
            $ref: '#/definitions/TaskUpdateRequest'
      responses:
        '200':
          description: successfully updated task
        '400':
          description: task update request failed validation
        '404':
          description: task not found
    delete:
      summary: Delete task
      operationId: deleteTask
      parameters:
        - name: taskID
          in: path
          description: task identifier
          required: true
          type: string
      responses:
        '204':
          description: &amp;gt;-
            successfully deleted task or task with the given identifier did not
            exist

definitions:
  TaskCreateRequest:
    type: object
    required:
      - title
      - description
    properties:
      title:
        type: string
      description:
        type: string
  TaskUpdateRequest:
    type: object
    properties:
      title:
        type: string
      description:
        type: string
      completed:
        type: boolean
  TaskResponse:
    type: object
    required:
      - identifier
      - title
      - description
      - completed
      - createdAt
    properties:
      identifier:
        type: string
      title:
        type: string
      description:
        type: string
      createdAt:
        type: string
        format: date-time
      completed:
        type: boolean

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>A Guide for Common SSH Use Cases</title>
      <dc:creator>Nikolay Stanchev</dc:creator>
      <pubDate>Mon, 13 Jun 2022 10:30:33 +0000</pubDate>
      <link>https://dev.to/nikolay_stanchev/a-guide-for-common-ssh-use-cases-5k</link>
      <guid>https://dev.to/nikolay_stanchev/a-guide-for-common-ssh-use-cases-5k</guid>
      <description>&lt;h3&gt;
  
  
  What is SSH
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Secure_Shell"&gt;SSH&lt;/a&gt; is a network protocol allowing two machines (physical or virtual) to establish an encrypted connection and comminucate with each other over an unsecure network without compromising the confidentiality and integrity of the exchanged messages. The protocol uses public-key (a.k.a. asymmetric) cryptography to authenticate an SSH client that wants to connect to an SSH server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.openssh.com/"&gt;OpenSSH&lt;/a&gt; is an implementation of the SSH protocol. All the usage examples given in the next sections are based on it.&lt;/p&gt;

&lt;h3&gt;
  
  
  SSH Basic Usage
&lt;/h3&gt;

&lt;p&gt;A very common use of SSH is logging in to a remote server. Assuming you have a private/public key pair on your local machine and the public key is also distributed to the server you are trying to log in to, connecting is as simple as running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh &amp;lt;server domain name&amp;gt;
------------------------------
ssh myserver.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This assumes two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;you have an SSH agent running with the appropriate private key added or the path to the private key follows a standard naming convention (e.g. &lt;code&gt;~/.ssh/id_rsa&lt;/code&gt;) - you might need to run &lt;code&gt;ssh-add &amp;lt;path to private key&amp;gt;&lt;/code&gt; if this is not the case

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;~/.ssh/id_rsa&lt;/code&gt; is one of the standard naming conventions that the SSH client automatically attempts to use when authenticating with a server&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;you want to authenticate with the username on your local machine

&lt;ul&gt;
&lt;li&gt;this assumption will not hold if, for example, your laptop username is &lt;strong&gt;nikolay&lt;/strong&gt; but a server running Ubuntu only has the &lt;strong&gt;ubuntu&lt;/strong&gt; user&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
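
&lt;p&gt;If you don't have a key pair yet, one can be generated with &lt;code&gt;ssh-keygen&lt;/code&gt; - a minimal sketch (the file name and comment are examples; without &lt;code&gt;-f&lt;/code&gt;, the key would be written under &lt;code&gt;~/.ssh/&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

```shell
# Generate an ed25519 key pair with no passphrase (-N "") into ./demo_key;
# by default ssh-keygen writes to ~/.ssh/id_ed25519 instead
ssh-keygen -t ed25519 -f ./demo_key -N "" -C "demo key"

# The public half (demo_key.pub) is what gets distributed to the server
# (typically appended to ~/.ssh/authorized_keys); the private key stays local
cat ./demo_key.pub
```

&lt;p&gt;If the key is not in one of the standard locations, run &lt;code&gt;ssh-add ./demo_key&lt;/code&gt; (with an agent running) or pass it explicitly via &lt;code&gt;ssh -i ./demo_key&lt;/code&gt;.&lt;/p&gt;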

&lt;p&gt;To authenticate with a different username, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh &amp;lt;username&amp;gt;@&amp;lt;server domain name&amp;gt;
---------------------------------------------
ssh ubuntu@myserver.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
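
&lt;p&gt;For servers you connect to regularly, the username (and the key path) can also be stored in the OpenSSH client configuration file, &lt;code&gt;~/.ssh/config&lt;/code&gt; - a sketch, with the alias and paths chosen for illustration:&lt;br&gt;
&lt;/p&gt;

```
Host myserver
    HostName myserver.com
    User ubuntu
    IdentityFile ~/.ssh/id_rsa
```

&lt;p&gt;With this entry in place, &lt;code&gt;ssh myserver&lt;/code&gt; behaves like &lt;code&gt;ssh -i ~/.ssh/id_rsa ubuntu@myserver.com&lt;/code&gt;.&lt;/p&gt;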



&lt;h3&gt;
  
  
  Advanced SSH Usage - Tunneling
&lt;/h3&gt;

&lt;p&gt;A more advanced use of SSH is so-called SSH tunneling (a.k.a. SSH port forwarding). SSH tunneling establishes communication between a client (in this case, your local machine) and a server (a remote machine) through a jump server (the machine running an SSH server). This is achieved by mapping ports between the client and the SSH server (hence the name "port forwarding").&lt;/p&gt;

&lt;p&gt;This can be visualized with the following diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m7h2zxnT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1655104308513/mBLXSsNwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m7h2zxnT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1655104308513/mBLXSsNwb.png" alt="Screenshot 2022-06-13 at 10.10.58.png" width="880" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keep in mind that the remote machine might be the jump server itself.&lt;/p&gt;

&lt;p&gt;There are three types of port forwarding, whose behavior is illustrated in the following sections. All the example commands assume that the client's public key is already distributed to the SSH server and that the client can successfully log in to the SSH server using the command given in the previous section.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;N.B.&lt;/strong&gt; if you use different usernames on your local machine and the SSH server, you need to include &lt;code&gt;&amp;lt;SSH server username&amp;gt;@&lt;/code&gt; in front of all the "jump server" references for the examples below to work - for example, instead of &lt;code&gt;mysshserver.com&lt;/code&gt; you might have to use something like &lt;code&gt;user@mysshserver.com&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Local Port Forwarding
&lt;/h4&gt;

&lt;p&gt;Local port forwarding is the process of tunneling connections initiated by the client machine through a jump server and forwarding these connections to a remote server or the jump server itself.&lt;/p&gt;

&lt;p&gt;To initiate this, run the command below on your local machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -N -f -L &amp;lt;local port&amp;gt;:&amp;lt;target server domain name&amp;gt;:&amp;lt;remote port&amp;gt; &amp;lt;jump server domain name&amp;gt;
------------------------------------------------------------------------------------------------------
ssh -N -f -L 5001:mytargetapp.com:5002 mysshserver.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the example above, all connections to port 5001 on your local machine will be tunneled (in an encrypted manner) to the SSH jump server (&lt;code&gt;mysshserver.com&lt;/code&gt;) and then forwarded to &lt;code&gt;mytargetapp.com&lt;/code&gt; at port 5002.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J-k_37e---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1655110178669/5RlSrH_K4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J-k_37e---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1655110178669/5RlSrH_K4.png" alt="Screenshot 2022-06-13 at 11.35.09.png" width="880" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The CLI flags are explained below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;-L&lt;/strong&gt; - this is the key flag that instructs the SSH client to execute local port forwarding &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-N&lt;/strong&gt; - by default the SSH client will open a session even when executing local port forwarding; this flag instructs the client to not execute a remote command (i.e. only forward ports)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-f&lt;/strong&gt; - this instructs the SSH client to run in the background (so that you don't need to keep your terminal open while port forwarding is running)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another example where we are not forwarding to a remote machine but to the jump server itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -N -f -L 5001:localhost:5002 mysshserver.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, any connections to port 5001 on your local machine are tunneled to port 5002 on the jump server itself (&lt;code&gt;mysshserver.com&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3Od36__4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1655110212011/PBk-1AGkF.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3Od36__4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1655110212011/PBk-1AGkF.png" alt="Screenshot 2022-06-13 at 11.47.18.png" width="880" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Remote Port Forwarding
&lt;/h4&gt;

&lt;p&gt;Remote port forwarding is the reverse process of local port forwarding - forwarding connections initiated by a remote machine (or the jump server itself) through the jump server and tunneling these connections back to your local machine on the specified target port.&lt;/p&gt;

&lt;p&gt;To initiate this, run the command below on your local machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -N -f -R &amp;lt;remote port&amp;gt;:localhost:&amp;lt;local port&amp;gt; &amp;lt;jump server domain name&amp;gt;
------------------------------------------------------------------------------------------------------
ssh -N -f -R 5001:localhost:5002 mysshserver.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The example above shows a scenario where port 5001 will be open for connections on the jump server (&lt;code&gt;mysshserver.com&lt;/code&gt;), any connection to it will be tunneled back to your local machine (in an encrypted manner), and then forwarded to the application listening on the target port - 5002.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HF-9DFEC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1655110279483/NP2EnlWqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HF-9DFEC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1655110279483/NP2EnlWqb.png" alt="Screenshot 2022-06-13 at 11.04.30.png" width="880" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The only new CLI flag in this case is &lt;strong&gt;-R&lt;/strong&gt; which instructs the SSH client to execute remote port forwarding. Remote port forwarding is a very common solution if you want to temporarily expose an application running on your local machine to the public (assuming the SSH server is publicly available on the internet) or to someone that has network access to the SSH server in general (for example, through a VPN).&lt;/p&gt;

&lt;h4&gt;
  
  
  Dynamic Port Forwarding
&lt;/h4&gt;

&lt;p&gt;Dynamic port forwarding is similar to local port forwarding in the sense that local connections are tunneled (in an encrypted manner) to the SSH jump server. The difference is that with dynamic port forwarding you don't need to specify a target host or port - each connection is forwarded through the SSH server to whichever host and port the client application originally requested. In technical terms, dynamic port forwarding is the process of using the SSH server as a &lt;a href="https://en.wikipedia.org/wiki/SOCKS"&gt;SOCKS5&lt;/a&gt; proxy server.&lt;/p&gt;

&lt;p&gt;To initiate this run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -N -f -D &amp;lt;local proxy port&amp;gt; &amp;lt;jump server domain name&amp;gt;
------------------------------------------------------------------------------------------------------
ssh -N -f -D 1080 mysshserver.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The new CLI flag in this case is &lt;strong&gt;-D&lt;/strong&gt; which instructs the SSH client to execute dynamic port forwarding. The recommendation is to use port 1080 as it is the &lt;a href="https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xml"&gt;standard&lt;/a&gt; SOCKS protocol port number.&lt;/p&gt;

&lt;p&gt;Then you can configure your browser (say Firefox - see this &lt;a href="https://support.mozilla.org/en-US/kb/connection-settings-firefox"&gt;tutorial&lt;/a&gt;) or any other application that supports SOCKS/SOCKS5 proxies to use this tunnel and securely forward your network traffic through the SSH jump server (&lt;code&gt;mysshserver.com&lt;/code&gt;). In essence, this gives you very basic VPN functionality by hiding data about your local machine - websites you visit will see traffic that appears to come from the SSH server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tq09GRkn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1655112378928/gpbMum3q3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tq09GRkn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1655112378928/gpbMum3q3.png" alt="Screenshot 2022-06-13 at 12.21.48.png" width="880" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram above illustrates an example where the client is accessing &lt;code&gt;somewebsite.com&lt;/code&gt; through dynamic port forwarding. The browser uses the proxy to connect to the website. The communication between the local machine and the SSH server (&lt;code&gt;mysshserver.com&lt;/code&gt;) is encrypted. The SSH server then forwards the request to the actual website. From the perspective of &lt;code&gt;somewebsite.com&lt;/code&gt;, the connection was initiated from the SSH server rather than the local machine.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>6 Things You Can Easily Do With Nginx</title>
      <dc:creator>Nikolay Stanchev</dc:creator>
      <pubDate>Fri, 10 Jun 2022 15:22:19 +0000</pubDate>
      <link>https://dev.to/nikolay_stanchev/6-things-you-can-easily-do-with-nginx-5dc2</link>
      <guid>https://dev.to/nikolay_stanchev/6-things-you-can-easily-do-with-nginx-5dc2</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.nginx.com/"&gt;Nginx&lt;/a&gt; is by far one of the most famous and most commonly deployed web servers out in the world - check the &lt;a href="https://w3techs.com/technologies/overview/web_server"&gt;web servers' usage statistics&lt;/a&gt; provided by W3Techs. Nginx can also act as a proxy server, load balancer, caching server, etc. - feel free to read the &lt;a href="https://www.nginx.com/resources/glossary/nginx/"&gt;official docs&lt;/a&gt; for more information on what the product can offer you.&lt;/p&gt;

&lt;p&gt;My first experience with Nginx was using it as an API gateway (more precisely, a single entry point for a system) proxying requests to multiple back-end services. I fell in love with the simplicity of a product with so many capabilities. Since then, I've used it for multiple projects and still prefer it over alternative web servers.&lt;/p&gt;

&lt;p&gt;I decided to write this post to highlight some of the features I have found most useful while working with Nginx - features which, in my opinion, are incredibly simple to configure. In no particular order, the list is given in the next section.&lt;/p&gt;

&lt;p&gt;If you don't understand the http/server/location directives in the configuration examples below or you haven't seen any Nginx configuration before, please refer to this &lt;a href="https://docs.nginx.com/nginx/admin-guide/web-server/web-server/"&gt;guide&lt;/a&gt; for more details.&lt;/p&gt;

&lt;h3&gt;
  
  
  How-tos
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Limit request body size
&lt;/h4&gt;

&lt;p&gt;Very often when developing an API we let users upload arbitrary files - image files, video files, archives, etc. This commonly implies sending a large number of bytes in the HTTP request body. Left unvalidated, this functionality can lead to unexpected storage costs or even memory exhaustion if the backend server loads the whole file into memory. Nginx can easily protect against this by enforcing a limit on the size of the request body. Here is an example configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    ...
    client_max_body_size 500M;
    ... 
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yes, it is that simple. This line is enough to ensure the body of all HTTP requests sent to this host does not exceed 500 megabytes. Any requests that exceed the limit will receive a client error response with status code 413 (Request Entity Too Large).&lt;/p&gt;

&lt;p&gt;Another example is given below - enforcing the limit for a particular URI pattern rather than for every single request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    ...
    server {
        ...
        listen 80;

        location /api/pictures {
            ...
            client_max_body_size 500M;
            ...
        }
        ...
    }
    ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The config above defines an HTTP server listening on port 80 and enforces all requests sent to URIs starting with &lt;code&gt;/api/pictures&lt;/code&gt; to have a maximum body size of 500 megabytes. For more details, please see the &lt;a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size"&gt;official docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;N.B. the default threshold is 1 megabyte&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Add HTTP basic authentication
&lt;/h4&gt;

&lt;p&gt;A common use case website developers have is restricting access to certain sub-pages of their website. Personally, what I also commonly do during manual testing of a new website/API is restrict access to the entire application to a particular set of test users until the release is official. Nginx's &lt;a href="https://nginx.org/en/docs/http/ngx_http_auth_basic_module.html"&gt;auth basic module&lt;/a&gt; can easily allow us to do just that. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    ...
    auth_basic "Please authenticate";
    auth_basic_user_file /etc/nginx/.htpasswd;
    ... 
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This config restricts the entire Nginx server using &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication"&gt;HTTP Basic Authentication&lt;/a&gt;. Keep in mind the &lt;code&gt;/etc/nginx/.htpasswd&lt;/code&gt; file must contain the list of &lt;code&gt;&amp;lt;username&amp;gt;:&amp;lt;password&amp;gt;&lt;/code&gt; pairs and is expected to follow a certain format. Please see this &lt;a href="https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/"&gt;tutorial&lt;/a&gt; for how to generate the necessary password file.&lt;/p&gt;
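
&lt;p&gt;For reference, an entry in such a password file can also be generated without Apache's &lt;code&gt;htpasswd&lt;/code&gt; utility by using &lt;code&gt;openssl passwd&lt;/code&gt; - a sketch, with the username, password and output path chosen for illustration:&lt;br&gt;
&lt;/p&gt;

```shell
# Append a "username:hashed-password" pair using the apr1 (Apache MD5)
# hashing scheme, one of the formats accepted by auth_basic_user_file
printf 'testuser:%s\n' "$(openssl passwd -apr1 secret123)" >> ./htpasswd.demo

# Each line in the file holds one user entry
cat ./htpasswd.demo
```

&lt;p&gt;The resulting file would then be referenced by the &lt;code&gt;auth_basic_user_file&lt;/code&gt; directive, e.g. after copying it to &lt;code&gt;/etc/nginx/.htpasswd&lt;/code&gt;.&lt;/p&gt;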

&lt;p&gt;The same configuration can be used to only restrict a particular URI pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    ...
    server {
        ...
        listen 80;

        location /admin {
            ...
            auth_basic "Please authenticate";
            auth_basic_user_file /etc/nginx/.htpasswd;
            ...
        }
        ...
    }
    ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Limit API access to read-only requests
&lt;/h4&gt;

&lt;p&gt;Sometimes when exposing a local back-end service to the public through Nginx (acting as a proxy) I only want to allow read-only (e.g. GET and HEAD) requests that the front-end needs for retrieving content - for example, I might not want to expose publicly any ops-related DELETE API endpoints. Nginx supports this through the &lt;a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_except"&gt;limit_except&lt;/a&gt; directive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    ...
    server {
        ...
        listen 80;

        location /api {
            ...
            limit_except GET {
                deny all;
            }
            ...
        }
        ...
    }
    ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;N.B. the HEAD method is automatically included when allowing the GET method&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Add SSL support
&lt;/h4&gt;

&lt;p&gt;Who doesn't love the message from our browsers indicating that the connection with a particular website is secure? To enable this for an Nginx web server we can use the &lt;a href="https://nginx.org/en/docs/http/ngx_http_ssl_module.html"&gt;Nginx SSL module&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    ...
    server {
        ...
        listen 443 ssl;

        ssl_certificate &amp;lt;path to certificate file in PEM format&amp;gt;;
        ssl_certificate_key &amp;lt;path to private/secret key file in PEM format&amp;gt;;
        ...
    }
    ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This config assumes you know how to generate an SSL certificate. One of the simplest ways to do this is to use &lt;strong&gt;Let's Encrypt&lt;/strong&gt; - follow this &lt;a href="https://letsencrypt.org/getting-started/"&gt;tutorial&lt;/a&gt;. Ultimately, you should end up with &lt;strong&gt;fullchain.pem&lt;/strong&gt; and &lt;strong&gt;privkey.pem&lt;/strong&gt;; these are the files you need for the &lt;strong&gt;ssl_certificate&lt;/strong&gt; and &lt;strong&gt;ssl_certificate_key&lt;/strong&gt; directives respectively.&lt;/p&gt;
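
&lt;p&gt;For local testing only (browsers will still flag it as untrusted), a self-signed certificate matching the directive names above can be generated with &lt;code&gt;openssl&lt;/code&gt; - a sketch, with the subject and validity period chosen for illustration:&lt;br&gt;
&lt;/p&gt;

```shell
# Generate a self-signed certificate (fullchain.pem) and its private key
# (privkey.pem), valid for 30 days; CN=localhost suits local testing only
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
    -subj "/CN=localhost" \
    -keyout privkey.pem -out fullchain.pem

# Inspect the subject of the resulting certificate
openssl x509 -in fullchain.pem -noout -subject
```

&lt;p&gt;The two generated files can then be plugged into the &lt;code&gt;ssl_certificate&lt;/code&gt; and &lt;code&gt;ssl_certificate_key&lt;/code&gt; directives.&lt;/p&gt;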

&lt;h4&gt;
  
  
  Rate limiting clients
&lt;/h4&gt;

&lt;p&gt;Rate limiting is the process of ensuring that an individual client cannot exceed a given threshold of requests for a given unit of time - e.g. ensuring that each client can send a maximum of 5 requests per second. The module for configuring this is the &lt;a href="https://nginx.org/en/docs/http/ngx_http_limit_req_module.html"&gt;Nginx HTTP Limit Request Module&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    ...
    limit_req_zone $binary_remote_addr zone=books:10m rate=5r/s;
    ...
    server {
        ...
        listen 80;

        location /api/books {
            ...
            limit_req zone=books;
            ...
        }
        ...
    }
    ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we need one directive (&lt;strong&gt;limit_req_zone&lt;/strong&gt;) to define the rate limiting "zone" (a reusable rate limiting configuration) and another (&lt;strong&gt;limit_req&lt;/strong&gt;) to enable that zone for a particular context (e.g. a URI location pattern). The &lt;strong&gt;limit_req&lt;/strong&gt; directive can also be used inside a server or http context - in other words, rate limiting can be applied to all virtual servers, to an individual virtual server, or to a particular URI location pattern.&lt;/p&gt;
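
&lt;p&gt;For instance, moving &lt;strong&gt;limit_req&lt;/strong&gt; up into the server context applies the same zone to every location in that virtual server - a sketch reusing the zone defined above:&lt;br&gt;
&lt;/p&gt;

```
http {
    ...
    limit_req_zone $binary_remote_addr zone=books:10m rate=5r/s;
    ...
    server {
        ...
        listen 80;

        # applies to all locations in this virtual server
        limit_req zone=books;
        ...
    }
    ...
}
```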

&lt;p&gt;By default, clients that exceed the rate limiting threshold will receive a status code 503 Service Unavailable. This is also configurable with the &lt;strong&gt;limit_req_status&lt;/strong&gt; directive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    ...
    limit_req_zone $binary_remote_addr zone=books:10m rate=5r/s;
    ...
    server {
        ...
        listen 80;

        location /api/books {
            ...
            limit_req zone=books;
            limit_req_status 429;
            ...
        }
        ...
    }
    ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Redirect HTTP traffic to HTTPS
&lt;/h4&gt;

&lt;p&gt;A very common scenario for web servers is to automatically redirect clients who connect to the unencrypted version of a website (HTTP traffic - i.e. the server listening on port 80) to the encrypted version (HTTPS traffic - i.e. the server listening on port 443). Redirection is another easy thing to configure in Nginx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    ...
    server {
        listen 80;
        return 301 https://$host$request_uri;
    }
    server {
        ...
        listen 443 ssl;
        ...
    }
    ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What this config means is that Nginx will return status code 301 (Moved Permanently) to all clients that connect on port 80. The redirect target is a URL with the same host name and request URI but with &lt;strong&gt;https&lt;/strong&gt; rather than &lt;strong&gt;http&lt;/strong&gt; as the URL scheme. Browsers automatically follow the redirect, moving clients to the secure version of our website.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
