<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Long Ngo</title>
    <description>The latest articles on DEV Community by Long Ngo (@longngo0924).</description>
    <link>https://dev.to/longngo0924</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F984995%2F68e7849a-268f-4646-b753-19158d6647dc.jpg</url>
      <title>DEV Community: Long Ngo</title>
      <link>https://dev.to/longngo0924</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/longngo0924"/>
    <language>en</language>
    <item>
      <title>API Gateway integrate privately with ECS microservice</title>
      <dc:creator>Long Ngo</dc:creator>
      <pubDate>Sun, 05 Nov 2023 15:18:14 +0000</pubDate>
      <link>https://dev.to/longngo0924/api-gateway-integrate-privately-with-ecs-microservice-mnp</link>
      <guid>https://dev.to/longngo0924/api-gateway-integrate-privately-with-ecs-microservice-mnp</guid>
      <description>&lt;p&gt;This post noted some steps for configuring AWS API Gateway work with ECS to work with microservices. Some AWS services are used like API Gateway, ELB, ECS, VPC.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nsRTo2OM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hyk4ihaaynuut89bqtaw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nsRTo2OM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hyk4ihaaynuut89bqtaw.png" alt="Image description" width="800" height="276"&gt;&lt;/a&gt;&lt;br&gt;
In summary, the steps are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a VPC.&lt;/li&gt;
&lt;li&gt;Create a target group for the ECS service and attach it to an ALB.&lt;/li&gt;
&lt;li&gt;Create an ECS cluster and run the services.&lt;/li&gt;
&lt;li&gt;Create an ALB target group and attach it to an NLB.&lt;/li&gt;
&lt;li&gt;Create a REST API in API Gateway and point it to the NLB using a VPC Link.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;1. Create VPC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a VPC with two public subnets, two private subnets, one internet gateway, and one NAT gateway.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s2SkfEiF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jchir782giob409vr8s4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s2SkfEiF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jchir782giob409vr8s4.png" alt="Image description" width="800" height="308"&gt;&lt;/a&gt;&lt;br&gt;
Check the route tables of both private subnets to make sure that instances in the private subnets can reach the internet through the NAT gateway. If configured correctly, each route table will have a route whose destination is 0.0.0.0/0 and whose target is the NAT gateway.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--skH35KEC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gd4qtc05zu9ismj3f2em.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--skH35KEC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gd4qtc05zu9ismj3f2em.png" alt="Image description" width="800" height="308"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;2. Create ECS service target group and attach it with ALB&lt;/strong&gt;&lt;br&gt;
Create a target group with the IP addresses target type and place it in the created VPC. Leave the other options at their defaults. One target group corresponds to one microservice, so you will need to create one target group per microservice and configure each of them with a rule on the ALB listener.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--loTxgWFq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fhzdrbcz0b1gypwur2ow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--loTxgWFq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fhzdrbcz0b1gypwur2ow.png" alt="Image description" width="800" height="220"&gt;&lt;/a&gt;&lt;br&gt;
Next, create an ALB with the internal scheme and point its port 80 listener to the created target group. Make sure the ALB nodes are placed in the two private subnets and use the default security group.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---xZVSC1C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/15yqw3buduv5ccdkyz46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---xZVSC1C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/15yqw3buduv5ccdkyz46.png" alt="Image description" width="800" height="220"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;3. Create ECS cluster and run services&lt;/strong&gt;&lt;br&gt;
Create an ECS cluster with the Fargate launch type. Next, create a task definition with a single container for demo purposes, using the Nginx Docker image. In the real world, this image would be replaced by a microservice's Docker image.&lt;br&gt;
The task definition uses the Fargate launch type, the Linux/X86_64 OS, 0.5 vCPU, and 1 GB of RAM. The task also needs a few specific configurations, shown in the image below; leave the rest at their defaults.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vc9pesIm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnvmzfyk7ghnjitwqiby.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vc9pesIm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnvmzfyk7ghnjitwqiby.png" alt="Image description" width="800" height="410"&gt;&lt;/a&gt;&lt;br&gt;
After creating the task definition, we create a service in the cluster. Under Environment, choose Launch type for the compute option. Under Deployment configuration, choose Service as the application type. Then open the network configuration, choose the created VPC, and exclude the two public subnets so that only the two private subnets are used. Keep the default security group.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dBSvL46P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l5my3rb6m55stj038jwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dBSvL46P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l5my3rb6m55stj038jwq.png" alt="Image description" width="800" height="591"&gt;&lt;/a&gt;&lt;br&gt;
Next, open the Load balancing section and attach the service to the created target group and ALB.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lEQfeJvG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ajq3opj617nfs7zujdmi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lEQfeJvG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ajq3opj617nfs7zujdmi.png" alt="Image description" width="800" height="616"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NxdOv5nZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/amxwmgyirwiyh1xdoc8n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NxdOv5nZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/amxwmgyirwiyh1xdoc8n.png" alt="Image description" width="800" height="421"&gt;&lt;/a&gt;&lt;br&gt;
After creating the service, it takes some time for its status to change to Active. Then, checking the target group associated with the ALB, we should see one healthy target.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7OMfNyqk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ltygzvad4839q7p7vqr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7OMfNyqk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ltygzvad4839q7p7vqr4.png" alt="Image description" width="800" height="421"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;4. Create ALB target group and attach to NLB&lt;/strong&gt;&lt;br&gt;
We create another target group with the ALB target type and place it in the created VPC. Then we register the created ALB with this target group. Next, we create an NLB and associate it with the ALB target group.&lt;br&gt;
For the network configuration, we again use the created VPC and the two private subnets.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kgll6R2K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7m19jreiafjsbx5xjplp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kgll6R2K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7m19jreiafjsbx5xjplp.png" alt="Image description" width="800" height="310"&gt;&lt;/a&gt;&lt;br&gt;
Checking the ALB target group, we have one healthy target.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2ujHOwB---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ci4w6lpj7vo6a5lxn4ym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2ujHOwB---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ci4w6lpj7vo6a5lxn4ym.png" alt="Image description" width="800" height="310"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;5. Create API gateway for REST API, and point the gateway to NLB using VPC Link&lt;/strong&gt;&lt;br&gt;
Move to API Gateway and create a REST API (choose New API with the REST protocol). Then go to VPC Links, create a new one, and point it to the created NLB.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ucc188x9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/04uvxafx5g3k77a0eh6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ucc188x9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/04uvxafx5g3k77a0eh6z.png" alt="Image description" width="800" height="206"&gt;&lt;/a&gt;&lt;br&gt;
We need to wait for the status to change to Available. Then we go to the API, create a proxy resource with a GET method, and set up the integration request. In the real world, you need to add &lt;code&gt;{proxy+}&lt;/code&gt; to your endpoint URL so that it matches the context path of the API endpoint. The proxy resource helps the ALB route each request to the correct microservice host on the ECS cluster.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KGE336aK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ivzotpow9w7a7bpcmlcp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KGE336aK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ivzotpow9w7a7bpcmlcp.png" alt="Image description" width="800" height="339"&gt;&lt;/a&gt;&lt;br&gt;
Then we deploy the API to a stage, and on this stage we configure some stage variables.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TeTPkKBd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6hkzmma3ghw8j5ktswar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TeTPkKBd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6hkzmma3ghw8j5ktswar.png" alt="Image description" width="800" height="296"&gt;&lt;/a&gt;&lt;br&gt;
Once the VPC Link status has changed to Available, we go back to the NLB and update its security group to allow requests from API Gateway to the NLB.&lt;br&gt;
Create a new security group and add one inbound rule.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bL5P9Z7p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w452s7ulx9gkp21c3z1h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bL5P9Z7p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w452s7ulx9gkp21c3z1h.png" alt="Image description" width="800" height="150"&gt;&lt;/a&gt;&lt;br&gt;
In the NLB security configuration, attach the created security group and uncheck Enforce inbound rules on PrivateLink traffic.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6w1OjXsv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yffnakqo3r41tr76b8jh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6w1OjXsv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yffnakqo3r41tr76b8jh.png" alt="Image description" width="800" height="440"&gt;&lt;/a&gt;&lt;br&gt;
Finally, go to API Gateway and copy the invoke URL of the stage. Open it in a new browser tab and append the /nginx path; we should see the ECS service response.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gfmllAhW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/apji0bxzvaeyx4bb2131.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gfmllAhW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/apji0bxzvaeyx4bb2131.png" alt="Image description" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Lazy load caching strategy example using Redis</title>
      <dc:creator>Long Ngo</dc:creator>
      <pubDate>Sat, 01 Apr 2023 11:03:41 +0000</pubDate>
      <link>https://dev.to/longngo0924/aws-elasticache-lazy-load-caching-strategy-example-using-redis-4a8p</link>
      <guid>https://dev.to/longngo0924/aws-elasticache-lazy-load-caching-strategy-example-using-redis-4a8p</guid>
<description>&lt;h2&gt;1. Introduction&lt;/h2&gt;

&lt;p&gt;Caching is a very interesting topic when it comes to system performance. There are several strategies for implementing caching that you may have heard of: write-through, write-back, read-through, cache-aside, and so on. Today, we will build the simplest possible example of lazy loading, also known as the cache-aside strategy, using Spring Boot, MongoDB, and Redis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Redis&lt;/strong&gt; is an open-source, in-memory data structure store, often used as a database, cache, and message broker. Redis is regularly used in web applications to cache frequently accessed data and reduce the number of database queries, thereby improving performance.&lt;br&gt;
&lt;strong&gt;MongoDB&lt;/strong&gt; is a popular, open-source NoSQL document-oriented database that is designed to store and manage unstructured data. Instead of using tables and rows like traditional relational databases, MongoDB stores data as JSON-like documents with dynamic schemas, which means that each document can have its own unique structure and fields.&lt;/p&gt;

&lt;p&gt;In this example, we will not run a self-managed Redis. Instead, we will use &lt;strong&gt;AWS ElastiCache Redis&lt;/strong&gt;, a managed service from AWS that saves us much of the time needed to set up a Redis server ourselves.&lt;/p&gt;

&lt;h2&gt;2. Implementation&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;2.1. Create AWS resources&lt;/strong&gt;&lt;br&gt;
We need to create a Redis cluster. You can refer to the link below for a step-by-step guide: &lt;a href="https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/GettingStarted.CreateCluster.html#Clusters.Create.CON.Redis-gs" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/GettingStarted.CreateCluster.html#Clusters.Create.CON.Redis-gs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the resource, we have one Redis cluster. The resource details look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypbn47yxobjr3qqzkd8e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypbn47yxobjr3qqzkd8e.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;2.2. Create Spring Boot project&lt;/strong&gt;&lt;br&gt;
In this demo, we will create APIs that interact with a local MongoDB instance and a remote Redis server hosted on AWS. You can use the &lt;code&gt;Spring Starter Project&lt;/code&gt; wizard in Eclipse or &lt;code&gt;Spring Initializr&lt;/code&gt; to create a Spring Boot project. These are the required Maven dependencies that you must add to the POM file for our example.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-boot-starter-data-mongodb&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-boot-starter-data-redis&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;com.fasterxml.jackson.core&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;jackson-databind&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;2.3. Create API to interact with MongoDB &amp;amp; Redis&lt;/strong&gt;&lt;br&gt;
To work with the AWS ElastiCache Redis server, we need to add some configuration to the application properties file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

spring.data.redis.host=&amp;lt;redis-primary-endpoint&amp;gt;
spring.data.redis.port=&amp;lt;redis-port&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You must replace the placeholders with your Redis primary endpoint and port; the default port is normally 6379. After adding the configuration, we create beans that interact with the Redis server through a Redis template.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

@Bean
public LettuceConnectionFactory redisConnectionFactory() {
  // Note: the no-arg constructor points at localhost:6379. With the
  // spring.data.redis.* properties above set, you can omit this bean and let
  // Spring Boot auto-configure the connection factory from those properties.
  return new LettuceConnectionFactory();
}

@Bean
RedisTemplate&amp;lt;String, Object&amp;gt; redisTemplate(RedisConnectionFactory redisConnectionFactory) {
  RedisTemplate&amp;lt;String, Object&amp;gt; template = new RedisTemplate&amp;lt;&amp;gt;();
  template.setConnectionFactory(redisConnectionFactory);
  return template;
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, we create an entity named &lt;code&gt;Product&lt;/code&gt; with the following attributes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

@Document("product")
@Data
public class Product {

    private String id;
    private String name;
    private String category;
    private int quantity;
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we create a data access object (DAO) that works directly with the database and the cache.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

// functions that work with MongoDB

    public Product addProduct(Product p) {
        return mongoTemplate.save(p);
    }

    public Product getProductById(String id) {
        Product product = mongoTemplate.findById(id, Product.class);
        log.info("Got product from database {}", product);
        return product;
    }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

// functions that work with Redis

public ValueOperations&amp;lt;String, Object&amp;gt; addProductToRedis(Product p) {
  try {
    ValueOperations&amp;lt;String, Object&amp;gt; cachedProduct = redisTemplate.opsForValue();
    objectMapper = getObjectMapper();
    Map&amp;lt;?, ?&amp;gt; map = objectMapper.convertValue(p, Map.class);
    cachedProduct.set(p.getId(), map, 10, TimeUnit.SECONDS);
    log.info("Added product to cache {}", map);
    return cachedProduct;
  } catch (RedisConnectionFailureException e) {
    log.info("Cannot add product, the Redis server is down...");
  }

  return null;
}

public Product getProductFromRedis(String key) {

  ValueOperations&amp;lt;String, Object&amp;gt; cachedProduct = redisTemplate.opsForValue();
  try {
    // getAndExpire reads the value and resets its TTL to 10 seconds
    Object result = cachedProduct.getAndExpire(key, 10, TimeUnit.SECONDS);
    objectMapper = getObjectMapper();
    Product product = objectMapper.convertValue(result, Product.class);
    log.info("Got product from cache {}", product);
    return product;
  } catch (RedisConnectionFailureException e) {
    log.info("Cannot get product, the Redis server is down...");
  }

  return null;
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, we create a service that invokes the DAO-layer functions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

public Product addProduct(Product p) {
  return productDao.addProduct(p);
}

public Product getProductById(String id) {
  // try the cache first (lazy loading / cache-aside)
  Product product = productDao.getProductFromRedis(id);
  if (product == null) {
    product = productDao.getProductById(id);
    if (product != null) { // guard against caching a product that does not exist
      productDao.addProductToRedis(product);
    }
  }

  return product;
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;getProductById&lt;/code&gt; function follows the strategy illustrated below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscco9enn7vgh9fuxqt6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscco9enn7vgh9fuxqt6v.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When Spring App needs to read data from MongoDB, it checks the cache in Redis first to determine whether the data is available.&lt;/li&gt;
&lt;li&gt;If the data is available (a cache hit), the cached data is returned, and the response is issued to the caller. &lt;/li&gt;
&lt;li&gt;If the data isn’t available (a cache miss), the database is queried for the data. The cache is then populated with the data that is retrieved from the database, and the data is returned to the caller.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Moreover, we don't want to keep data that is not accessed frequently; storing infrequently accessed data is a waste of memory. So we add a &lt;code&gt;Time To Live (TTL)&lt;/code&gt; of 10 seconds to each entry when it is first cached, and every time the entry is accessed we refresh its TTL. In the code above, you can see we used the &lt;code&gt;getAndExpire&lt;/code&gt; method to accomplish this.&lt;/p&gt;
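&lt;p&gt;The cache-aside flow with TTL refresh described above can be sketched independent of Spring, Redis, and MongoDB. In this sketch, plain in-memory maps stand in for the cache and the database, and all names are illustrative rather than part of the article's code:&lt;/p&gt;

```java
import java.util.HashMap;
import java.util.Map;

// Minimal cache-aside sketch: a HashMap with per-key expiry stands in for
// Redis, and another map stands in for MongoDB. The "now" parameter replaces
// the wall clock so the behavior is easy to follow and test.
class CacheAsideSketch {
    static final long TTL_MILLIS = 10_000; // 10-second TTL, as in the article

    final Map<String, String> database = new HashMap<>(); // stand-in for MongoDB
    final Map<String, String> cache = new HashMap<>();    // stand-in for Redis values
    final Map<String, Long> expiry = new HashMap<>();     // stand-in for Redis TTLs

    String getProductById(String id, long now) {
        Long deadline = expiry.get(id);
        if (deadline != null && now < deadline) {
            expiry.put(id, now + TTL_MILLIS); // cache hit: refresh the TTL (getAndExpire)
            return cache.get(id);
        }
        String product = database.get(id);    // cache miss (or expired): query the database
        if (product != null) {
            cache.put(id, product);           // populate the cache for later reads
            expiry.put(id, now + TTL_MILLIS);
        }
        return product;
    }
}
```

&lt;p&gt;The real implementation delegates the TTL bookkeeping to Redis via &lt;code&gt;getAndExpire&lt;/code&gt;; the sketch only makes the hit, miss, and refresh branches explicit.&lt;/p&gt;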

&lt;p&gt;Finally, we create some endpoints to test our functions&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

@PostMapping("/products")
public Product addProduct(@RequestBody Product p) {
  return productService.addProduct(p);
}

@GetMapping("/products/{id}")
public Product getProduct(@PathVariable String id) {
  return productService.getProductById(id);
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;2.4. Testing API&lt;/strong&gt;&lt;br&gt;
For testing purposes, we use Postman to create a product.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5h4r0o0z7u82uisp6cv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5h4r0o0z7u82uisp6cv2.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Then we get the product by ID and view the log in the Eclipse console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs04p3c3p1o21tqkd5cqj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs04p3c3p1o21tqkd5cqj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyw64hixf8gnafj7004t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyw64hixf8gnafj7004t.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Because this is the first time the product is requested, you can see that Redis returns a &lt;code&gt;null&lt;/code&gt; value and the database is queried for the product. Afterward, the product is added to Redis. We can also see that the API response time is &lt;strong&gt;731ms&lt;/strong&gt;. Now, let's see how long the response takes with Redis.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujehmqs7vr5ia9zh6we3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujehmqs7vr5ia9zh6we3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hf2i8re6esrrme23e6h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hf2i8re6esrrme23e6h.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Only &lt;strong&gt;8ms&lt;/strong&gt;, and from the log we can confirm the product was fetched from the cache instead of the database.&lt;/p&gt;

&lt;h2&gt;3. Summary&lt;/h2&gt;

&lt;p&gt;We have implemented an example of the lazy-loading (also known as cache-aside) strategy using AWS ElastiCache Redis, and we can see that caching significantly improved performance. However, this is only the simplest example, meant to give a first view of the caching approach. To use it in the real world, in a large system, we must handle many situations that can occur: Redis going down, memory fragmentation issues, TTL management, and so on.&lt;/p&gt;

&lt;p&gt;The full implementation of these examples can be found in my &lt;a href="https://github.com/longngo0924/AWSLab/tree/master/Lab_Elasticache_Redis" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;References: &lt;a href="https://docs.aws.amazon.com/whitepapers/latest/database-caching-strategies-using-redis/caching-patterns.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/whitepapers/latest/database-caching-strategies-using-redis/caching-patterns.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy Coding :)&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Upload large file with Multipart Upload feature</title>
      <dc:creator>Long Ngo</dc:creator>
      <pubDate>Sat, 11 Mar 2023 07:40:57 +0000</pubDate>
      <link>https://dev.to/longngo0924/aws-s3-upload-file-with-multipart-upload-feature-b03</link>
      <guid>https://dev.to/longngo0924/aws-s3-upload-file-with-multipart-upload-feature-b03</guid>
<description>&lt;h2&gt;1. Introduction&lt;/h2&gt;

&lt;p&gt;AWS S3 (Simple Storage Service) is an object storage service. It allows users to store and retrieve data from anywhere on the web. S3 is designed for businesses of all sizes and can store a virtually unlimited number of objects, including photos, videos, log files, backups, and other types of data.&lt;/p&gt;

&lt;p&gt;S3 multipart upload is a feature that allows you to upload large objects in parts (i.e., chunks) instead of uploading the entire object in a single HTTP request. It is particularly useful when uploading very large files. With multipart upload, you can upload individual parts of the object in parallel, which can significantly speed up the overall upload process.&lt;/p&gt;
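&lt;p&gt;The part-splitting idea behind multipart upload can be sketched without the AWS SDK. Note that real S3 requires each part except the last to be at least 5 MB; the helper names and tiny sizes here are only for illustration:&lt;/p&gt;

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the idea behind multipart upload: split a large payload into
// fixed-size parts, upload the parts independently (possibly in parallel),
// and let S3 reassemble them in part-number order. Plain byte arrays stand
// in for the network calls.
class MultipartSketch {
    static List<byte[]> splitIntoParts(byte[] payload, int partSize) {
        List<byte[]> parts = new ArrayList<>();
        for (int offset = 0; offset < payload.length; offset += partSize) {
            int end = Math.min(offset + partSize, payload.length); // last part may be shorter
            parts.add(Arrays.copyOfRange(payload, offset, end));
        }
        return parts;
    }

    // What CompleteMultipartUpload conceptually does on the S3 side:
    // concatenate the parts back together in order.
    static byte[] reassemble(List<byte[]> parts) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] part : parts) out.write(part, 0, part.length);
        return out.toByteArray();
    }
}
```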

&lt;h2&gt;2. Implementation&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;2.1. Provide access and create S3 bucket&lt;/strong&gt;&lt;br&gt;
First, we need to give the IDE where we are implementing this example access to AWS. If you are not sure how, you can refer to section 2.1 of &lt;a href="https://dev.to/longngo0924/aws-system-manager-integrate-spring-boot-version-3-with-parameter-store-3me3"&gt;my previous post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next, we create an S3 bucket to store some files. I will name my bucket multipart-uploading0924; you will need to choose a different name for yours, because S3 bucket names are globally unique.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpsyxe7ryrkb97pboz0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpsyxe7ryrkb97pboz0s.png" alt="S3 bucket"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.2. Set up a Spring Boot project&lt;/strong&gt;&lt;br&gt;
In this demo, we will create an API that uploads files to an S3 bucket. You can use the Spring Starter Project in Eclipse or Spring Initializr to create a Spring Boot project. After that, we need to add this dependency to the POM file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;com.amazonaws&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;aws-java-sdk-s3&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;1.12.424&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;2.3. Create API endpoints to upload files&lt;/strong&gt;&lt;br&gt;
To demonstrate the performance benefit of the S3 multipart upload feature, we will create two functions: one for normal uploading and another for multipart uploading.&lt;/p&gt;

&lt;p&gt;First, we create an S3 client that will interact with the bucket we created. In the code below, change the profile name to one you have set up and the region to wherever your bucket lives. In this case, &lt;code&gt;longngo0924&lt;/code&gt; is the profile I use to create the S3 client, and my bucket is in the &lt;code&gt;ap-southeast-2&lt;/code&gt; region (Sydney).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

private AmazonS3 getS3ClientInstance() {
        // Build the client once and cache it for subsequent calls
        if (s3client == null) {
            s3client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider("longngo0924"))
                    .withRegion(Regions.AP_SOUTHEAST_2).build();
        }
        return s3client;
    }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We create a function in the service layer named &lt;code&gt;uploadFileV1&lt;/code&gt;. This function handles the normal (single-request) upload, and looks like this.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

public Map&amp;lt;String, String&amp;gt; uploadFileV1(MultipartFile multipartFile) throws IllegalStateException, IOException {

        Map&amp;lt;String, String&amp;gt; map = new HashMap&amp;lt;&amp;gt;();

        s3client = getS3ClientInstance();

        File file = convertToFile(multipartFile);

        PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, file.getName(), file);

        long start = System.currentTimeMillis();

        PutObjectResult result = s3client.putObject(putObjectRequest);

        long end = System.currentTimeMillis();
        log.info("Complete Normal Uploading {}s", (end - start) / 1000);

        if (result != null) {
            map.put("fileSize", String.valueOf(multipartFile.getSize() / 1000000) + "MB");
            map.put("time", String.valueOf((end - start) / 1000) + "s");
        } else {
            map.put("message", "Upload Failed");

        }
        return map;
    }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above function, we receive a multipart file from the controller layer, convert it to a file, and issue a put object request to S3.&lt;/p&gt;
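The post does not show the `convertToFile` helper. Here is a minimal framework-free sketch of what such a helper might do (my own assumption: it writes the uploaded bytes to a temporary file named after the original upload; the real helper would accept Spring's `MultipartFile` instead of raw bytes):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class FileConvert {
    // Hypothetical stand-in for the convertToFile helper used in the post:
    // write the uploaded bytes to a temp file whose name ends with the
    // original filename. Accepting raw bytes keeps the sketch free of the
    // Spring MultipartFile dependency.
    static File convertToFile(byte[] content, String originalName) throws IOException {
        File file = File.createTempFile("upload-", "-" + originalName);
        Files.write(file.toPath(), content);
        return file;
    }

    public static void main(String[] args) throws IOException {
        File f = convertToFile("hello".getBytes(), "demo.txt");
        System.out.println(f.length()); // 5 bytes written
        f.deleteOnExit();
    }
}
```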

&lt;p&gt;The second function is named &lt;code&gt;uploadFileV2&lt;/code&gt; and is similar to the first. The difference is that the file is uploaded in parallel by multiple threads, each thread uploading one part of the file. The implementation looks like this.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

public Map&amp;lt;String, String&amp;gt; uploadFileV2(MultipartFile multipartFile)
            throws IOException, AmazonServiceException, AmazonClientException, InterruptedException {
        Map&amp;lt;String, String&amp;gt; map = new HashMap&amp;lt;&amp;gt;();
        s3client = getS3ClientInstance();

        File file = convertToFile(multipartFile);

TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3client)
                .withMultipartUploadThreshold((long) (50 * 1024 * 1024)).build();

        long start = System.currentTimeMillis();
        Upload result = tm.upload(bucketName, file.getName(), file);
        result.waitForCompletion();
        long end = System.currentTimeMillis();
        log.info("Complete Multipart Uploading {}s", (end - start) / 1000);

        map.put("fileSize", String.valueOf(multipartFile.getSize() / 1000000) + "MB");
        map.put("time", String.valueOf((end - start) / 1000) + "s");

        return map;

    }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can see that our file is cut into 50 MB pieces. One point to note here: the &lt;code&gt;upload&lt;/code&gt; method of &lt;code&gt;TransferManager&lt;/code&gt; is non-blocking and returns immediately, so we call the &lt;code&gt;waitForCompletion&lt;/code&gt; method when we need to wait for the upload to finish.&lt;/p&gt;

&lt;p&gt;Finally, we add some endpoints in controller layer to receive uploading request.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

@PostMapping("/v1/uploading")
    public Map&amp;lt;String, String&amp;gt; uploadFileV1(@RequestParam MultipartFile file)
            throws IllegalStateException, IOException {

        return uploadFileService.uploadFileV1(file);
    }

    @PostMapping("/v2/uploading")
    public Map&amp;lt;String, String&amp;gt; uploadFileV2(@RequestParam MultipartFile file)
            throws IllegalStateException, IOException,
            AmazonServiceException, AmazonClientException, InterruptedException {

        return uploadFileService.uploadFileV2(file);
    }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;2.4. Upload a file to the S3 bucket&lt;/strong&gt;&lt;br&gt;
For testing purposes, we use Postman to exercise the implemented APIs. First, we hit the endpoint for the normal upload function and upload a 200 MB file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjaespwqvy4rme3z0j57o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjaespwqvy4rme3z0j57o.png" alt="Normal uploading"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we test the endpoint for multipart upload.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lxfulrbeevc2mfq1s3e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lxfulrbeevc2mfq1s3e.png" alt="Multipart uploading"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that with the same 200 MB file, the multipart upload function performs better than the normal upload function; in this example it is twice as fast. Below are the files we uploaded through the APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppzhqsl2kvpbmt0zbd9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppzhqsl2kvpbmt0zbd9v.png" alt="S3 file upload"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Summary
&lt;/h2&gt;

&lt;p&gt;Using the S3 multipart upload feature, the upload process performs better, and the benefit is significant for applications that regularly upload large files. Additionally, if any part of the upload fails, you only need to re-upload that specific part rather than the entire object. This can save time and bandwidth.&lt;/p&gt;

&lt;p&gt;The implementation of all these examples can be found in my &lt;a href="https://github.com/longngo0924/AWSLab/tree/master/Lab_Mutipart_Upload" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy Coding :)&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Integrate Spring boot Version 3 with Parameter Store</title>
      <dc:creator>Long Ngo</dc:creator>
      <pubDate>Sat, 04 Mar 2023 13:02:16 +0000</pubDate>
      <link>https://dev.to/longngo0924/aws-system-manager-integrate-spring-boot-version-3-with-parameter-store-3me3</link>
      <guid>https://dev.to/longngo0924/aws-system-manager-integrate-spring-boot-version-3-with-parameter-store-3me3</guid>
      <description>&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;AWS Parameter Store is a service provided by Amazon Web Services (AWS) that helps us store and manage parameters and secrets for our applications. It provides a secure, centralized location for storing and accessing sensitive data such as database credentials, API keys, and configuration data.&lt;br&gt;
So we can use this service to manage multiple deployment property files for a Spring Boot application.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Implementation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;2.1. Provide access to the development environment&lt;/strong&gt;&lt;br&gt;
We need to provide an access key and secret key so the IDE (I will use Eclipse) can access the Parameter Store feature. You can refer to &lt;a href="https://aws.amazon.com/eclipse/?nc1=h_ls" rel="noopener noreferrer"&gt;AWS Toolkit for Eclipse&lt;/a&gt; and follow the steps there to install the AWS Toolkit for the Eclipse IDE.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vcniuxylyvce2m6fuec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vcniuxylyvce2m6fuec.png" alt="AWS Toolkit for Eclipse"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After installing successfully, we add the key pair to the AWS Toolkit: click the AWS Toolkit icon in the header and choose Preferences. If you already have an access key on your computer, it will be displayed like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm92rab12rtczjd7lry9k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm92rab12rtczjd7lry9k.png" alt="AWS Credential"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you don't want to use this key, you can manually add another one by pressing the plus button next to Global Configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.2. Create a Spring boot project&lt;/strong&gt;&lt;br&gt;
We can use the Spring Starter Project from Spring Tool Suite (available in the Eclipse Marketplace) or &lt;a href="https://start.spring.io/" rel="noopener noreferrer"&gt;Spring Initializr&lt;/a&gt; to create a Spring Boot project with &lt;u&gt;Spring Boot version 3&lt;/u&gt;.&lt;br&gt;
Then, we add the dependencies needed to integrate with Parameter Store.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-cloud-starter-bootstrap&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-cloud-starter-aws-parameter-store-config&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;2.2.6.RELEASE&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Double-check that you add these dependencies exactly as shown. For Spring Boot version 3, we need &lt;strong&gt;spring-cloud-starter-bootstrap&lt;/strong&gt; to work with Parameter Store; omitting it can cause version-mismatch issues. Finally, we need to import &lt;strong&gt;spring-cloud-dependencies&lt;/strong&gt;, because the Parameter Store integration is built on Spring Cloud.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;dependencyManagement&amp;gt;
        &amp;lt;dependencies&amp;gt;
            &amp;lt;dependency&amp;gt;
                &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
                &amp;lt;artifactId&amp;gt;spring-cloud-dependencies&amp;lt;/artifactId&amp;gt;
                &amp;lt;version&amp;gt;2022.0.1&amp;lt;/version&amp;gt;
                &amp;lt;type&amp;gt;pom&amp;lt;/type&amp;gt;
                &amp;lt;scope&amp;gt;import&amp;lt;/scope&amp;gt;
            &amp;lt;/dependency&amp;gt;
        &amp;lt;/dependencies&amp;gt;
    &amp;lt;/dependencyManagement&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;2.3. Read parameters from Parameter Store&lt;/strong&gt;&lt;br&gt;
We need to define the application name in the application.properties file, for example &lt;code&gt;spring.application.name=my-app&lt;/code&gt;. By default, Spring looks up parameters in Parameter Store using the format:&lt;br&gt;
&lt;code&gt;/config/&amp;lt;name-of-the-spring-application&amp;gt;_&amp;lt;profile&amp;gt;/&amp;lt;parameter-name&amp;gt;&lt;/code&gt;&lt;br&gt;
If the default convention does not fit our needs, we can create a &lt;code&gt;bootstrap.properties&lt;/code&gt; file to override it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

aws.paramstore.enabled=true
aws.paramstore.prefix=/demo
aws.paramstore.name=my-app
aws.paramstore.profileSeparator=


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
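With these overrides, a property is looked up at the path prefix + "/" + name + "/" + property. A tiny sketch of that path construction (my own illustration of the convention, not Spring Cloud's actual resolver code):

```java
public class ParamPath {
    // Illustrates how the bootstrap.properties overrides combine into a
    // Parameter Store path; this mimics the naming convention only.
    static String paramPath(String prefix, String appName, String property) {
        return prefix + "/" + appName + "/" + property;
    }

    public static void main(String[] args) {
        // With aws.paramstore.prefix=/demo and aws.paramstore.name=my-app,
        // the property spring.port is resolved against:
        System.out.println(paramPath("/demo", "my-app", "spring.port")); // /demo/my-app/spring.port
    }
}
```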

&lt;p&gt;Now, we will create some parameters and add some code for testing.&lt;br&gt;
In the AWS Systems Manager console, choose Parameter Store and click the Create parameter button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6tbvrb9p44ipd5h7qai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6tbvrb9p44ipd5h7qai.png" alt="Parameter Store Console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I already have some parameters there, so I won't create more. You can create one by entering the parameter name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frv2bolvg85cfml813bzw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frv2bolvg85cfml813bzw.png" alt="Parameter name"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the type; SecureString is recommended for sensitive values.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwuuekexjwjvfivfgkdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwuuekexjwjvfivfgkdn.png" alt="Parameter type"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And finally, input the value for this parameter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferxvgryibktexw6imrs0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferxvgryibktexw6imrs0.png" alt="Parameter value"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For testing purposes, the &lt;code&gt;@Value&lt;/code&gt; annotation will be used to resolve the parameter values. We expect the &lt;code&gt;port&lt;/code&gt; variable to hold the value of &lt;code&gt;/demo/my-app/spring.port&lt;/code&gt; and the &lt;code&gt;message&lt;/code&gt; variable the value of &lt;code&gt;/demo/my-app/spring.message&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

@SpringBootApplication
@Slf4j
public class DemoApplication implements CommandLineRunner {

    @Value("${spring.port}")
    private String port;

    @Value("${spring.message}")
    private String message;


    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);

    }

    @Override
    public void run(String... args) throws Exception {

        log.info("Resolved port parameter: {}", port);
        log.info("Resolved message parameter: {}", message);


    }

}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Starting the application and viewing the console log, we can see the values we created in AWS Parameter Store.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yzy4ocrecewlziretb3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yzy4ocrecewlziretb3.png" alt="Log value of parameter"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Summary
&lt;/h2&gt;

&lt;p&gt;Using Parameter Store with a Spring Boot application, we can manage property values easily. It works just like reading values from the &lt;code&gt;application.properties&lt;/code&gt; file via the &lt;code&gt;@Value&lt;/code&gt; annotation. It also helps reduce configuration mismatches between local development and deployment on the AWS Cloud.&lt;/p&gt;

&lt;p&gt;The implementation of all these examples can be found in my &lt;a href="https://github.com/longngo0924/AWSLab/tree/master/Lab_Parameter_Store" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy Coding :)&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
