<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mostafa Dekmak</title>
    <description>The latest articles on DEV Community by Mostafa Dekmak (@dkmostafa).</description>
    <link>https://dev.to/dkmostafa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F738020%2Fd0194da3-6e05-41d1-b715-9e4ab737181e.jpeg</url>
      <title>DEV Community: Mostafa Dekmak</title>
      <link>https://dev.to/dkmostafa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dkmostafa"/>
    <language>en</language>
    <item>
      <title>Intro to LangGraph: learn simple graph building, state management, LLM integration, and LangSmith monitoring (Part 1)</title>
      <dc:creator>Mostafa Dekmak</dc:creator>
      <pubDate>Wed, 07 Jan 2026 15:36:05 +0000</pubDate>
      <link>https://dev.to/dkmostafa/intro-to-langgraph-learn-the-basics-of-building-simple-graphs-managing-state-integrating-llms-3bal</link>
      <guid>https://dev.to/dkmostafa/intro-to-langgraph-learn-the-basics-of-building-simple-graphs-managing-state-integrating-llms-3bal</guid>
      <description>&lt;h3&gt;
  
  
  Introduction:
&lt;/h3&gt;

&lt;p&gt;We all want to leverage the power of AI and Large Language Models (LLMs) to build intelligent applications. However, before we can do that effectively, we need to understand the fundamentals. That is exactly where this article comes in.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are We Going to Build?
&lt;/h2&gt;

&lt;p&gt;We will build a simple LangGraph agent that accepts a topic and performs structured research on it. The workflow consists of three main nodes:&lt;/p&gt;

&lt;p&gt;1️⃣ Planner Node&lt;br&gt;
Receives a topic and generates three research questions along with three research queries, each tackling a different meaningful angle of the topic.&lt;/p&gt;

&lt;p&gt;2️⃣ Research Node&lt;br&gt;
Takes each query individually, performs structured research, and retrieves relevant information.&lt;/p&gt;

&lt;p&gt;3️⃣ Answer Node&lt;br&gt;
Aggregates all collected research outputs and produces one final, well-structured response.&lt;/p&gt;

&lt;p&gt;This is our final goal—we will break it down in detail throughout this article.&lt;/p&gt;

&lt;p&gt;📂 GitHub Repository:&lt;br&gt;
&lt;a href="https://github.com/dkmostafa/langgraph-101" rel="noopener noreferrer"&gt;https://github.com/dkmostafa/langgraph-101&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fce3y1fqz8m4xbowtdvc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fce3y1fqz8m4xbowtdvc1.png" alt=" " width="800" height="822"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before getting started, make sure you have the following:&lt;/p&gt;

&lt;p&gt;Basic Python knowledge and environment setup&lt;/p&gt;

&lt;p&gt;Groq API Key (or any other LLM provider).&lt;br&gt;
Groq is recommended because it provides a generous free tier and supports multiple open-source models.&lt;br&gt;
Get your key here:&lt;br&gt;
&lt;a href="https://console.groq.com/home" rel="noopener noreferrer"&gt;https://console.groq.com/home&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LangSmith API Key (optional)&lt;br&gt;
Useful for monitoring, tracing, and observability.&lt;br&gt;
Register here:&lt;br&gt;
&lt;a href="https://smith.langchain.com/" rel="noopener noreferrer"&gt;https://smith.langchain.com/&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Now Let’s Dive In
&lt;/h2&gt;

&lt;p&gt;Building an agent can initially feel intimidating. There are many new concepts, and it is not always obvious where to start or what tools to use. This article will simplify that journey.&lt;/p&gt;

&lt;p&gt;We will walk through the essential building blocks step-by-step. Each topic will be explained with a working example from the repository.&lt;/p&gt;

&lt;p&gt;You will learn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a simple graph node&lt;/li&gt;
&lt;li&gt;Building an agent with edges and conditional edges&lt;/li&gt;
&lt;li&gt;Writing simple prompts for the LLM&lt;/li&gt;
&lt;li&gt;Managing graph state to let data flow between nodes&lt;/li&gt;
&lt;li&gt;Integrating LLMs using Groq&lt;/li&gt;
&lt;li&gt;Using structured output from LLM responses&lt;/li&gt;
&lt;li&gt;Monitoring and tracing with LangSmith&lt;/li&gt;
&lt;li&gt;Adding unit tests for individual nodes&lt;/li&gt;
&lt;li&gt;Adding integration tests for the graph&lt;/li&gt;
&lt;li&gt;Adding end-to-end tests for the entire application&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Before building the full agent, let’s start with the most fundamental unit in LangGraph:
&lt;/h4&gt;
&lt;h4&gt;
  
  
  1. The Node
&lt;/h4&gt;

&lt;p&gt;A node is simply a function that contains a piece of logic. In many cases, that logic involves running an LLM chain. For example, here is our Planner Node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def planner_node(state: AgentState, llm) -&amp;gt; AgentState:
    chain = planner_prompt_template | llm | planner_parser
    response: PlannerResultOutput = chain.invoke(
        {"user_input": state.user_input}
    )

    state.planned_items = response
    return state
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break this down:&lt;/p&gt;

&lt;p&gt;planner_prompt_template – This is the prompt that defines the instructions we send to the LLM.&lt;/p&gt;

&lt;p&gt;llm – This is the language model we are using (Groq in our case, but it can be any supported provider).&lt;/p&gt;

&lt;p&gt;planner_parser – This defines the structured output format we expect from the LLM so that we can safely use the response later.&lt;/p&gt;

&lt;p&gt;The chain combines all three components, and when we call chain.invoke(), we pass:&lt;/p&gt;

&lt;p&gt;user_input – The topic provided by the user. We will see how this gets injected into the prompt shortly.&lt;/p&gt;

&lt;p&gt;Finally:&lt;/p&gt;

&lt;p&gt;state – This is the shared context of our agent. All data flows through state between different nodes. It is how LangGraph manages memory and continuity across the workflow. We will explore it in more detail soon.&lt;/p&gt;
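
&lt;p&gt;As a dependency-free sketch of what such a shared state can look like (the field names here are hypothetical; the repository's actual AgentState is authoritative):&lt;/p&gt;

```python
from dataclasses import dataclass, field

# Hypothetical shape of the shared agent state; the repository's
# real AgentState (and PlannerResultOutput) may differ.
@dataclass
class AgentState:
    user_input: str
    planned_items: object = None          # filled by the planner node
    research_results: list = field(default_factory=list)  # filled by research
    final_answer: str = ""                # filled by the answer node

state = AgentState(user_input="quantum computing")
state.planned_items = {"questions": [], "queries": []}
```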

&lt;p&gt;This is our first step toward building an intelligent research agent: defining clear, isolated logic inside a node.&lt;br&gt;
Note: unit tests are recommended for nodes; unit tests are available in the GitHub repo.&lt;/p&gt;
&lt;h4&gt;
  
  
  2. The Prompt
&lt;/h4&gt;

&lt;p&gt;Now let’s look at how we build the prompt for our Planner Node. This prompt includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic variables (like user_input)&lt;/li&gt;
&lt;li&gt;Structured output instructions (format_instructions)&lt;/li&gt;
&lt;li&gt;A system message with rules for the LLM&lt;/li&gt;
&lt;li&gt;A human message that injects the actual topic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is the setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;planner_parser = PydanticOutputParser(pydantic_object=PlannerResultOutput)

system_message = SystemMessagePromptTemplate.from_template(
    """LONG PROMPT
{format_instructions}
"""
)

human_message = HumanMessagePromptTemplate.from_template(
    "{user_input}"
)

planner_prompt_template = ChatPromptTemplate.from_messages(
    [
        system_message,
        human_message
    ]
).partial(
    format_instructions=planner_parser.get_format_instructions()
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break it down:&lt;/p&gt;

&lt;p&gt;planner_parser = PydanticOutputParser(...)&lt;br&gt;
This defines the structured format we expect back from the LLM.&lt;br&gt;
PlannerResultOutput is a simple Pydantic model that specifies the schema. Check the repository for full details.&lt;/p&gt;
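
&lt;p&gt;The repository holds the real Pydantic definition; structurally, PlannerResultOutput is just a schema carrying the planner's questions and queries. A dependency-free sketch (the field names here are hypothetical):&lt;/p&gt;

```python
from dataclasses import dataclass

# Hypothetical shape only; the repository's Pydantic model
# PlannerResultOutput is authoritative.
@dataclass
class PlannerResultOutput:
    questions: list   # three research questions
    queries: list     # three research queries, one per angle

plan = PlannerResultOutput(
    questions=["What is X?", "Why does X matter?", "Where is X used?"],
    queries=["x overview", "x applications", "x limitations"],
)
```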

&lt;p&gt;{user_input} and {format_instructions}&lt;br&gt;
Any dynamic values inside the prompt are wrapped in {}.&lt;br&gt;
In this case:&lt;/p&gt;

&lt;p&gt;user_input → the topic provided by the user&lt;/p&gt;

&lt;p&gt;format_instructions → automatically generated by the Pydantic parser to enforce structured output&lt;/p&gt;

&lt;p&gt;By combining both, we ensure the LLM not only understands the task but also returns a response in a predictable, machine-friendly structure.&lt;/p&gt;
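
&lt;p&gt;Conceptually, .partial() pre-fills one template variable now and leaves the rest to be supplied at invoke time. A plain-Python sketch of that idea (the instruction text below is illustrative, not the real parser output):&lt;/p&gt;

```python
# Conceptual sketch of .partial(): fill {format_instructions} now,
# leave {user_input} for invoke time.
template = (
    "Plan research for the topic.\n{format_instructions}\nTopic: {user_input}"
)

# Illustrative stand-in for planner_parser.get_format_instructions():
format_instructions = "Return JSON with keys 'questions' and 'queries'."
partially_filled = template.replace("{format_instructions}", format_instructions)

# Later, at invoke time, the user's topic is injected:
prompt = partially_filled.format(user_input="quantum computing")
```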

&lt;p&gt;For now, this is enough understanding about prompts. We will explore prompt design and prompt engineering in much deeper detail in future articles.&lt;/p&gt;

&lt;p&gt;Note:&lt;br&gt;
After writing the first two parts, I noticed how long this article had become, and I don't want to make it any longer. I will publish these two main points now and the rest soon; either way, the code is available for reference.&lt;/p&gt;

</description>
      <category>langgraph</category>
      <category>agents</category>
      <category>python</category>
      <category>llm</category>
    </item>
    <item>
      <title>Building an Image Classifier API with FastAPI, TensorFlow, and MobileNetV2 Using Clean Architecture Principles</title>
      <dc:creator>Mostafa Dekmak</dc:creator>
      <pubDate>Sat, 12 Jul 2025 15:28:10 +0000</pubDate>
      <link>https://dev.to/dkmostafa/building-an-image-classifier-api-with-fastapi-tensorflow-and-mobilenetv2-using-clean-architecture-1994</link>
      <guid>https://dev.to/dkmostafa/building-an-image-classifier-api-with-fastapi-tensorflow-and-mobilenetv2-using-clean-architecture-1994</guid>
      <description>&lt;h2&gt;
  
  
  GitHub link: &lt;a href="https://github.com/dkmostafa/fast-api-image-classification-sample" rel="noopener noreferrer"&gt;https://github.com/dkmostafa/fast-api-image-classification-sample&lt;/a&gt;
&lt;/h2&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;This article will show how to build a simple image classifier using a pre-trained MobileNetV2 model, following Clean Architecture principles.&lt;br&gt;
Why do this?&lt;br&gt;
It allows us to explore the power of AI and apply it using FastAPI in a way that is easy to scale, maintain, and extend.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Article Will Cover:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Setting up a FastAPI project for serving an AI model&lt;/li&gt;
&lt;li&gt;Using a pre-trained Image Classifier MobileNetV2 model from TensorFlow for image classification&lt;/li&gt;
&lt;li&gt; Applying Clean Architecture principles to structure the application&lt;/li&gt;
&lt;/ul&gt;




&lt;ul&gt;
&lt;li&gt;Writing testable code using:
&lt;ul&gt;
&lt;li&gt;Domain entities and use cases&lt;/li&gt;
&lt;li&gt;Interface adapters (e.g., FastAPI routes)&lt;/li&gt;
&lt;li&gt;An infrastructure layer (e.g., TensorFlow model loading and prediction logic)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Handling image uploads via FastAPI endpoints&lt;/li&gt;
&lt;li&gt;Converting uploaded images into a model-compatible format using PIL and NumPy&lt;/li&gt;
&lt;li&gt;Making predictions with the MobileNetV2 model and returning results as JSON&lt;/li&gt;
&lt;li&gt;Writing unit and integration tests for the application&lt;/li&gt;
&lt;li&gt;(Optional) Dependency injection for clean separation of concerns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The whole project's code is in the GitHub link provided above.&lt;/p&gt;

&lt;h3&gt;
  
  
  How is our application architected?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Infrastructure layer:
This layer contains all our concrete model code; it is where our model (or models) will reside.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Domain layer:&lt;br&gt;
This layer is responsible for our business logic, such as the data entities and interfaces (ImageClassificationModelInterface).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use cases layer:&lt;br&gt;
This layer wires our logic and the different pieces of infrastructure together to carry out a specific use case, for example identifying the image.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Presentation layer (main.py):&lt;br&gt;
Of course, this shouldn't live in the main file; the main file is only what fires everything up. But since this is a simple project, it is fine to keep it there.&lt;br&gt;
The presentation layer is how we present our application; in our case, it's FastAPI. Our FastAPI application imports our use cases and translates them into JSON-compatible REST APIs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
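
&lt;p&gt;The layering above can be sketched in a few lines of plain Python. Everything here is a hypothetical stand-in except the ImageClassificationModelInterface name from the article; a real infrastructure class would wrap MobileNetV2:&lt;/p&gt;

```python
from abc import ABC, abstractmethod

# Domain layer: the contract the use case depends on.
class ImageClassificationModelInterface(ABC):
    @abstractmethod
    def predict(self, image_bytes: bytes) -> str: ...

# Infrastructure layer: a concrete model behind the interface.
# (A real implementation would load MobileNetV2 via TensorFlow.)
class FakeModel(ImageClassificationModelInterface):
    def predict(self, image_bytes: bytes) -> str:
        return "tabby_cat"

# Use cases layer: wires domain and infrastructure together.
class ClassifyImageUseCase:
    def __init__(self, model: ImageClassificationModelInterface):
        self.model = model

    def execute(self, image_bytes: bytes) -> dict:
        return {"label": self.model.predict(image_bytes)}

result = ClassifyImageUseCase(FakeModel()).execute(b"\x89PNG...")
```

&lt;p&gt;Because the use case only knows the interface, swapping the fake model for the TensorFlow one requires no change to the use case or the presentation layer.&lt;/p&gt;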

&lt;p&gt;This blog is just a quick overview of how to write our simple AI application; there is more inside the GitHub repository, such as our unit tests and integration tests.&lt;/p&gt;

</description>
      <category>fastapi</category>
      <category>ai</category>
      <category>tensorflow</category>
      <category>cleanarchitecture</category>
    </item>
    <item>
      <title>Writing Integration And Unit Tests for a Simple Fast API application using Pytest</title>
      <dc:creator>Mostafa Dekmak</dc:creator>
      <pubDate>Tue, 19 Nov 2024 12:58:04 +0000</pubDate>
      <link>https://dev.to/dkmostafa/writing-integration-and-unit-tests-for-a-simple-fast-api-application-using-pytest-2e8i</link>
      <guid>https://dev.to/dkmostafa/writing-integration-and-unit-tests-for-a-simple-fast-api-application-using-pytest-2e8i</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Python is a great language for building various types of applications, especially in today's landscape where machine learning and AI are rapidly advancing. With this growth in services, there’s a strong need for well-designed, maintainable, and scalable APIs. That’s where FastAPI comes in, a powerful async web API framework for Python that's both simple and robust (&lt;a href="https://fastapi.tiangolo.com/" rel="noopener noreferrer"&gt;https://fastapi.tiangolo.com/&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;In this article, I won’t be covering the details of how FastAPI works or how to build APIs with it. Instead, the focus will be on writing integration and unit tests for FastAPI applications using Pytest. This guide is ideal for those familiar with Python and frameworks like Flask, Django, or other web frameworks (e.g., NestJS, Express, Spring Boot) who want to dive into building tests with Python and Pytest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Github repo &lt;a href="https://github.com/dkmostafa/fast-api-sample/tree/unit-testing-sample" rel="noopener noreferrer"&gt;https://github.com/dkmostafa/fast-api-sample/tree/unit-testing-sample&lt;/a&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  First things first, let's install the requirements that will be used for testing:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;pytest: the main testing framework for creating integration and unit tests in Python&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;pytest-asyncio: pytest support for testing asynchronous calls (e.g., API calls)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;pytest-env: imitates environment variables for testing purposes (optional)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;pytest-cov: produces a coverage report for our app (optional, if interested in coverage)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Faker: a powerful and easy library for seeding our database with fake data&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Let's start by writing our first integration test in Part 1; in Part 2 we will write a unit test:
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Part 1
&lt;/h2&gt;

&lt;p&gt;I will create and test a simple API endpoint : &lt;code&gt;/user&lt;/code&gt; that fetches me all the users in my database.&lt;/p&gt;

&lt;p&gt;Before we start writing the test for that endpoint, we have to make sure we isolate our testing database from the production one. &lt;strong&gt;WE DON'T WANT TO PERFORM ANY OPERATIONS ON OUR MAIN DATABASE!&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Database Preparation :
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Creating the database and models :
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@pytest.fixture()
def session():

    Base.metadata.create_all(engine)
    try:
        with get_session() as session:
            yield session
    finally:
        Base.metadata.drop_all(engine)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this fixture we create a separate database session for our tests; it is created before any test that receives the session as an argument runs, and everything is dropped after the designated tests are done.&lt;/p&gt;

&lt;h4&gt;
  
  
  Seeding our database :
&lt;/h4&gt;

&lt;p&gt;Since isolation is an important part of our testing (again, &lt;strong&gt;WE DON'T WANT TO USE ANY PRODUCTION DATABASE&lt;/strong&gt;), we are going to seed our database with fake data, hence the Faker library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@pytest.fixture()
def seeded_users_session(session):
    faker = Faker()

    user_repo = UserRepository()
    for i in range(1, 6):
        seeded_user = {
            "username": faker.user_name(),
            "name": faker.name()
        }
        user_repo.save(seeded_user)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note how I passed the session fixture to the &lt;code&gt;seeded_users_session&lt;/code&gt; fixture function.&lt;/p&gt;

&lt;p&gt;Now that our database preparation is done, let's finally dive into the test.&lt;/p&gt;

&lt;h3&gt;
  
  
  First Step :
&lt;/h3&gt;

&lt;p&gt;First, we define the test and what we expect from it. A very popular approach when writing tests is TDD, where we write our tests before writing the implementation. From here, I want to create an API that does the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fetches the list of users; the API endpoint is &lt;code&gt;/user&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Makes sure the response status code is 200&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Our test is as simple as translating the above text into test code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def test_get_users(seeded_users_session):
    response = user_client.get("/user") 
    json_response = response.json()
    assert response.status_code == 200
    assert len(json_response) &amp;gt; 1
    assert isinstance(json_response, list)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In short: we pass the &lt;code&gt;seeded_users_session&lt;/code&gt; fixture to this function so it uses the seeded database above. The test then consists of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Calling the API&lt;/li&gt;
&lt;li&gt;Parsing the JSON response&lt;/li&gt;
&lt;li&gt;Asserting the status code&lt;/li&gt;
&lt;li&gt;Asserting that the response is a list containing more than one item&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Full Integration Test File :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pytest
from faker import Faker
from fastapi.testclient import TestClient
from database.db_engine import Base, get_session, engine
from database.models.user_model import UserModel
from database.repositories.user_repository import UserRepository
from ...models.user_response_models import CreateUserResponse

from ...user_routes import user_router


@pytest.fixture()
def session():

    Base.metadata.create_all(engine)
    try:
        with get_session() as session:
            yield session
    finally:
        Base.metadata.drop_all(engine)


@pytest.fixture()
def seeded_users_session(session):
    faker = Faker()

    user_repo = UserRepository()
    for i in range(1, 6):
        seeded_user = {
            "username": faker.user_name(),
            "name": faker.name()
        }
        user_repo.save(seeded_user)


user_client = TestClient(user_router)


def test_create_user_success(session):
    response = user_client.post("/user", json={
        "name": "test_user_creation",
        "username": "test_user_creation@gmail.com",
    })

    json_response = response.json()
    CreateUserResponse(**json_response)
    assert response.status_code == 200


def test_get_users(seeded_users_session):
    response = user_client.get("/user")
    json_response = response.json()
    assert response.status_code == 200
    assert len(json_response) &amp;gt; 1
    assert isinstance(json_response, list)


def test_get_user_by_id(seeded_users_session):
    response = user_client.get("/user/1")
    json_response = response.json()
    UserModel(**json_response)
    assert response.status_code == 200

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Second Step :
&lt;/h3&gt;

&lt;p&gt;Create the API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@user_router.get("/")
def get_users():
    res = user_service.get_users()
    return res
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;user_router is an instance of APIRouter, used to organise my FastAPI routes.&lt;/li&gt;
&lt;li&gt;user_service.get_users() is a simple user-fetch query written in SQLAlchemy
(check the repo for more details).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Add your logic inside get_users iteratively until your test passes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The next step is writing a simple unit test:
&lt;/h2&gt;

&lt;h1&gt;
  
  
  Part 2 :
&lt;/h1&gt;

&lt;p&gt;Unit tests differ from integration tests: in an integration test we test the full API journey, while in a unit test we test a single, tightly scoped function, isolating it from external influences (the database or otherwise) and focusing on the business logic inside our services. For this purpose we will use mocks.&lt;/p&gt;
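
&lt;p&gt;As a minimal, standalone illustration of mocking (separate from the article's service code):&lt;/p&gt;

```python
from unittest.mock import MagicMock

# A mock stands in for the repository: it records calls and returns
# a canned value, with no database involved.
repo = MagicMock()
repo.get_by_id.return_value = {"id": 1, "name": "Ada"}

user = repo.get_by_id(1)
```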

&lt;p&gt;So let's start by writing our test:&lt;br&gt;
I will test get_user(self, user_id: int) in two scenarios. In the first, it returns an existing user, and I assert that this user is an instance of UserModel; in the second, the user is not found and an error is raised.&lt;/p&gt;

&lt;p&gt;First things first: preparing my service for the test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@pytest.fixture
def user_service():
    mock_user_repository = MagicMock()
    mock_user_repository.get_by_id.return_value = UserModel(
        name=faker.name(),
        username=faker.user_name()
    )

    user_service = UserService()
    user_service.user_repository = mock_user_repository

    return user_service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we did in the integration test, I created a fixture where I mocked the user repository so it won't make a database call; instead, it returns the data I assume the repository would return.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def test_get_user_success(user_service):
    response = user_service.get_user(1)
    assert isinstance(response, UserModel)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here I call my function and assert its output.&lt;/p&gt;

&lt;p&gt;The second scenario is when the user is not found. A different mock is given to the user_service, where I assume no user will be found, and I assert that my error is raised.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def test_get_user_not_found():
    mock_user_repository = MagicMock()
    mock_user_repository.get_by_id.return_value = None
    user_service = UserService()
    user_service.user_repository = mock_user_repository

    with pytest.raises(NotFoundException) as excinfo:
        user_service.get_user(1)
    assert str(excinfo.value) == 'User does not exist'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Full unit test file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pytest
from faker import Faker
from unittest.mock import MagicMock

from Exceptions.not_found_exception import NotFoundException
from database.models.user_model import UserModel
from modules.User.services.user_service import UserService

faker = Faker()


@pytest.fixture
def user_service():
    mock_user_repository = MagicMock()
    mock_user_repository.get_by_id.return_value = UserModel(
        name=faker.name(),
        username=faker.user_name()
    )

    user_service = UserService()
    user_service.user_repository = mock_user_repository

    return user_service


def test_get_user_success(user_service):
    response = user_service.get_user(1)
    assert isinstance(response, UserModel)


def test_get_user_not_found():
    mock_user_repository = MagicMock()
    mock_user_repository.get_by_id.return_value = None
    user_service = UserService()
    user_service.user_repository = mock_user_repository

    with pytest.raises(NotFoundException) as excinfo:
        user_service.get_user(1)
    assert str(excinfo.value) == 'User does not exist'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Additional Part :
&lt;/h2&gt;

&lt;p&gt;How do we make sure we have good coverage of our code? For this I run:&lt;br&gt;
&lt;code&gt;pytest --cov&lt;/code&gt;, which gives me a coverage report:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5djwpeh9npljexp7ilq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5djwpeh9npljexp7ilq8.png" alt="Image description" width="774" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Writing unit and integration tests is as essential as writing the code itself; ignoring tests means ignoring the quality of the code I am writing. The repo contains more unit and integration tests you can check.&lt;br&gt;
Happy testing :D&lt;/p&gt;

</description>
      <category>pytest</category>
      <category>unittest</category>
      <category>integrationtest</category>
      <category>fastapi</category>
    </item>
    <item>
      <title>Implementing Dependency Injection in Python Flask Using Dependency Injector</title>
      <dc:creator>Mostafa Dekmak</dc:creator>
      <pubDate>Mon, 01 Jul 2024 12:05:54 +0000</pubDate>
      <link>https://dev.to/dkmostafa/implementing-dependency-injection-in-python-flask-using-dependency-injector-4j7i</link>
      <guid>https://dev.to/dkmostafa/implementing-dependency-injection-in-python-flask-using-dependency-injector-4j7i</guid>
      <description>&lt;p&gt;github project link : &lt;a href="https://github.com/dkmostafa/python-flask-dependency-injector-sample"&gt;https://github.com/dkmostafa/python-flask-dependency-injector-sample&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Objective :
&lt;/h2&gt;

&lt;p&gt;The objective is to apply the dependency injection pattern to our Python Flask application using the dependency_injector package.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation :
&lt;/h2&gt;

&lt;p&gt;Begin by installing the dependency-injector package, which is listed in the requirements.txt file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating our Application Container :
&lt;/h2&gt;

&lt;p&gt;To begin, we create an application container to manage our required services, data repositories, and database configuration. Below is a sample code snippet:&lt;/p&gt;

&lt;p&gt;employee_container.py&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from dependency_injector import containers, providers
from ..db.database import Database
from ..db.repositories.employee_repository import EmployeeRepository
from .controllers.employee_controller import EmployeeController
from .services.employee_service import EmployeeService

class EmployeeRepositories(containers.DeclarativeContainer):
    db_url = "sqlite:///employees.db"
    db = providers.Singleton(Database, db_url=db_url)
    db_session = db.provided.session
    employee_repository = providers.Factory(EmployeeRepository, session_factory=db_session)

class EmployeeServices(containers.DeclarativeContainer):
    employee_service = providers.Factory(EmployeeService, employee_repository=EmployeeRepositories.employee_repository)

class EmployeeControllers(containers.DeclarativeContainer):
    employee_controller = providers.Factory(EmployeeController, employee_service=EmployeeServices.employee_service)

class EmployeeContainer(containers.DeclarativeContainer):
    repositories = providers.Container(EmployeeRepositories)
    services = providers.Container(EmployeeServices)
    controllers = providers.Container(EmployeeControllers)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code organizes our application's components into containers (EmployeeRepositories, EmployeeServices, EmployeeControllers, and EmployeeContainer). It sets up dependencies such as database connections (Database) and services (EmployeeService) using the dependency_injector framework in Python Flask.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wiring the created container to our application with the routes file:
&lt;/h3&gt;

&lt;p&gt;src/app.py&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from flask import Flask
from src.modules.employee import employee_routes
from src.modules.employee.employee_container import EmployeeContainer

app = Flask(__name__)

app.register_blueprint(employee_routes.employee_blueprint)

employee_container = EmployeeContainer()

employee_container.wire(
    modules=[employee_routes.__name__]
)

if __name__ == "__main__":
    app.run(debug=True)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After wiring our container to our app and creating the routes, we can start injecting the created services and repositories into our application as follows:&lt;/p&gt;

&lt;p&gt;Injecting the controller and using it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;employee_controller: EmployeeController = Provide[EmployeeContainer.controllers.employee_controller]

employee_controller.create_employee(body)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Injecting EmployeeService into EmployeeController :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class EmployeeController:

    def __init__(self, employee_service: EmployeeService):
        self.employee_service = employee_service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Testing when using dependency injection :
&lt;/h3&gt;

&lt;p&gt;Using dependency injection simplifies and enhances the manageability of testing and mocking each service in our application.&lt;/p&gt;

&lt;p&gt;With dependency injection, our services and controllers receive their dependencies through constructors or method parameters, making it straightforward to replace real dependencies with mocks or stubs during testing. This approach improves test isolation and ensures that tests focus solely on the behavior of the unit under test.&lt;/p&gt;
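
&lt;p&gt;Constructor injection itself needs no framework; the pattern that the container automates can be sketched in a few lines (the class bodies here are illustrative stand-ins, not the repository's code):&lt;/p&gt;

```python
class EmployeeRepository:
    # Stand-in for the real SQLAlchemy-backed repository.
    def get_all_employees(self):
        return [{"id": 1, "name": "test"}]

class EmployeeService:
    # The dependency arrives through the constructor, so a test
    # can pass a stub or MagicMock instead of the real repository.
    def __init__(self, employee_repository):
        self.employee_repository = employee_repository

    def get_all_employees(self):
        return self.employee_repository.get_all_employees()

service = EmployeeService(EmployeeRepository())
employees = service.get_all_employees()
```

&lt;p&gt;The container's job is simply to perform this wiring for us, consistently, across the whole application.&lt;/p&gt;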

&lt;p&gt;Here’s an example of how testing might look with dependency injection in a Python Flask application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pytest
from unittest.mock import MagicMock
from ..employee_service import EmployeeService
from ...controllers.dtos.employee_controller_dto import CreateEmployeeDTO


@pytest.fixture
def employee_service():
    mock_employee_repository = MagicMock()

    mock_employee_repository.save.return_value = {
        "id": 1,
        "name": "test",
        "username": "test",
        "email": "test",
    }
    mock_employee_repository.get_all_employees.return_value = [
        {
            "email": "test",
            "id": 1,
            "name": "test",
            "username": "test"
        }
    ]

    employee_service = EmployeeService(mock_employee_repository)

    return employee_service


def test_create_employee(employee_service):
    create_employee_mock_input = CreateEmployeeDTO(
        name="test",
        username="test",
        email="test",
    )
    res = employee_service.create_employee(create_employee_mock_input)

    assert res == {
        "id": 1,
        "name": "test",
        "username": "test",
        "email": "test",
    }


def test_get_employees(employee_service):
    res = employee_service.get_all_employees()
    assert res == [
        {
            "email": "test",
            "id": 1,
            "name": "test",
            "username": "test"
        }
    ]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
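&lt;p&gt;The same idea extends to overriding dependencies at the container level: instead of constructing the service by hand, a test can swap the repository provider for a mock and let the container do the wiring. The snippet below is a minimal, library-free sketch of that pattern (the container class and all names are hypothetical, not the dependency-injector API):&lt;/p&gt;

```python
from unittest.mock import MagicMock

class SimpleContainer:
    """Tiny stand-in for a DI container: providers are factories looked up by name."""

    def __init__(self):
        self._providers = {}

    def register(self, name, factory):
        self._providers[name] = factory

    def override(self, name, instance):
        # Swap the real factory for one that returns the test double.
        self._providers[name] = lambda: instance

    def resolve(self, name):
        return self._providers[name]()

class EmployeeService:
    def __init__(self, repository):
        self.repository = repository

    def create_employee(self, dto):
        return self.repository.save(dto)

container = SimpleContainer()
container.register("employee_repository", MagicMock)  # real repo in production

# In a test: override the provider with a pre-configured mock.
mock_repo = MagicMock()
mock_repo.save.return_value = {"id": 1, "name": "test"}
container.override("employee_repository", mock_repo)

service = EmployeeService(container.resolve("employee_repository"))
print(service.create_employee({"name": "test"}))  # {'id': 1, 'name': 'test'}
```

&lt;p&gt;The real dependency-injector library supports overriding providers in the same spirit, so tests can replace any dependency without touching the production wiring.&lt;/p&gt;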



</description>
      <category>python</category>
      <category>dependencyinversion</category>
      <category>flask</category>
      <category>dependencyinjector</category>
    </item>
    <item>
      <title>A Guide to Deploying Nest.js Applications with AWS CodePipeline and ECS Fargate</title>
      <dc:creator>Mostafa Dekmak</dc:creator>
      <pubDate>Fri, 05 Apr 2024 02:09:22 +0000</pubDate>
      <link>https://dev.to/dkmostafa/a-guide-to-deploying-nestjs-applications-with-aws-codepipeline-and-ecs-fargate-20fe</link>
      <guid>https://dev.to/dkmostafa/a-guide-to-deploying-nestjs-applications-with-aws-codepipeline-and-ecs-fargate-20fe</guid>
      <description>&lt;p&gt;Github Link : &lt;a href="https://github.com/dkmostafa/dev-samples"&gt;https://github.com/dkmostafa/dev-samples&lt;/a&gt;&lt;br&gt;
AWS-CDK Infrastructure Branch : &lt;a href="https://github.com/dkmostafa/dev-samples/tree/infra"&gt;https://github.com/dkmostafa/dev-samples/tree/infra&lt;/a&gt;&lt;br&gt;
NestJs Application Branch : &lt;a href="https://github.com/dkmostafa/dev-samples/tree/nestjs-application"&gt;https://github.com/dkmostafa/dev-samples/tree/nestjs-application&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Goal :
&lt;/h2&gt;

&lt;p&gt;The goal of deploying a Nest.js application using AWS CodePipeline and AWS ECS Fargate is to establish an efficient and automated deployment pipeline. This process aims to simplify the deployment workflow, ensure consistency across environments, and enhance scalability by utilizing the power of AWS services. By achieving this goal, developers can focus more on coding and less on manual deployment tasks, ultimately leading to faster delivery of updates and improved application reliability.&lt;/p&gt;
&lt;h2&gt;
  
  
  Introduction :
&lt;/h2&gt;

&lt;p&gt;In the world of modern application development, deploying applications efficiently and reliably is crucial. In this article, we will delve into the process of deploying a Nest.js application using powerful AWS services: AWS CodePipeline and AWS ECS Fargate. These tools, when combined, provide a streamlined and automated deployment pipeline that simplifies the deployment workflow and enhances scalability.&lt;/p&gt;

&lt;p&gt;AWS CodePipeline allows us to create automated release pipelines that model the steps required to release our software, from source code to production. It facilitates continuous integration and delivery (CI/CD), automating the build, test, and deployment phases.&lt;/p&gt;

&lt;p&gt;AWS ECS (Elastic Container Service) Fargate, on the other hand, offers a fully managed container orchestration service. It allows us to deploy and manage containers without having to manage the underlying infrastructure. With Fargate, we can focus on our application logic while AWS takes care of the container deployment, scaling, and monitoring.&lt;/p&gt;

&lt;p&gt;Throughout this article, we will explore how to set up these tools to deploy a Nest.js application seamlessly. We'll dive into the code, discussing the use of TypeScript and AWS CDK (Cloud Development Kit) to define our infrastructure as code. By the end, you'll have a clear understanding of how to leverage AWS CodePipeline and ECS Fargate to automate and scale your Nest.js applications effectively. Let's get started!&lt;/p&gt;
&lt;h3&gt;
  
  
  Prerequisites :
&lt;/h3&gt;

&lt;p&gt;Before we begin, make sure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS account with necessary permissions&lt;/li&gt;
&lt;li&gt;Node.js installed on your local machine&lt;/li&gt;
&lt;li&gt;Basic understanding of NestJs, AWS CDK, and CodePipeline&lt;/li&gt;
&lt;li&gt;Install AWS-CDK cli on your local machine : &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html"&gt;https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  First Step : Creating an Amazon ECR Repository with AWS CDK
&lt;/h3&gt;

&lt;p&gt;The first step in our deployment process is to create an Amazon ECR (Elastic Container Registry) repository using AWS CDK (Cloud Development Kit). This repository will serve as the container image storage for our Nest.js application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    private createEcrImage(_props:ICreateEcrImage):Repository{
        const repository: Repository = new Repository(this, _props.id, {
            imageScanOnPush: true,
            repositoryName:_props.name
        });
        return repository;
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Second Step : Setting Up AWS CodePipeline for Continuous Integration
&lt;/h3&gt;

&lt;p&gt;After creating our Amazon ECR repository, the next step in deploying our Nest.js application with AWS CodePipeline and ECS Fargate is to establish the continuous integration (CI) pipeline. This pipeline will automate the process of building and packaging our application's code whenever changes are pushed to the repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;createBuildPipeline(_props: IPipelineConfig, account: string, region: string, repo: Repository) {
    const outputSources: Artifact = new Artifact();
    const outputWebsite: Artifact = new Artifact();

    // Source Action: GitHub Source
    const sourceAction: GitHubSourceAction = new GitHubSourceAction({
        actionName: 'GitHub_Source',
        owner: _props.githubConfig.owner,
        repo: _props.githubConfig.repo,
        oauthToken: SecretValue.secretsManager(_props.githubConfig.oAuthSecretManagerName),
        output: outputSources,
        branch: _props.githubConfig.branch,
        trigger: GitHubTrigger.WEBHOOK
    });

    // CodeBuild Project for Build
    const buildProject = new PipelineProject(this, "BuildWebsite", {
        projectName: "BuildWebsite",
        buildSpec: BuildSpec.fromSourceFilename(_props.buildSpecLocation),
        environment: {
            buildImage: LinuxBuildImage.STANDARD_7_0,
            environmentVariables: {
                AWS_REGION: { value: region },
                AWS_ACCOUNT: { value: account },
                ECR_REPO: { value: repo.repositoryName },
            },
        },
    });

    // Allow CodeBuild to interact with ECR
    buildProject.addToRolePolicy(new PolicyStatement({
        resources: ["*"],
        actions: ['ecr:*'],
        effect: Effect.ALLOW
    }));

    // CodeBuild Action
    const buildAction: CodeBuildAction = new CodeBuildAction({
        actionName: "BuildWebsite",
        project: buildProject,
        input: outputSources,
        outputs: [outputWebsite],
    });

    // Define the Pipeline
    const pipeline: Pipeline = new Pipeline(this, _props.pipelineId, {
        pipelineName: _props.pipelineName,
        stages: [
            {
                stageName: "Source",
                actions: [sourceAction],
            },
            {
                stageName: "Build",
                actions: [buildAction],
            },
        ]
    });

    // Add Manual Approval Stage
    const approveStage = pipeline.addStage({ stageName: 'Approve' });
    const manualApprovalAction = new ManualApprovalAction({
        actionName: 'Approve',
    });
    approveStage.addAction(manualApprovalAction);

    return {
        pipeline: pipeline,
        output: outputSources
    };
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For our CodeBuild project, a buildspec.yml file must be defined:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com
      - REPOSITORY_URI=$AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/$ECR_REPO
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest nestjs-app/.
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
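&lt;p&gt;One line of this buildspec is worth highlighting: &lt;code&gt;IMAGE_TAG=${COMMIT_HASH:=latest}&lt;/code&gt; uses shell default-value expansion, so images are tagged with the short commit hash when CodeBuild supplies one and with &lt;code&gt;latest&lt;/code&gt; otherwise. A stand-alone sketch (with a made-up commit SHA in place of the value CodeBuild injects) shows both cases:&lt;/p&gt;

```shell
# Stand-alone sketch of the buildspec tagging logic.
# CODEBUILD_RESOLVED_SOURCE_VERSION is normally injected by CodeBuild;
# here we simulate it with a made-up commit SHA.
CODEBUILD_RESOLVED_SOURCE_VERSION="9f8e7d6c5b4a3f2e1d0c9b8a"
COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
FIRST_TAG=${COMMIT_HASH:=latest}
echo "$FIRST_TAG"    # 9f8e7d6

# When no commit hash is available (empty or unset), := falls back to "latest".
COMMIT_HASH=""
IMAGE_TAG=${COMMIT_HASH:=latest}
echo "$IMAGE_TAG"    # latest
```

&lt;p&gt;Because &lt;code&gt;:=&lt;/code&gt; also assigns the fallback to the variable, the later &lt;code&gt;docker tag&lt;/code&gt; and &lt;code&gt;docker push&lt;/code&gt; commands all see the same value.&lt;/p&gt;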



&lt;p&gt;Explanation of the Code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Method signature:&lt;/strong&gt; &lt;code&gt;createBuildPipeline(_props: IPipelineConfig, account: string, region: string, repo: Repository)&lt;/code&gt; takes the pipeline configuration (&lt;code&gt;_props&lt;/code&gt;), the AWS account ID, the AWS region, and the ECR repository as arguments.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Artifact definition:&lt;/strong&gt; Artifacts represent the inputs and outputs of the pipeline stages. Here we create two: &lt;code&gt;outputSources&lt;/code&gt; for the source code and &lt;code&gt;outputWebsite&lt;/code&gt; for the built application.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Source action:&lt;/strong&gt; A &lt;code&gt;GitHubSourceAction&lt;/code&gt; fetches the source code from GitHub. &lt;code&gt;owner&lt;/code&gt;, &lt;code&gt;repo&lt;/code&gt;, and &lt;code&gt;oauthToken&lt;/code&gt; identify the repository; &lt;code&gt;output&lt;/code&gt; is the artifact where the fetched source is stored; &lt;code&gt;branch&lt;/code&gt; is the branch to monitor for changes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CodeBuild project:&lt;/strong&gt; A &lt;code&gt;PipelineProject&lt;/code&gt; named "BuildWebsite" builds our Nest.js application. &lt;code&gt;buildSpec&lt;/code&gt; loads the build specification from a file, &lt;code&gt;buildImage&lt;/code&gt; is the Docker image used for the build process, and &lt;code&gt;environmentVariables&lt;/code&gt; supplies the AWS region, account ID, and ECR repository name.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CodeBuild permissions:&lt;/strong&gt; A policy added to the CodeBuild project grants it the ECR permissions it needs to push the built Docker image to the repository.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CodeBuild action:&lt;/strong&gt; A &lt;code&gt;CodeBuildAction&lt;/code&gt; named "BuildWebsite" triggers the build process defined in &lt;code&gt;buildProject&lt;/code&gt;, taking &lt;code&gt;outputSources&lt;/code&gt; as input and producing &lt;code&gt;outputWebsite&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pipeline definition:&lt;/strong&gt; The &lt;code&gt;Pipeline&lt;/code&gt; is defined with two stages: "Source" fetches the source code from GitHub, and "Build" executes the &lt;code&gt;buildProject&lt;/code&gt; to build the Nest.js application.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Manual approval stage:&lt;/strong&gt; An "Approve" stage requires manual approval before proceeding, allowing human intervention to ensure control over the deployment process.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Return:&lt;/strong&gt; Finally, we return an object containing the created pipeline and the &lt;code&gt;outputSources&lt;/code&gt; artifact.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By executing this code, we create an AWS CodePipeline that fetches the source code from a GitHub repository, builds the Nest.js application using CodeBuild, and produces an artifact. The pipeline also includes a manual approval stage for added control over the deployment process.&lt;/p&gt;

&lt;p&gt;Now that our CodePipeline is set up, we can move forward to the next step of deploying our Nest.js application to AWS ECS Fargate. Stay tuned for the continuation of our deployment journey!&lt;/p&gt;
&lt;h3&gt;
  
  
  Third Step : Setting Up AWS ECS Cluster with Fargate Service
&lt;/h3&gt;

&lt;p&gt;Now that we have our CodePipeline ready to build our Nest.js application, the next step is to create an AWS ECS (Elastic Container Service) cluster along with an ECS Fargate service. This will allow us to deploy and manage our containerized application seamlessly.&lt;/p&gt;

&lt;p&gt;In this step, we define the createEcs method within our EcsApplicationConstruct class. This method handles the creation of the ECS cluster, the Fargate task definition, and the Fargate service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private createEcs(_props: ICreateEcs, _account: string, _region: string, _ecrName: string) {
    // Create Execution Role for ECS Task
    const executionRole: Role = new Role(this, _props.executionRole.id, {
        assumedBy: new ServicePrincipal("ecs-tasks.amazonaws.com"),
        roleName: _props.executionRole.name
    });

    // Define Permissions for Execution Role
    executionRole.addToPolicy(new PolicyStatement({
        resources: ["*"],
        actions: [
            "ecr:GetAuthorizationToken",
            "ecr:BatchCheckLayerAvailability",
            "ecr:GetDownloadUrlForLayer",
            "ecr:BatchGetImage",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
        ],
        effect: Effect.ALLOW
    }));

    // Define Fargate Task Definition
    const taskDefinition: FargateTaskDefinition = new FargateTaskDefinition(
        this,
        _props.taskDefinitionId,
        {
            executionRole: executionRole,
            runtimePlatform: {
                cpuArchitecture: CpuArchitecture.X86_64,
                operatingSystemFamily: OperatingSystemFamily.LINUX
            },
        },
    );

    // Add Container to Task Definition
    const container = taskDefinition.addContainer(
        _props.containerConfig.id,
        {
            image: ContainerImage.fromRegistry(_ecrName),
            containerName: _props.containerConfig.name,
            essential: true,
            portMappings: [
                {
                    containerPort: 8080,
                    protocol: Protocol.TCP
                },
            ],
            logging: new AwsLogDriver({
                streamPrefix: `${_props.containerConfig.name}-ecs-logs`
            })
        }
    );

    // Create VPC for ECS Cluster
    const vpc = new Vpc(this, `${_props.containerConfig.name}-vpc`, {});

    // Create ECS Cluster
    const cluster: Cluster = new Cluster(this, `${_props.containerConfig.name}-cluster`, {
        clusterName: _props.clusterName,
        vpc
    });

    // Create Application Load Balanced Fargate Service
    const applicationLoadBalancerFargateService: ApplicationLoadBalancedFargateService = new ApplicationLoadBalancedFargateService(
        this,
        `${_props.containerConfig.name}-service`,
        {
            serviceName: `${_props.containerConfig.name}-service`,
            cluster: cluster, // Required
            cpu: 256, // Default is 256
            desiredCount: 1, // Default is 1
            taskDefinition: taskDefinition,
            memoryLimitMiB: 512, // Default is 512
            publicLoadBalancer: true, // Default is false
            loadBalancerName: `${_props.containerConfig.name}-ALB`,
        },
    );

    return {
        cluster: cluster,
        service: applicationLoadBalancerFargateService
    };
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Explanation of the Code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Method signature:&lt;/strong&gt; &lt;code&gt;private createEcs(_props: ICreateEcs, _account: string, _region: string, _ecrName: string)&lt;/code&gt; takes the ECS configuration (&lt;code&gt;_props&lt;/code&gt;), the AWS account ID, the AWS region, and the ECR repository name as arguments.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Execution role for the ECS task:&lt;/strong&gt; We create a new &lt;code&gt;Role&lt;/code&gt; named &lt;code&gt;executionRole&lt;/code&gt;. &lt;code&gt;assumedBy&lt;/code&gt; allows the ECS tasks service to assume the role, and &lt;code&gt;addToPolicy&lt;/code&gt; grants the permissions the tasks need to interact with ECR and CloudWatch Logs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fargate task definition:&lt;/strong&gt; A &lt;code&gt;FargateTaskDefinition&lt;/code&gt; specifies the properties of our ECS task: &lt;code&gt;executionRole&lt;/code&gt; sets the execution role, and &lt;code&gt;runtimePlatform&lt;/code&gt; sets the CPU architecture and operating system family.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Container definition:&lt;/strong&gt; We add a container to the task definition. &lt;code&gt;image&lt;/code&gt; points to the Docker image of our Nest.js application in the ECR repository, &lt;code&gt;containerName&lt;/code&gt; names the container, and &lt;code&gt;portMappings&lt;/code&gt; exposes the container port for communication.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VPC:&lt;/strong&gt; A new Amazon VPC (Virtual Private Cloud) provides an isolated network environment for our ECS resources.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ECS cluster:&lt;/strong&gt; We create the cluster inside the VPC; &lt;code&gt;clusterName&lt;/code&gt; names the cluster and &lt;code&gt;vpc&lt;/code&gt; places it in the VPC we just created.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Application Load Balanced Fargate service:&lt;/strong&gt; &lt;code&gt;serviceName&lt;/code&gt; names the service, &lt;code&gt;cluster&lt;/code&gt; selects the cluster to deploy into, &lt;code&gt;cpu&lt;/code&gt; and &lt;code&gt;memoryLimitMiB&lt;/code&gt; size the Fargate tasks, &lt;code&gt;desiredCount&lt;/code&gt; sets how many tasks to run, &lt;code&gt;taskDefinition&lt;/code&gt; selects the task definition, &lt;code&gt;publicLoadBalancer&lt;/code&gt; makes the load balancer internet-facing, and &lt;code&gt;loadBalancerName&lt;/code&gt; names it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Return:&lt;/strong&gt; Finally, we return an object containing the created cluster and service for further use in our application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By executing this code, we create an AWS ECS cluster and an ECS Fargate service for our Nest.js application. The Fargate service runs our containerized application, ensuring scalability and high availability, while an Application Load Balancer (ALB) distributes incoming traffic across the Fargate tasks.&lt;/p&gt;

&lt;p&gt;With the ECS cluster and Fargate service in place, we now have a robust and scalable infrastructure to deploy our Nest.js application. Our continuous integration pipeline, combined with ECS Fargate, offers a seamless deployment process that automates building, testing, and deploying our application updates.&lt;/p&gt;

&lt;p&gt;We are now ready to witness the full power of AWS CodePipeline and ECS Fargate as we deploy our Nest.js application effortlessly. Let's proceed to the final steps of our deployment journey!&lt;/p&gt;
&lt;h3&gt;
  
  
  Fourth Step : Automating Service Updates with AWS CodePipeline :
&lt;/h3&gt;

&lt;p&gt;In the final step of our deployment process, we will automate the updating of our ECS Fargate service whenever a new version of our Nest.js application is built and ready for deployment. This automation will ensure seamless updates to our running service without manual intervention.&lt;/p&gt;

&lt;p&gt;In this step, we define the attachDeployAction method within our EcsApplicationConstruct class. This method is responsible for creating a CodeBuild project to update the ECS Fargate service with the latest version of our container image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;attachDeployAction(pipeline: Pipeline, buildOutput: Artifact, cluster: Cluster, service: ApplicationLoadBalancedFargateService) {
    // Create CodeBuild Project for Updating Service
    const updateTaskDefinition = new PipelineProject(this, `UpdateServiceProject`, {
        buildSpec: BuildSpec.fromObject({
            version: '0.2',
            phases: {
                build: {
                    commands: [
                        `aws ecs update-service --cluster ${cluster.clusterName} --service ${service.service.serviceArn} --force-new-deployment`
                    ],
                },
            },
        }),
    });

    // Add Permissions to Update Task Definition
    updateTaskDefinition.addToRolePolicy(new PolicyStatement({
        resources: ["*"],
        actions: ['ecs:*'],
        effect: Effect.ALLOW
    }));

    // Add Stage to CodePipeline for Updating Service
    pipeline.addStage({
        stageName: "UpdateService",
        actions: [new CodeBuildAction({
            actionName: 'UpdateService',
            project: updateTaskDefinition,
            input: buildOutput,
        })]
    });
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Explanation of the Code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Method signature:&lt;/strong&gt; &lt;code&gt;attachDeployAction(pipeline: Pipeline, buildOutput: Artifact, cluster: Cluster, service: ApplicationLoadBalancedFargateService)&lt;/code&gt; takes the CodePipeline, the build output artifact, the ECS cluster, and the service as arguments.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CodeBuild project for updating the service:&lt;/strong&gt; A new &lt;code&gt;PipelineProject&lt;/code&gt; named &lt;code&gt;updateTaskDefinition&lt;/code&gt; defines its &lt;code&gt;buildSpec&lt;/code&gt; inline as a single command that updates the ECS service with the new container image; the &lt;code&gt;--force-new-deployment&lt;/code&gt; flag ensures the service picks up the latest changes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Permissions:&lt;/strong&gt; A policy added to &lt;code&gt;updateTaskDefinition&lt;/code&gt; grants the ECS permissions needed to update the service.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pipeline stage:&lt;/strong&gt; A new "UpdateService" stage triggers &lt;code&gt;updateTaskDefinition&lt;/code&gt;, using the &lt;code&gt;buildOutput&lt;/code&gt; artifact from the previous stage as input.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Return:&lt;/strong&gt; There is no explicit return value; the changes are applied directly to the provided pipeline.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment automation in action:&lt;/strong&gt; With this setup, whenever a new version of our Nest.js application is built and stored in the Amazon ECR repository, the CodePipeline automatically triggers the "UpdateService" stage. This stage executes &lt;code&gt;updateTaskDefinition&lt;/code&gt;, which in turn updates the ECS Fargate service with the latest container image.&lt;/p&gt;

&lt;p&gt;This automation ensures that our ECS Fargate service is always running the latest version of our Nest.js application without manual intervention. Any updates to our codebase will flow seamlessly from the development environment to the production environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion :
&lt;/h3&gt;

&lt;p&gt;Congratulations! We have successfully set up a complete deployment pipeline for our Nest.js application using AWS CodePipeline and ECS Fargate. From automatically building our application, storing it in Amazon ECR, deploying it to an ECS Fargate service, to updating the running service with new versions, we have covered the entire deployment lifecycle.&lt;/p&gt;

&lt;p&gt;By leveraging these powerful AWS services and infrastructure as code with AWS CDK, we have established an efficient, scalable, and automated deployment process for our Nest.js application. This not only saves time but also ensures consistency and reliability in our deployments.&lt;/p&gt;

&lt;p&gt;With our deployment pipeline in place, we can focus more on developing and improving our Nest.js application, knowing that the deployment process is taken care of. We hope this article has been informative and helpful in your journey to deploying applications on AWS. Happy coding!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>nestjs</category>
      <category>ecs</category>
      <category>awscdk</category>
    </item>
    <item>
      <title>Effortless Deployment: Next.js Static WebApp on S3 &amp; CloudFront with AWS CDK and CodePipeline Tutorial</title>
      <dc:creator>Mostafa Dekmak</dc:creator>
      <pubDate>Sat, 23 Mar 2024 00:23:58 +0000</pubDate>
      <link>https://dev.to/dkmostafa/effortless-deployment-nextjs-static-webapp-on-s3-cloudfront-with-aws-cdk-and-codepipeline-tutorial-37i1</link>
      <guid>https://dev.to/dkmostafa/effortless-deployment-nextjs-static-webapp-on-s3-cloudfront-with-aws-cdk-and-codepipeline-tutorial-37i1</guid>
      <description>&lt;h3&gt;
  
  
  Github Link : &lt;a href="https://github.com/dkmostafa/dev-samples"&gt;https://github.com/dkmostafa/dev-samples&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;AWS-CDK Infrastructure Branch : &lt;a href="https://github.com/dkmostafa/dev-samples/tree/infra"&gt;https://github.com/dkmostafa/dev-samples/tree/infra&lt;/a&gt;&lt;br&gt;
NextJs Application Branch : &lt;a href="https://github.com/dkmostafa/dev-samples/tree/next-js-static-branch"&gt;https://github.com/dkmostafa/dev-samples/tree/next-js-static-branch&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Goal :
&lt;/h2&gt;

&lt;p&gt;The objective of this article is to employ CI/CD to deploy a Next.js Single Page Application to an S3 bucket and host it via CloudFront.&lt;/p&gt;
&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Modern web development often requires not just creating beautiful and functional applications but also ensuring they are deployed efficiently, securely, and with high performance. In this tutorial, we'll delve into the world of serverless architecture and infrastructure as code (IaC) with AWS CDK. We'll explore how to seamlessly deploy a Next.js static web application to an S3 bucket and serve it through CloudFront, Amazon's Content Delivery Network (CDN).&lt;/p&gt;

&lt;p&gt;AWS CDK (Cloud Development Kit) provides a powerful way to define cloud infrastructure using familiar programming languages such as TypeScript or Python. Combined with AWS CodePipeline, a fully managed continuous integration and continuous delivery (CI/CD) service, we can automate the deployment process, making it easy to update and scale our web applications.&lt;/p&gt;

&lt;p&gt;Whether you're new to serverless architecture or looking to optimize your Next.js app deployment, this guide will walk you through each step, from setting up your project to automating the deployment pipeline. By the end, you'll have a robust, performant, and easily scalable Next.js web application running on AWS infrastructure.&lt;/p&gt;

&lt;p&gt;Let's dive in!&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we begin, make sure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS account with necessary permissions&lt;/li&gt;
&lt;li&gt;Node.js installed on your local machine&lt;/li&gt;
&lt;li&gt;Basic understanding of Next.js, AWS CDK, and CodePipeline&lt;/li&gt;
&lt;li&gt;Install AWS-CDK cli on your local machine : &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html"&gt;https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  First Step : Setting Up S3 Bucket and CloudFront Distribution with AWS CDK:
&lt;/h2&gt;

&lt;p&gt;In this first step, we'll use AWS CDK to set up an S3 bucket and a CloudFront distribution to host our Next.js static web application. This lays the foundation for serving our app with high availability and low latency through Amazon's Content Delivery Network (CDN).&lt;/p&gt;

&lt;p&gt;We'll be using a custom construct, S3CloudFrontStaticWebHostingConstruct, to simplify the creation of these resources. This construct encapsulates the logic for creating an S3 bucket with specific configurations and a CloudFront distribution that points to this bucket.&lt;/p&gt;

&lt;p&gt;Here's the TypeScript code for the construct:&lt;/p&gt;

&lt;p&gt;s3CloudFrontStaticWebHosting.construct.ts&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Construct } from "constructs";
import { BlockPublicAccess, Bucket } from "aws-cdk-lib/aws-s3";
import { Duration, RemovalPolicy } from "aws-cdk-lib";
import { AllowedMethods, Distribution, SecurityPolicyProtocol, ViewerProtocolPolicy } from "aws-cdk-lib/aws-cloudfront";
import { S3Origin } from "aws-cdk-lib/aws-cloudfront-origins";

interface IS3BucketConfig {
    bucketId: string,
    bucketName: string,
}

interface ICloudFrontDistribution {
    cloudFrontId: string,
}

export interface IS3CloudFrontStaticWebHostingConstructProps {
    s3BucketConfig: IS3BucketConfig,
    cloudFrontDistribution: ICloudFrontDistribution
}

export class S3CloudFrontStaticWebHostingConstruct extends Construct {
    constructor(scope: Construct, id: string, _props: IS3CloudFrontStaticWebHostingConstructProps) {
        super(scope, id);

        const bucket = this.createS3Bucket(_props.s3BucketConfig);
        const cloudFrontDistribution: Distribution = this.createCloudFrontDistribution(_props.cloudFrontDistribution, bucket);
    }

    private createS3Bucket(_props: IS3BucketConfig) {
        const bucket: Bucket = new Bucket(this, _props.bucketId, {
            bucketName: _props.bucketName,
            blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
            publicReadAccess: false,
            removalPolicy: RemovalPolicy.DESTROY,
        });

        return bucket;
    }

    private createCloudFrontDistribution(_props: ICloudFrontDistribution, s3Bucket: Bucket) {
        const distribution = new Distribution(this, _props.cloudFrontId, {
            defaultBehavior: {
                allowedMethods: AllowedMethods.ALLOW_GET_HEAD_OPTIONS,
                compress: true,
                origin: new S3Origin(s3Bucket),
                viewerProtocolPolicy: ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
            },
            defaultRootObject: "index.html",
            errorResponses: [
                {
                    httpStatus: 403,
                    responseHttpStatus: 403,
                    responsePagePath: "/error.html",
                    ttl: Duration.minutes(30),
                },
            ],
            minimumProtocolVersion: SecurityPolicyProtocol.TLS_V1_2_2019,
        });

        return distribution;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Explanation:
&lt;/h3&gt;

&lt;p&gt;We define an interface IS3BucketConfig to specify the properties needed for our S3 bucket, such as bucketId and bucketName.&lt;br&gt;
Similarly, the ICloudFrontDistribution interface defines the properties required for the CloudFront distribution, such as cloudFrontId.&lt;br&gt;
The IS3CloudFrontStaticWebHostingConstructProps interface combines these configurations into a single object that we'll pass to our construct.&lt;br&gt;
Inside the S3CloudFrontStaticWebHostingConstruct constructor, we create an S3 bucket using the createS3Bucket method, which configures the bucket with properties like blockPublicAccess and removalPolicy.&lt;br&gt;
We then create a CloudFront distribution using the createCloudFrontDistribution method. This distribution specifies the behavior for requests, default root object, error responses, and minimum TLS protocol version.&lt;br&gt;
This construct simplifies the process of creating an S3 bucket and CloudFront distribution with the necessary configurations for hosting a Next.js static web application. In the next steps, we'll integrate this construct into our CDK stack and continue building our deployment pipeline.&lt;/p&gt;

&lt;p&gt;Stay tuned for the next step where we'll explore integrating this construct into our AWS CDK stack!&lt;/p&gt;
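&lt;p&gt;As a sketch of how this construct might be consumed, here is a hypothetical CDK app that instantiates it. The stack class, construct id, bucket name, and import path below are illustrative assumptions, not part of the article's code:&lt;/p&gt;

```typescript
// Hypothetical wiring: consuming the construct from a CDK stack.
// Stack name, ids, bucket name, and import path are illustrative.
import { App, Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import { S3CloudFrontStaticWebHostingConstruct } from "./s3CloudFrontStaticWebHosting.construct";

class StaticWebHostingStack extends Stack {
    constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        new S3CloudFrontStaticWebHostingConstruct(this, "StaticWebHosting", {
            s3BucketConfig: {
                bucketId: "WebsiteBucket",
                bucketName: "my-nextjs-static-site", // S3 bucket names must be globally unique
            },
            cloudFrontDistribution: {
                cloudFrontId: "WebsiteDistribution",
            },
        });
    }
}

const app = new App();
new StaticWebHostingStack(app, "StaticWebHostingStack");
```

Running cdk synth against a stack like this would emit the bucket and distribution as a CloudFormation template.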
&lt;h2&gt;
  
  
  Second Step: Establishing the CodePipeline with AWS CDK for Deploying Our Next.js Application:
&lt;/h2&gt;

&lt;p&gt;In this stage, we'll construct an AWS CodePipeline to retrieve our source code from GitHub, build it using our buildspec.yml file, and then deploy it to an S3 bucket. Following the deployment to the S3 bucket, we'll proceed to invalidate the CloudFront cache to ensure that the latest updates to our application are reflected.&lt;/p&gt;

&lt;p&gt;Code:&lt;/p&gt;

&lt;p&gt;s3CloudFrontStaticWebHosting.construct.ts&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; private buildingS3BucketPipeline(_props:IPipelineConfig,webSiteS3Bucket:Bucket,cloudFrontDistribution:Distribution):Pipeline {

        const outputSources: Artifact = new Artifact();
        const outputWebsite: Artifact = new Artifact();

        const sourceAction: GitHubSourceAction = new GitHubSourceAction({
            actionName: 'GitHub_Source',
            owner: _props.githubConfig.owner,
            repo: _props.githubConfig.repo,
            oauthToken: SecretValue.secretsManager(_props.githubConfig.oAuthSecretManagerName),
            output: outputSources,
            branch: _props.githubConfig.branch,
            trigger: GitHubTrigger.WEBHOOK
        });
        const buildAction: CodeBuildAction = new CodeBuildAction({
            actionName: "BuildWebsite",
            project: new PipelineProject(this, "BuildWebsite", {
                projectName: "BuildWebsite",
                buildSpec: BuildSpec.fromSourceFilename(_props.buildSpecLocation),
                environment: {
                    buildImage: LinuxBuildImage.STANDARD_7_0
                }
            }),
            input: outputSources,
            outputs: [outputWebsite],
        });
        const deploymentAction: S3DeployAction = new S3DeployAction({
            actionName: "S3WebDeploy",
            input: outputWebsite,
            bucket: webSiteS3Bucket,
            runOrder: 1,
        });
        const invalidateBuildProject = new PipelineProject(this, `InvalidateProject`, {
            buildSpec: BuildSpec.fromObject({
                version: '0.2',
                phases: {
                    build: {
                        commands:[
                            'aws cloudfront create-invalidation --distribution-id ${CLOUDFRONT_ID} --paths "/*"',
                        ],
                    },
                },
            }),
            environmentVariables: {
                CLOUDFRONT_ID: { value: cloudFrontDistribution.distributionId },
            },
        });
        const distributionArn = `arn:aws:cloudfront::${_props.account}:distribution/${cloudFrontDistribution.distributionId}`;
        invalidateBuildProject.addToRolePolicy(new PolicyStatement({
            resources: [distributionArn],
            actions: [
                'cloudfront:CreateInvalidation',
            ],
        }));
        const invalidateCloudFrontAction: CodeBuildAction = new CodeBuildAction({
            actionName: 'InvalidateCache',
            project: invalidateBuildProject,
            input: outputWebsite,
            runOrder: 2,
        });

        const pipeline: Pipeline = new Pipeline(this, _props.pipelineId, {
            pipelineName: _props.pipelineName,
            stages: [
                {
                    stageName: "Source",
                    actions: [sourceAction],
                },
                {
                    stageName: "Build",
                    actions: [buildAction],
                },
                {
                    stageName: "S3Deploy",
                    actions: [deploymentAction, invalidateCloudFrontAction],
                },
            ],
        });

        return pipeline;
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Explanation:
&lt;/h3&gt;

&lt;p&gt;This code defines a method, buildingS3BucketPipeline, that constructs an AWS CodePipeline for deploying a Next.js application to an S3 bucket and then invalidating the CloudFront cache so the latest updates are served. Here's a breakdown of the key components:&lt;/p&gt;

&lt;p&gt;Artifacts:&lt;/p&gt;

&lt;p&gt;Two artifacts are created:&lt;br&gt;
outputSources: holds the source code retrieved from GitHub.&lt;br&gt;
outputWebsite: stores the built website files.&lt;/p&gt;

&lt;p&gt;GitHub Source Action:&lt;/p&gt;

&lt;p&gt;A GitHub source action (sourceAction) is defined using the GitHubSourceAction class.&lt;br&gt;
It fetches the source code from a GitHub repository.&lt;br&gt;
Parameters include the repository owner, repository name, OAuth token for authentication, output artifact (outputSources), branch to monitor for changes, and the trigger type (GitHubTrigger.WEBHOOK).&lt;/p&gt;

&lt;p&gt;Build Action with CodeBuild:&lt;/p&gt;

&lt;p&gt;A build action (buildAction) is created using the CodeBuildAction class.&lt;br&gt;
It builds the application using an AWS CodeBuild project.&lt;br&gt;
Parameters include the action name, the CodeBuild project (defined inline with PipelineProject), the input artifact (outputSources from the GitHub source action), and the output artifact (outputWebsite, which stores the built files).&lt;br&gt;
The CodeBuild project is configured with a build specification file (_props.buildSpecLocation) and a build environment (LinuxBuildImage.STANDARD_7_0).&lt;/p&gt;

&lt;p&gt;Deployment Action to S3:&lt;/p&gt;

&lt;p&gt;A deployment action (deploymentAction) is defined using the S3DeployAction class.&lt;br&gt;
It deploys the built website to the S3 bucket (webSiteS3Bucket).&lt;br&gt;
Parameters include the action name, the input artifact (outputWebsite), the target S3 bucket, and the run order.&lt;/p&gt;

&lt;p&gt;CloudFront Cache Invalidation:&lt;/p&gt;

&lt;p&gt;A CodeBuild project (invalidateBuildProject) is created to invalidate the CloudFront cache.&lt;br&gt;
Its build phase runs the aws cloudfront create-invalidation command, which invalidates all objects in the CloudFront distribution (${CLOUDFRONT_ID} holds the distribution ID).&lt;br&gt;
The project is configured with an environment variable (CLOUDFRONT_ID) set to cloudFrontDistribution.distributionId.&lt;/p&gt;

&lt;p&gt;IAM Policy for Cache Invalidation:&lt;/p&gt;

&lt;p&gt;An IAM policy statement is attached to the CodeBuild project's role so it is allowed to create invalidations.&lt;br&gt;
The statement grants the cloudfront:CreateInvalidation action on the specific CloudFront distribution (distributionArn).&lt;/p&gt;

&lt;p&gt;CloudFront Invalidation Action:&lt;/p&gt;

&lt;p&gt;Another CodeBuild action (invalidateCloudFrontAction) executes the cache invalidation.&lt;br&gt;
It uses invalidateBuildProject to run the invalidation commands.&lt;br&gt;
Its input artifact is outputWebsite, and its run order of 2 ensures it runs after the deployment action.&lt;/p&gt;

&lt;p&gt;Creating the CodePipeline:&lt;/p&gt;

&lt;p&gt;Finally, the method constructs the pipeline using the Pipeline class, with three stages:&lt;br&gt;
Source Stage: the sourceAction fetches code from GitHub.&lt;br&gt;
Build Stage: the buildAction builds the application.&lt;br&gt;
S3Deploy Stage: the deploymentAction deploys to S3, then the invalidateCloudFrontAction invalidates the CloudFront cache.&lt;/p&gt;

&lt;p&gt;This setup automates the path from a source change in GitHub to a live, updated website hosted on Amazon S3 behind the CloudFront CDN, handling build, deployment, and cache invalidation in a single pipeline.&lt;/p&gt;
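&lt;p&gt;The ARN built for the IAM policy follows CloudFront's global ARN format, in which the region segment is left empty. As a small illustrative sketch (the helper name and example account/distribution IDs below are hypothetical), the string construction can be isolated like this:&lt;/p&gt;

```typescript
// Hypothetical helper mirroring the distributionArn string built in the pipeline code.
// CloudFront is a global service, so the region segment of the ARN stays empty ("::").
function cloudFrontDistributionArn(account: string, distributionId: string): string {
    return `arn:aws:cloudfront::${account}:distribution/${distributionId}`;
}

console.log(cloudFrontDistributionArn("123456789012", "EDFDVBD6EXAMPLE"));
// → arn:aws:cloudfront::123456789012:distribution/EDFDVBD6EXAMPLE
```

Scoping the policy to this exact ARN, rather than using a wildcard resource, keeps the CodeBuild role limited to invalidating only this distribution.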
&lt;h2&gt;
  
  
  Last Step: Setting Up the Next.js Application:
&lt;/h2&gt;

&lt;p&gt;Given that we're utilizing our Next.js Application as a Single Page Application (SPA), it's crucial to adhere to the following guidelines:&lt;/p&gt;

&lt;p&gt;1 - We'll avoid incorporating any server-side components within our application.&lt;br&gt;
2 - We'll add the 'use client' directive at the top of our pages.&lt;br&gt;
3 - We'll include the following configuration in our next.config.mjs file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
/** @type {import('next').NextConfig} */
const nextConfig = {
    output: 'export',
    reactStrictMode: false
};

export default nextConfig;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
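&lt;p&gt;To illustrate guideline 2, here is a hypothetical minimal page component; the file path, component name, and markup are illustrative, not from the sample app:&lt;/p&gt;

```tsx
// Hypothetical app/page.tsx — a client-only page, as guideline 2 requires.
// With output: 'export', this page is pre-rendered to static HTML at build time.
"use client";

import { useState } from "react";

export default function HomePage() {
    const [count, setCount] = useState(0);
    return (
        <main>
            <h1>Static Next.js on S3 + CloudFront</h1>
            <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>
        </main>
    );
}
```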



&lt;p&gt;4 - Add the following buildspec.yml file to our project root:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: latest
    commands:
      - echo "Node.js version $(node --version)"
      - echo Installing dependencies...
      - npm install -g next
      - npm install -g typescript
      - cd nextjs-static-webapp-sample
      - npm install
      - echo Dependencies installed

  build:
    commands:
      - echo Build started on `date`
      - echo Building the Next.js app
      - npm run build
      - echo Next.js app built successfully
artifacts:
  files:
    - '**/*'
  base-directory: 'nextjs-static-webapp-sample/out'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>aws</category>
      <category>nextjs</category>
      <category>awscdk</category>
      <category>pipeline</category>
    </item>
  </channel>
</rss>
