<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Collins Kiplimo</title>
    <description>The latest articles on DEV Community by Collins Kiplimo (@ckiplimo).</description>
    <link>https://dev.to/ckiplimo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1038856%2Fe31134bb-e2f9-47fe-a64b-2b3708f25390.jpeg</url>
      <title>DEV Community: Collins Kiplimo</title>
      <link>https://dev.to/ckiplimo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ckiplimo"/>
    <language>en</language>
    <item>
      <title>Java Stack: Unveiling LIFO Magic</title>
      <dc:creator>Collins Kiplimo</dc:creator>
      <pubDate>Tue, 15 Aug 2023 09:28:07 +0000</pubDate>
      <link>https://dev.to/ckiplimo/java-stack-unveiling-lifo-magic-17em</link>
      <guid>https://dev.to/ckiplimo/java-stack-unveiling-lifo-magic-17em</guid>
      <description>&lt;p&gt;A stack is a linear data structure that follows the principle of Last In First Out (LIFO). This means the last element inserted inside the stack is removed first.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Concepts and Principles:&lt;/em&gt;&lt;br&gt;
Visualize a stack as a vertical arrangement of elements, where you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Push:&lt;br&gt;
Add an element to the top of the stack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pop:&lt;br&gt;
Remove an element from the top of the stack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IsEmpty:&lt;br&gt;
Check if the stack is empty.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IsFull:&lt;br&gt;
Check if the stack is full.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Peek:&lt;br&gt;
View the value of the top element without removal.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Working of the Stack:&lt;/em&gt;&lt;br&gt;
In Java programming, we use a pointer called top to keep track of the top element. Upon initialization, top is set to -1 to indicate an empty stack. When an element is pushed, top is incremented, and for popping, it is decremented. These operations are carried out with checks to avoid overflow or underflow.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Stack Implementation in Java:&lt;/em&gt;&lt;br&gt;
Below is a practical implementation of a stack in Java:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
public class Stack {
    private int maxSize;
    private int[] stackArray;
    private int top;

    public Stack(int size) {
        maxSize = size;
        stackArray = new int[maxSize];
        top = -1;
    }

    public boolean isEmpty() {
        return top == -1;
    }

    public boolean isFull() {
        return top == maxSize - 1;
    }

    public void push(int value) {
        if (!isFull()) {
            stackArray[++top] = value;
            System.out.println("Pushed item: " + value);
        } else {
            System.out.println("Stack is full. Cannot push.");
        }
    }

    public int pop() {
        if (!isEmpty()) {
            return stackArray[top--];
        } else {
            System.out.println("Stack is empty. Cannot pop.");
            return -1; 
        }
    }

    public int peek() {
        if (!isEmpty()) {
            return stackArray[top];
        } else {
            System.out.println("Stack is empty. Cannot peek.");
            return -1; 
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
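&lt;p&gt;As a quick sanity check of these operations, the JDK's built-in java.util.Stack exposes the same push, pop, peek, and isEmpty behavior; a short, self-contained demo (the class name here is illustrative):&lt;/p&gt;

```java
public class JdkStackDemo {
    public static void main(String[] args) {
        // java.util.Stack provides the same LIFO operations as the array-backed Stack above
        java.util.Stack stack = new java.util.Stack();  // raw type keeps the sketch short
        stack.push(10);
        stack.push(20);
        stack.push(30);
        System.out.println(stack.peek());    // top element (30) is read but not removed
        System.out.println(stack.pop());     // removes and returns 30
        System.out.println(stack.pop());     // removes and returns 20
        System.out.println(stack.isEmpty()); // false: 10 is still on the stack
    }
}
```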



&lt;p&gt;&lt;em&gt;Time Complexity:&lt;/em&gt;&lt;br&gt;
Array-based stack operations run in constant time (O(1)) for push, pop, and peek, making them efficient and suitable for a wide range of applications.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Applications:&lt;/em&gt;&lt;br&gt;
Stacks prove their versatility across various domains:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Expression Evaluation:&lt;br&gt;
Stacks assist compilers in evaluating expressions by converting them into postfix notation for efficient processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Function Calls:&lt;br&gt;
Method calls and recursion in programming languages often utilize stacks to manage execution contexts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Browser History:&lt;br&gt;
Browsers utilize stacks to facilitate backward navigation through visited URLs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
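&lt;p&gt;To make the expression-processing idea concrete, here is a small, self-contained sketch that uses a stack to check whether the brackets in an expression are balanced (the class name and approach are illustrative):&lt;/p&gt;

```java
public class BracketChecker {
    // Returns true when every bracket closes in LIFO order, i.e. the expression is balanced
    static boolean isBalanced(String expr) {
        java.util.ArrayDeque stack = new java.util.ArrayDeque();  // used as a stack of Character
        for (char c : expr.toCharArray()) {
            if (c == '(' || c == '[' || c == '{') {
                stack.push(c);                    // opening bracket: push it
            } else if (c == ')' || c == ']' || c == '}') {
                if (stack.isEmpty()) {
                    return false;                 // closing bracket with nothing to match
                }
                char open = (Character) stack.pop();
                if (c == ')' ? open != '(' : c == ']' ? open != '[' : open != '{') {
                    return false;                 // mismatched bracket types
                }
            }
        }
        return stack.isEmpty();                   // leftovers mean unclosed brackets
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("(a[b]{c})")); // true
        System.out.println(isBalanced("(a[b)]"));    // false
    }
}
```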

</description>
      <category>datastructures</category>
      <category>algorithms</category>
      <category>java</category>
    </item>
    <item>
      <title>Optimizing API Performance: Exploring Effective Rate Limiting Algorithms</title>
      <dc:creator>Collins Kiplimo</dc:creator>
      <pubDate>Wed, 28 Jun 2023 14:28:01 +0000</pubDate>
      <link>https://dev.to/ckiplimo/optimizing-api-performance-exploring-effective-rate-limiting-algorithms-3dc1</link>
      <guid>https://dev.to/ckiplimo/optimizing-api-performance-exploring-effective-rate-limiting-algorithms-3dc1</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
In the realm of API rate limiting, choosing the right algorithm is crucial for maintaining system stability, preventing abuse, and ensuring fair resource distribution. This article delves into several popular rate limiting algorithms and highlights their advantages and disadvantages. By understanding these algorithms, developers can make informed decisions to implement effective rate limiting strategies.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Leaky Bucket: Managing Requests with a Queue&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Leaky Bucket algorithm offers a straightforward and intuitive approach to rate limiting. Incoming requests are appended to the end of a queue and processed at a regular interval in first-in, first-out (FIFO) order, so the outflow "leaks" at a constant rate. When the queue is full, additional requests are discarded. The algorithm is simple to implement, but a queue filled with older requests can delay or starve more recent ones during bursts.&lt;/p&gt;
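&lt;p&gt;A minimal, single-threaded sketch of the queue mechanics described above (class and method names are illustrative; a real limiter would drain the queue from a scheduled worker):&lt;/p&gt;

```java
public class LeakyBucketQueue {
    private final int queueCapacity;
    private final java.util.ArrayDeque queue = new java.util.ArrayDeque();  // FIFO queue of request ids

    public LeakyBucketQueue(int queueCapacity) {
        this.queueCapacity = queueCapacity;
    }

    // Append the request if the queue has room; otherwise discard it
    public boolean offer(String requestId) {
        if (queue.size() >= queueCapacity) {
            return false;
        }
        queue.add(requestId);
        return true;
    }

    // Called at a regular interval to process the oldest pending request (FIFO)
    public String drainOne() {
        return (String) queue.poll();   // null when nothing is waiting
    }
}
```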

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Token Bucket: Controlling Access with Tokens&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Token Bucket algorithm employs the concept of a bucket filled with tokens. Each incoming request requires the consumption of a token from the bucket to proceed. If no tokens are available, the request is refused, and the requester must retry later. This algorithm allows for the refreshing of tokens over a defined time period, ensuring fair distribution of resources. We examine the Token Bucket algorithm's token-based approach and its effectiveness in rate limiting scenarios.&lt;/p&gt;
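&lt;p&gt;A minimal sketch of that token accounting, assuming a single-threaded caller (the class is illustrative, not from a library; a production implementation would also need thread safety):&lt;/p&gt;

```java
public class TokenBucket {
    private final long capacity;       // maximum tokens the bucket can hold
    private final double refillPerSec; // tokens added back per second
    private double tokens;
    private long lastRefillNanos;

    public TokenBucket(long capacity, double refillPerSec) {
        this.capacity = capacity;
        this.refillPerSec = refillPerSec;
        this.tokens = capacity;        // the bucket starts full
        this.lastRefillNanos = System.nanoTime();
    }

    // Consume one token if available; otherwise refuse the request (caller retries later)
    public boolean tryAcquire() {
        refill();
        if (tokens >= 1.0) {
            tokens = tokens - 1.0;
            return true;
        }
        return false;
    }

    // Add tokens proportional to elapsed time, never exceeding capacity
    private void refill() {
        long now = System.nanoTime();
        double elapsedSec = (now - lastRefillNanos) / 1_000_000_000.0;
        tokens = Math.min(capacity, tokens + elapsedSec * refillPerSec);
        lastRefillNanos = now;
    }
}
```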

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Fixed Window: Rate Limiting within Fixed Time Windows&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Fixed Window algorithm tracks request rates within fixed time windows. Using a window size of n seconds, each incoming request increments a counter for the current window. If the counter exceeds a predefined threshold, the request is discarded. The algorithm is simple to implement, but it handles bursty traffic poorly: a burst straddling the boundary between two windows can briefly be admitted at up to twice the allowed rate.&lt;/p&gt;
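&lt;p&gt;The counter-per-window idea fits in a few lines (single-threaded sketch with illustrative names):&lt;/p&gt;

```java
public class FixedWindowLimiter {
    private final int limit;           // maximum requests allowed per window
    private final long windowMillis;   // window length in milliseconds
    private long windowStart;
    private int count;

    public FixedWindowLimiter(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
        this.windowStart = System.currentTimeMillis();
    }

    // Returns true if the request fits in the current window, false if it must be discarded
    public boolean allow() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            windowStart = now;         // a new window begins; reset the counter
            count = 0;
        }
        if (count >= limit) {
            return false;
        }
        count = count + 1;
        return true;
    }
}
```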

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Sliding Log: Dynamic Rate Limiting with Time-stamped Logs&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Sliding Log algorithm employs time-stamped logs to track and enforce rate limits. These logs are stored in a time-sorted set or table, with entries older than a threshold discarded. When a new request arrives, the algorithm counts the logs within the sliding time window to determine the current request rate. If admitting the request would exceed the threshold rate, it is rejected. The approach is precise, but it costs memory proportional to the request rate, since every recent request must be remembered.&lt;/p&gt;
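&lt;p&gt;A simplified sketch of the log-based check (the clock is passed in explicitly here to keep the sketch deterministic; names are illustrative):&lt;/p&gt;

```java
public class SlidingLogLimiter {
    private final int limit;           // maximum requests inside the sliding window
    private final long windowMillis;   // window length in milliseconds
    private final java.util.ArrayDeque timestamps = new java.util.ArrayDeque();  // time-ordered request log

    public SlidingLogLimiter(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    public boolean allow(long nowMillis) {
        // Discard log entries older than the window threshold
        while (!timestamps.isEmpty()) {
            long oldest = (Long) timestamps.peekFirst();
            if (nowMillis - oldest >= windowMillis) {
                timestamps.pollFirst();
            } else {
                break;
            }
        }
        if (timestamps.size() >= limit) {
            return false;              // rate exceeded: reject the request
        }
        timestamps.addLast(nowMillis); // record this request in the log
        return true;
    }
}
```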

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Sliding Window: Balancing Performance and Precision&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Sliding Window algorithm combines elements of both the Fixed Window and Sliding Log algorithms to achieve a balance between processing cost and precise rate limiting. Similar to the Fixed Window algorithm, it tracks a counter for each fixed window. Additionally, it accounts for the weighted value of the previous window's request rate based on the current timestamp, enabling a smoother handling of traffic bursts. We analyze the advantages and trade-offs of the Sliding Window algorithm.&lt;/p&gt;
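&lt;p&gt;A single-threaded sketch of the weighted-counter idea (the window bookkeeping here is illustrative; real implementations usually keep these counters in a shared store such as Redis):&lt;/p&gt;

```java
public class SlidingWindowLimiter {
    private final int limit;
    private final long windowMillis;
    private long currentWindowStart;
    private int currentCount;
    private int previousCount;

    public SlidingWindowLimiter(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
        this.currentWindowStart = System.currentTimeMillis();
    }

    public boolean allow() {
        long now = System.currentTimeMillis();
        long elapsed = now - currentWindowStart;
        if (elapsed >= windowMillis) {
            // Roll the window forward; the old current window becomes "previous"
            previousCount = (elapsed >= 2 * windowMillis) ? 0 : currentCount;
            currentCount = 0;
            currentWindowStart = now - (elapsed % windowMillis);
            elapsed = now - currentWindowStart;
        }
        // Weight the previous window by how much of it still overlaps the sliding window
        double previousWeight = 1.0 - (double) elapsed / windowMillis;
        double estimated = previousCount * previousWeight + currentCount;
        if (estimated >= limit) {
            return false;
        }
        currentCount = currentCount + 1;
        return true;
    }
}
```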

</description>
      <category>api</category>
      <category>algorithms</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Exploring Commonly Used Routing Algorithms for Efficient Network Traffic Distribution</title>
      <dc:creator>Collins Kiplimo</dc:creator>
      <pubDate>Wed, 28 Jun 2023 13:21:59 +0000</pubDate>
      <link>https://dev.to/ckiplimo/exploring-commonly-used-routing-algorithms-for-efficient-network-traffic-distribution-nmn</link>
      <guid>https://dev.to/ckiplimo/exploring-commonly-used-routing-algorithms-for-efficient-network-traffic-distribution-nmn</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
Routing algorithms play a crucial role in efficiently distributing network traffic across various servers in a system. By intelligently selecting the most appropriate server for each request, these algorithms help optimize performance, improve resource utilization, and ensure a seamless user experience. In this article, we will explore several commonly used routing algorithms and delve into their key characteristics and applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Round-robin Routing Algorithm:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The round-robin algorithm is a simple and widely adopted approach for load balancing. In this method, requests are distributed to application servers in a sequential manner, following a rotating pattern. Each subsequent request is directed to the next server in the sequence, ensuring an equal distribution of traffic among all available servers. Round-robin routing works well when the servers have similar capabilities and workload requirements.&lt;/p&gt;
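&lt;p&gt;The rotation can be sketched in a few lines (the class is illustrative, not a production balancer):&lt;/p&gt;

```java
public class RoundRobinBalancer {
    private final String[] servers;
    private int next = 0;   // index of the server that receives the next request

    public RoundRobinBalancer(String[] servers) {
        this.servers = servers;
    }

    // Hand out servers in rotation, wrapping around at the end of the list
    public String pick() {
        String server = servers[next];
        next = (next + 1) % servers.length;
        return server;
    }
}
```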

&lt;ul&gt;
&lt;li&gt;Weighted Round-robin Routing Algorithm:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Building upon the basic round-robin technique, the weighted round-robin algorithm introduces the concept of assigning weights to servers. These weights reflect the varying compute and traffic handling capacities of different servers. By adjusting the weights, administrators can influence the distribution of traffic, ensuring that more capable servers handle a higher proportion of requests. This algorithm is often implemented through DNS records, allowing for dynamic adjustments as server capacities change.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Least Connections Routing Algorithm:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The least connections algorithm takes into account the current number of active connections on each server when making routing decisions. When a new request arrives, it is directed to the server with the fewest active connections at that moment. Because a slow or busy server naturally accumulates connections, this approach adapts to uneven request costs and helps prevent any single server from becoming overloaded.&lt;/p&gt;
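&lt;p&gt;A simplified, single-threaded sketch of the selection rule (connection tracking here is in-memory and illustrative; real load balancers track this per backend):&lt;/p&gt;

```java
public class LeastConnectionsBalancer {
    private final String[] servers;
    private final int[] activeConnections;   // one counter per server

    public LeastConnectionsBalancer(String[] servers) {
        this.servers = servers;
        this.activeConnections = new int[servers.length];
    }

    // Route to the server with the fewest active connections right now (first wins on ties)
    public String pick() {
        int best = 0;
        for (int i = 1; i != servers.length; i++) {
            if (activeConnections[best] > activeConnections[i]) {
                best = i;
            }
        }
        activeConnections[best]++;   // the chosen server now carries one more connection
        return servers[best];
    }

    // Called when a connection to the given server completes
    public void release(String server) {
        for (int i = 0; i != servers.length; i++) {
            if (servers[i].equals(server)) {
                activeConnections[i]--;
            }
        }
    }
}
```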

&lt;ul&gt;
&lt;li&gt;Least Response Time Routing Algorithm:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The least response time algorithm focuses on selecting the server with the fastest response time and the fewest active connections. This approach combines two crucial factors to ensure optimal performance and minimal latency. By considering response times and connection loads simultaneously, the algorithm intelligently routes requests to the most efficient server at any given moment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Least Bandwidth Routing Algorithm:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In scenarios where network bandwidth is a critical factor, the least bandwidth routing algorithm becomes relevant. This method evaluates the traffic load in terms of megabits per second (Mbps) on each server. The request is then directed to the server with the least amount of traffic, aiming to balance the bandwidth utilization across the server pool. This algorithm is particularly useful in environments where data-intensive applications or media streaming services are involved.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hashing Routing Algorithm:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The hashing algorithm utilizes a predefined key, such as the client's IP address or the request URL, to distribute requests across servers. By applying a hashing function to the key, a consistent mapping is established between the key and a specific server. This approach ensures that requests with the same key are always directed to the same server, which can be advantageous in scenarios where session persistence or caching strategies are essential.&lt;/p&gt;
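&lt;p&gt;A minimal sketch of key-based routing (note that plain modulo hashing, as used here, remaps many keys whenever the server pool changes; consistent hashing is the usual remedy for that):&lt;/p&gt;

```java
public class HashRouter {
    private final String[] servers;

    public HashRouter(String[] servers) {
        this.servers = servers;
    }

    // The same key always maps to the same server, which helps session stickiness and caching
    public String route(String key) {
        int bucket = Math.floorMod(key.hashCode(), servers.length);  // floorMod keeps the index non-negative
        return servers[bucket];
    }
}
```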

</description>
      <category>distributedsystems</category>
      <category>systemdesign</category>
      <category>networking</category>
    </item>
    <item>
      <title>Decoding Run Levels in Linux</title>
      <dc:creator>Collins Kiplimo</dc:creator>
      <pubDate>Wed, 17 May 2023 13:19:54 +0000</pubDate>
      <link>https://dev.to/ckiplimo/decoding-run-levels-in-linux-478k</link>
      <guid>https://dev.to/ckiplimo/decoding-run-levels-in-linux-478k</guid>
      <description>&lt;p&gt;Introduction:&lt;br&gt;
In the realm of Linux, run levels serve as vital tools for managing the operating system's behavior during startup and daily operations. They dictate which services and processes are active at different stages of the boot process. This beginner's guide aims to demystify run levels in Linux, unraveling their significance and practical usage.&lt;/p&gt;

&lt;p&gt;What are Run Levels?&lt;br&gt;
Run levels in Linux represent distinct operating states of the system. Each run level encompasses a specific configuration of services and processes that should be active at a given time. By controlling the run level, you can manage how your system behaves during startup and regular usage.&lt;/p&gt;

&lt;p&gt;Understanding Common Run Levels:&lt;br&gt;
In Linux, run levels are typically denoted by numbers ranging from 0 to 6, each indicating a specific state. Here's an overview of the commonly used run levels:&lt;/p&gt;

&lt;p&gt;Run Level 0 (Halt): This run level completely shuts down the system, powering off the machine.&lt;br&gt;
Run Level 1 (Single User): In this minimal state, only one user has access, providing a basic command-line interface for essential system maintenance tasks.&lt;br&gt;
Run Level 2 (Multi-User without Networking): Similar to run level 3 but without network services enabled.&lt;br&gt;
Run Level 3 (Multi-User with Networking): This run level allows multi-user mode with full network capabilities, facilitating simultaneous login for multiple users.&lt;br&gt;
Run Level 4 (Unused): Traditionally left undefined, this run level is reserved for user-defined configurations.&lt;br&gt;
Run Level 5 (Graphical User Interface): This run level starts a graphical desktop environment, offering an interactive interface for users to engage with the system.&lt;br&gt;
Run Level 6 (Reboot): This run level restarts the system, shutting down all processes and initiating a system reboot.&lt;/p&gt;

&lt;p&gt;Changing Run Levels:&lt;br&gt;
To switch between run levels on a Linux system, you can use commands like init or systemctl, depending on your Linux distribution. For instance, to transition to run level 3 (multi-user with networking), run sudo init 3 or sudo systemctl isolate multi-user.target.&lt;/p&gt;

&lt;p&gt;Configuration Files:&lt;br&gt;
Each run level's behavior is influenced by configuration files housed in specific directories. On older Linux systems, run level configurations are defined in the /etc/inittab file. On modern Linux distributions employing systemd, run levels are managed via systemd targets and unit files located in /etc/systemd/system/.&lt;/p&gt;

&lt;p&gt;Customizing Run Levels:&lt;br&gt;
System administrators possess the liberty to customize run levels to suit specific requirements. By modifying the run level configuration files, you can enable/disable specific services or even define new run levels featuring custom configurations.&lt;/p&gt;

&lt;p&gt;Run Levels vs. systemd Targets:&lt;br&gt;
Recent Linux distributions have transitioned to systemd as the default initialization system. systemd replaces traditional run levels with the concept of targets, which provide more fine-grained control over the system's behavior. However, understanding run levels remains crucial for service management and troubleshooting system issues.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How to Create a Docker MySQL Container and Link it with a Spring Boot Application</title>
      <dc:creator>Collins Kiplimo</dc:creator>
      <pubDate>Thu, 23 Mar 2023 10:23:37 +0000</pubDate>
      <link>https://dev.to/ckiplimo/how-to-create-a-docker-mysql-container-and-link-it-with-a-spring-boot-application-f35</link>
      <guid>https://dev.to/ckiplimo/how-to-create-a-docker-mysql-container-and-link-it-with-a-spring-boot-application-f35</guid>
      <description>&lt;h2&gt;
  
  
  Step 1: Create a Docker Network
&lt;/h2&gt;

&lt;p&gt;Create a Docker network to allow the containers to communicate with each other. You can create a network using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker network create --subnet=172.18.0.0/16 mynetwork
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command creates a Docker network named "mynetwork" with a subnet of 172.18.0.0/16.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Create a MySQL Container
&lt;/h2&gt;

&lt;p&gt;Create a MySQL container using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name mysql-db --ip 172.18.0.2 --network mynetwork -e MYSQL_ROOT_PASSWORD=root mysql:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command creates a MySQL container named "mysql-db" with an IP address of 172.18.0.2 and attaches it to the "mynetwork" network. It also sets the root password for the MySQL server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Create a Spring Boot Application
&lt;/h2&gt;

&lt;p&gt;Create a Spring Boot application using your preferred IDE or text editor. You can use the Spring Initializr to generate a new Spring Boot project with the necessary dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Configure Spring Boot Application
&lt;/h2&gt;

&lt;p&gt;In your Spring Boot application, add the following configuration to your application.properties file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring.datasource.url=jdbc:mysql://mysql-db:3306/testdb
spring.datasource.username=root
spring.datasource.password=root
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above configuration sets the database URL, username, password, and driver class name for the MySQL server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Build and Run the Spring Boot Application
&lt;/h2&gt;

&lt;p&gt;Build your Spring Boot application using Maven or Gradle, and package it into a Docker image (for example, with a Dockerfile and docker build). Then, run the application using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
docker run -p 8080:8080 --name spring-app --network mynetwork &amp;lt;image-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command runs your Spring Boot application in a Docker container named "spring-app" and maps port 8080 of the container to port 8080 of the host machine.&lt;/p&gt;

&lt;p&gt;Note that you will need to replace &amp;lt;image-name&amp;gt; with the name of the Docker image you built for your Spring Boot application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Step 6: Test the Application
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open your web browser and navigate to &lt;a href="http://localhost:8080"&gt;http://localhost:8080&lt;/a&gt;. If everything is configured correctly, you should see your Spring Boot application running and connected to the MySQL container.&lt;/p&gt;

</description>
      <category>spring</category>
      <category>mysql</category>
      <category>docker</category>
    </item>
    <item>
      <title>Harnessing the Power of QueryDSL for Data Filtering in Spring Boot</title>
      <dc:creator>Collins Kiplimo</dc:creator>
      <pubDate>Thu, 23 Mar 2023 08:34:06 +0000</pubDate>
      <link>https://dev.to/ckiplimo/harnessing-the-power-of-querydsl-for-data-filtering-in-spring-boot-1lle</link>
      <guid>https://dev.to/ckiplimo/harnessing-the-power-of-querydsl-for-data-filtering-in-spring-boot-1lle</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;QueryDSL is a framework that provides a type-safe way of writing queries for various data stores, including databases and search engines. Spring Boot is a popular framework for building web applications that integrates well with QueryDSL. In this article, we'll explore how to use QueryDSL with Spring Boot, including how to set up the necessary dependencies and plugins.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the Project
&lt;/h2&gt;

&lt;p&gt;To get started, we'll need to create a new Spring Boot project and add the necessary dependencies and plugins. We'll use Maven for this, but you could also use Gradle if you prefer.&lt;br&gt;
Next, open the pom.xml file and add the following dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        &amp;lt;dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;com.querydsl&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;querydsl-apt&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;com.querydsl&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;querydsl-jpa&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need to add the QueryDSL Maven plugin to our pom.xml file. This plugin generates Q classes for our JPA entities, which we can then use to write type-safe queries with QueryDSL. Add the following plugin configuration to the build section of your pom.xml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;           &amp;lt;plugin&amp;gt;
                &amp;lt;groupId&amp;gt;com.mysema.maven&amp;lt;/groupId&amp;gt;
                &amp;lt;artifactId&amp;gt;apt-maven-plugin&amp;lt;/artifactId&amp;gt;
                &amp;lt;version&amp;gt;1.1.3&amp;lt;/version&amp;gt;
                &amp;lt;executions&amp;gt;
                    &amp;lt;execution&amp;gt;
                        &amp;lt;phase&amp;gt;generate-sources&amp;lt;/phase&amp;gt;
                        &amp;lt;goals&amp;gt;
                            &amp;lt;goal&amp;gt;process&amp;lt;/goal&amp;gt;
                        &amp;lt;/goals&amp;gt;
                        &amp;lt;configuration&amp;gt;
                            &amp;lt;outputDirectory&amp;gt;target/generated-sources/java&amp;lt;/outputDirectory&amp;gt;
                            &amp;lt;processor&amp;gt;com.querydsl.apt.jpa.JPAAnnotationProcessor&amp;lt;/processor&amp;gt;
                        &amp;lt;/configuration&amp;gt;
                    &amp;lt;/execution&amp;gt;
                &amp;lt;/executions&amp;gt;
            &amp;lt;/plugin&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This plugin configuration tells Maven to generate Q classes for our JPA entities using the JPAAnnotationProcessor. The generated classes will be placed in the target/generated-sources/java directory.&lt;/p&gt;

&lt;h2&gt;
  
  
  Writing a Query with QueryDSL
&lt;/h2&gt;

&lt;p&gt;Suppose we have a Spring Boot application that manages deliveries for a group of farmers. We want to write a query that filters deliveries based on various criteria such as transaction type, date range, farmer name, and cooperative ID.&lt;/p&gt;

&lt;p&gt;Here is an example of a method that uses QueryDSL to filter deliveries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public ListResponse filterDeliveries(TransactionType transactionType, LocalDate start, LocalDate end, String name,Long cooperativeId,int page,int perPage) {
    page = page - 1;
    Sort sort = Sort.by(Sort.Direction.ASC, "createdAt");
    Pageable pageable = PageRequest.of(page, perPage, sort);
    Page&amp;lt;DeliveryDto&amp;gt; deliveryPage=null;

    if (transactionType != null &amp;amp;&amp;amp; name !=null &amp;amp;&amp;amp; start != null &amp;amp;&amp;amp; end != null ) {
        QAccountTransaction qAccountTransaction = QAccountTransaction.accountTransaction;
        deliveryPage=transactionRepository.findBy(qAccountTransaction.transactionType.eq(transactionType).
                and(qAccountTransaction.farmer.name.containsIgnoreCase(name)).
                and(qAccountTransaction.createdAt.between(start.atStartOfDay(),
                        LocalTime.MAX.atDate(end))), q -&amp;gt; q.sortBy(sort).as(DeliveryDto.class).page(pageable));

        log.info("This is a list of  deliveries found {}", deliveryPage.getContent());
        return new ListResponse(deliveryPage.getContent(), deliveryPage.getTotalPages(), deliveryPage.getNumberOfElements(),
                deliveryPage.getTotalElements());
    }

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, we first create a Pageable object to define the pagination and sorting parameters for our query. We then use the QAccountTransaction class, which is a QueryDSL-generated class based on our entity AccountTransaction, to define our query criteria. We can chain multiple conditions using the and operator. We then use the transactionRepository to execute the query and return the results as a Page object.&lt;/p&gt;

&lt;p&gt;QueryDSL provides a type-safe, fluent API for building complex queries with ease. With QueryDSL, we can easily build dynamic queries that work with any JPA-supported database.&lt;/p&gt;

</description>
      <category>java</category>
      <category>api</category>
      <category>spring</category>
    </item>
  </channel>
</rss>
