<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abraham Mekonnen</title>
    <description>The latest articles on DEV Community by Abraham Mekonnen (@abrish).</description>
    <link>https://dev.to/abrish</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1803143%2F2962524c-d996-4155-900e-76f0c7d389c1.png</url>
      <title>DEV Community: Abraham Mekonnen</title>
      <link>https://dev.to/abrish</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abrish"/>
    <language>en</language>
    <item>
      <title>Optimizing Serverless Lambda with GraalVM Native Image</title>
      <dc:creator>Abraham Mekonnen</dc:creator>
      <pubDate>Tue, 24 Dec 2024 02:37:55 +0000</pubDate>
      <link>https://dev.to/abrish/optimizing-serverless-lambda-with-graalvm-native-image-3g3g</link>
      <guid>https://dev.to/abrish/optimizing-serverless-lambda-with-graalvm-native-image-3g3g</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Following the development of a scalable email-sending service using AWS SES, Spring Boot, and AWS Lambda, I set out to optimize its performance. The focus was to address the cold start latency and memory usage inherent to Java applications on AWS Lambda. To achieve this, I turned to GraalVM Native Image, a technology designed to compile Java applications into native executables. This article outlines the implementation and results of this optimization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why GraalVM Native Image?
&lt;/h2&gt;

&lt;p&gt;GraalVM Native Image compiles Java applications ahead of time (AOT) into standalone executables. By doing so, it eliminates the need for a JVM at runtime, resulting in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Reduced Cold Start Times: Applications start almost instantly, a crucial factor for serverless environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lower Memory Usage: By stripping unnecessary components, it creates a lightweight application footprint.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These advantages make GraalVM an ideal solution for improving serverless application performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Steps
&lt;/h2&gt;

&lt;h4&gt;
  
  
  1. Project Setup
&lt;/h4&gt;

&lt;p&gt;I began with AWS’s pet-store-native project, which provides a reference implementation for converting Spring Boot 3 applications into GraalVM-native images. This served as the foundation for integrating native image capabilities into the email-sending service.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Adapting for ARM Architecture
&lt;/h4&gt;

&lt;p&gt;Since my environment uses an ARM-based architecture, the Dockerfile required modifications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Updated the base image to an ARM-compatible one.&lt;/li&gt;
&lt;li&gt;Configured the GraalVM compiler for ARM compatibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These changes ensured the native image was optimized for the target system.&lt;/p&gt;
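&lt;p&gt;As a rough illustration, an ARM-targeted build might use a Dockerfile along these lines (the image tags, paths, and binary name are assumptions, not the project's exact values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build stage: GraalVM on an aarch64 base image
FROM ghcr.io/graalvm/native-image-community:21 AS builder
WORKDIR /build
COPY . .
# Compile the Spring Boot 3 app ahead of time into a native executable
RUN ./mvnw -Pnative native:compile -DskipTests

# Runtime stage: AWS-provided custom runtime base image
FROM public.ecr.aws/lambda/provided:al2023
COPY --from=builder /build/target/mail-sender ./mail-sender
COPY bootstrap ./bootstrap
CMD ["./mail-sender"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;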

&lt;h4&gt;
  
  
  3. Runtime Configuration
&lt;/h4&gt;

&lt;p&gt;A custom bootstrap file is essential for proper initialization and startup: it defines the entry point for the Lambda function, initializes the runtime environment, and offers a place to configure application parameters, enabling seamless integration with AWS Lambda.&lt;/p&gt;
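&lt;p&gt;For reference, the bootstrap file for a native-image Lambda is typically just a small script that hands control to the executable (the binary name here is an assumption):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh
# Entry point for the AWS Lambda custom runtime:
# launch the native executable built from the Spring Boot app
set -e
exec ./mail-sender
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;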

&lt;p&gt;I also enabled HTTP protocol support in the GraalVM Maven plugin and integrated the AWS Java Container for Spring Boot to handle API Gateway events. These configurations ensured the application could efficiently process HTTP requests and responses in its native image form.&lt;/p&gt;
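&lt;p&gt;The HTTP protocol support mentioned above is switched on through a build argument of the GraalVM Maven plugin; a sketch of the relevant pom.xml fragment (versions omitted) looks roughly like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;plugin&amp;gt;
    &amp;lt;groupId&amp;gt;org.graalvm.buildtools&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;native-maven-plugin&amp;lt;/artifactId&amp;gt;
    &amp;lt;configuration&amp;gt;
        &amp;lt;buildArgs&amp;gt;
            &amp;lt;!-- enable the JDK http/https URL protocols in the native image --&amp;gt;
            &amp;lt;buildArg&amp;gt;--enable-url-protocols=http,https&amp;lt;/buildArg&amp;gt;
        &amp;lt;/buildArgs&amp;gt;
    &amp;lt;/configuration&amp;gt;
&amp;lt;/plugin&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;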

&lt;h4&gt;
  
  
  4. Deploying the Application
&lt;/h4&gt;

&lt;p&gt;Using the AWS Serverless Application Model (SAM), I deployed the native image as a Lambda function. Key customizations included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switching from an HTTP API to a REST API in API Gateway, since only REST APIs support API key-based authentication.&lt;/li&gt;
&lt;li&gt;Implementing usage plans for secure and scalable API access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These adjustments not only enhanced security but also allowed better resource allocation for the function.&lt;/p&gt;
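&lt;p&gt;For illustration, a SAM template fragment like the following (resource names are made up) attaches the function to a REST API Gateway with an API key requirement and a usage plan:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  MailApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      Auth:
        ApiKeyRequired: true        # clients must send x-api-key
        UsagePlan:
          CreateUsagePlan: PER_API  # throttling and quotas live here
  MailFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: provided.al2023      # custom runtime for the native binary
      Architectures: [arm64]
      Events:
        Proxy:
          Type: Api                 # REST API (supports API keys), not HttpApi
          Properties:
            RestApiId: !Ref MailApi
            Path: /{proxy+}
            Method: ANY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;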

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;The transition to GraalVM Native Image yielded significant improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cold Start Times: Reduced by eliminating JVM initialization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Memory Usage: Minimized due to the compact nature of native executables.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Performance Scaling: Faster response times and better handling of concurrent requests.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Native image&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwvsz6uh76kmbkk2e0uh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwvsz6uh76kmbkk2e0uh.png" alt="Native Image compiled from Springboot3 using Graalvm" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Spring Boot 3&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qibt2wf6smslz9w8fbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qibt2wf6smslz9w8fbf.png" alt="SpringBoot Application" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, the API Gateway integration provided robust control over access and usage, enabling the service to function as a secure and scalable endpoint.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;Through this implementation, I gained a deeper understanding of the interplay between GraalVM, Spring Boot, and AWS Lambda. The process highlighted the importance of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Optimizing for specific architectures to maximize performance.&lt;/li&gt;
&lt;li&gt;Configuring runtime environments to balance flexibility and efficiency.&lt;/li&gt;
&lt;li&gt;Leveraging tools like AWS SAM for streamlined deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project reinforced the potential of GraalVM Native Image as a powerful optimization tool for serverless Java applications, offering a compelling path forward for enhancing performance and reducing costs in production environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Abrish-mokie/GraalVm-SpringBoot3-Mail-Sender-Lambda" rel="noopener noreferrer"&gt;GitHub Repo for this project&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/compute/re-platforming-java-applications-using-the-updated-aws-serverless-java-container/" rel="noopener noreferrer"&gt;Replatforming Java Applications with the Updated AWS Serverless Java Container&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aws/serverless-java-container/wiki/Quick-start---Spring-Boot3" rel="noopener noreferrer"&gt;Quick Start Guide: Spring Boot 3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=sI-zXYLKzfk" rel="noopener noreferrer"&gt;GraalVM Native Image: Faster, Smarter, Leaner&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=YclrKfEUHrI" rel="noopener noreferrer"&gt;Going AOT: A Comprehensive Guide to GraalVM for Java Applications by Alina Yurenko | SpringIO&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=8umoZWj6UcU" rel="noopener noreferrer"&gt;Going Native: Building Fast and Lightweight Spring Boot Applications with GraalVM by Alina Yurenko&lt;/a&gt;&lt;/p&gt;

</description>
      <category>nativeimage</category>
      <category>springboot</category>
      <category>java</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Sending Emails with Spring Boot, AWS SES, and Serverless Lambda for Scalable Solutions</title>
      <dc:creator>Abraham Mekonnen</dc:creator>
      <pubDate>Thu, 19 Dec 2024 00:25:29 +0000</pubDate>
      <link>https://dev.to/abrish/sending-emails-with-spring-boot-aws-ses-and-serverless-lambda-for-scalable-solutions-2cpe</link>
      <guid>https://dev.to/abrish/sending-emails-with-spring-boot-aws-ses-and-serverless-lambda-for-scalable-solutions-2cpe</guid>
      <description>&lt;p&gt;In the course of developing a Next.js authentication project, I encountered the need to send verification emails. While there are many email-sending services available, most come with subscription fees or limited free-tier plans. To maintain control and reduce costs, I decided to build my own email sender using Spring Boot, which provides a powerful framework for Java-based backend development.&lt;/p&gt;

&lt;p&gt;Initially, I used Gmail’s SMTP server to send emails. While functional, it lacked the professionalism of domain-specific emails, particularly for production environments. My goal was to send emails using my own domain, hosted on AWS Route 53. This led me to leverage AWS Simple Email Service (SES), which provides an SMTP interface for seamless integration.&lt;/p&gt;

&lt;p&gt;In this article, I will outline my journey and provide an overview of the tools and resources I used to implement this solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why AWS SES?
&lt;/h2&gt;

&lt;p&gt;AWS SES stands out as the most cost-effective option: roughly $10 for 100,000 emails, compared with an estimated $8 for 5,000 emails on Brevo or about $20 for 75,000 emails on SendGrid. Combined with its scalability, reliability, and support for domain-specific emails, this makes AWS SES an ideal choice for transactional and marketing needs. Its strong deliverability and seamless integration with other AWS services further enhance its value, making it a robust, budget-friendly solution for businesses optimizing their email campaigns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up AWS SES
&lt;/h2&gt;

&lt;p&gt;To use AWS SES, you need an AWS account. Setting up SES involves domain verification and creating an SMTP user. While I won’t cover the full step-by-step process here, I recommend the following &lt;a href="https://www.youtube.com/watch?v=DicsLvjTkiQ" rel="noopener noreferrer"&gt;YouTube tutorial&lt;/a&gt;, which I found incredibly helpful. Additionally, refer to the official &lt;a href="https://docs.aws.amazon.com/ses/latest/dg/setting-up.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; for best practices and the most up-to-date setup instructions.&lt;/p&gt;

&lt;p&gt;This tutorial guided me through:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Verifying my domain in AWS SES.&lt;/li&gt;
&lt;li&gt;Configuring DNS records in AWS Route 53 for domain authentication (DKIM and SPF).&lt;/li&gt;
&lt;li&gt;Setting up SES in production mode (moving out of the SES sandbox).&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Integrating AWS SES with Spring Boot
&lt;/h2&gt;

&lt;p&gt;After setting up AWS SES, the next step was to build an email sender in Spring Boot. For this, I referred to another excellent &lt;a href="https://www.youtube.com/watch?v=kLMUS0-PznE" rel="noopener noreferrer"&gt;YouTube tutorial&lt;/a&gt;. However, additional documentation can be found &lt;a href="https://docs.spring.io/spring-framework/reference/integration/email.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Steps in Implementation
&lt;/h2&gt;

&lt;p&gt;1. Spring Boot Dependencies&lt;br&gt;
Add the following dependency to your pom.xml for email functionality:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-boot-starter-mail&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. Configuring SMTP Properties&lt;br&gt;
In the application.properties or application.yml, configure the SMTP settings for AWS SES:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  application:
    name:
      MailSender
  mail:
    host: email-smtp.&amp;lt;region&amp;gt;.amazonaws.com
    port: 587
    username: ${mail-username}
    password: ${mail-password}
    protocol: smtp
    properties:
      mail:
        debug: true
        smtp:
          auth: true
          starttls:
            enable: true
            required: true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Building the Email Service&lt;br&gt;
To handle email sending, I created a service class. For the email content, I used Mustache templates to design the email in HTML, making it dynamic and visually appealing. The service simply injects the verification token into the template before sending the email.&lt;/p&gt;
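&lt;p&gt;A minimal sketch of such a service, assuming spring-boot-starter-mail plus the mustache.java library, with illustrative class and template names (the wiring shown is an assumption, not the project's actual code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Service
public class EmailService {

    private final JavaMailSender mailSender;
    private final Mustache template;

    public EmailService(JavaMailSender mailSender, MustacheFactory factory) {
        this.mailSender = mailSender;
        // verification-email.mustache is an assumed classpath resource
        this.template = factory.compile("templates/verification-email.mustache");
    }

    public void sendVerification(String to, String token) throws MessagingException {
        // Inject the verification token into the HTML template
        StringWriter body = new StringWriter();
        template.execute(body, Map.of("token", token));

        MimeMessage message = mailSender.createMimeMessage();
        MimeMessageHelper helper = new MimeMessageHelper(message, "UTF-8");
        helper.setTo(to);
        helper.setSubject("Verify your email");
        helper.setText(body.toString(), true); // true = HTML content
        mailSender.send(message);               // goes out via the SES SMTP config
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;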

&lt;h2&gt;
  
  
  Optimizing with AWS Lambda
&lt;/h2&gt;

&lt;p&gt;After successfully implementing the email sender in Spring Boot, I decided to take it further by deploying the service as a Lambda function. Lambda provides serverless capabilities, reducing infrastructure overhead and costs.&lt;/p&gt;

&lt;p&gt;Steps to Deploy with AWS Lambda&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Base Project:
I referred to the official AWS GitHub repository for serverless Java &lt;a href="https://github.com/aws/serverless-java-container" rel="noopener noreferrer"&gt;projects&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;Deploying with AWS SAM:
Using the Serverless Application Model (SAM) CLI, I packaged and deployed the Lambda function. The setup included an API Gateway with API key-based authentication to secure the endpoint.&lt;/li&gt;
&lt;li&gt;Performance Enhancements:
While the initial deployment worked well, I sought to optimize the solution by:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Reducing memory consumption.&lt;/li&gt;
&lt;li&gt;Improving cold start times for faster execution and lower costs.&lt;/li&gt;
&lt;/ul&gt;
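&lt;p&gt;The SAM deployment workflow itself boils down to two commands, run from the project root:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam build
sam deploy --guided   # prompts for stack name, region, and parameters
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;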

&lt;h2&gt;
  
  
  Next Steps: GraalVM Native Image
&lt;/h2&gt;

&lt;p&gt;To achieve even greater performance, my next goal is to use GraalVM Native Image. This technology compiles Java applications into native executables, eliminating the JVM’s startup time and significantly reducing memory usage. Stay tuned for my next article, where I’ll dive into this optimization and share how it further enhanced my email-sending service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By integrating AWS SES with Spring Boot and deploying it using AWS Lambda, I built a scalable, cost-efficient email-sending service tailored to my domain-specific needs. This project not only met the immediate requirement of sending verification emails but also provided a learning opportunity in serverless architecture and performance optimization.&lt;/p&gt;

&lt;p&gt;If you’re looking to implement a similar solution or have questions, feel free to reach out!&lt;/p&gt;

</description>
      <category>springboot</category>
      <category>lambda</category>
      <category>ses</category>
      <category>aws</category>
    </item>
    <item>
      <title>AI-Powered Resume Builder with Dynamic PDF Generation and Customization</title>
      <dc:creator>Abraham Mekonnen</dc:creator>
      <pubDate>Fri, 25 Oct 2024 17:29:48 +0000</pubDate>
      <link>https://dev.to/abrish/ai-powered-resume-builder-with-dynamic-pdf-generation-and-customization-1i2b</link>
      <guid>https://dev.to/abrish/ai-powered-resume-builder-with-dynamic-pdf-generation-and-customization-1i2b</guid>
      <description>&lt;p&gt;In today’s fast-paced, competitive job market, having a resume that stands out is crucial. To help professionals streamline this process, I developed an &lt;strong&gt;AI-powered resume management system&lt;/strong&gt; that allows users to create, customize, and optimize resumes based on their career objectives and job descriptions. The system integrates cutting-edge technologies to generate dynamic PDFs, giving users a polished, professional resume in seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Project Overview&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The project is built using Spring Boot, with several key libraries and tools that enhance its functionality:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;OpenPDF&lt;/strong&gt; for PDF generation.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Spring Doc OpenAPI&lt;/strong&gt; for comprehensive API documentation.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Flyway&lt;/strong&gt; for database migrations.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;PostgreSQL&lt;/strong&gt; for data persistence.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;JPA (Java Persistence API)&lt;/strong&gt; for managing database entities.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Spring AI&lt;/strong&gt; to integrate with the OpenAI API for resume enhancements based on job descriptions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The core objective of this project was to create an intuitive system that allows users to manage every aspect of their resume by adding, deleting, or modifying sections such as skills, certifications, education, professional experience, and projects. Additionally, the integration of AI enables the system to analyze job descriptions and adjust the content of the resume accordingly, ensuring it aligns with specific job requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Steps Taken in Development&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Template Creation:&lt;/strong&gt; A professional resume template was designed using OpenPDF to serve as the foundation for the final PDF generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PDF Generation Logic:&lt;/strong&gt; A dedicated class was created to map the template to the resume sections, allowing the dynamic generation of a PDF document.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CRUD Operations:&lt;/strong&gt; A comprehensive management system was built to allow users to perform CRUD (Create, Read, Update, Delete) operations on their resume data, ensuring that every section can be easily modified.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Migrations with Flyway:&lt;/strong&gt; Implemented &lt;strong&gt;Flyway&lt;/strong&gt; to manage database migrations, making the system adaptable and easy to maintain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Integration:&lt;/strong&gt; Used &lt;strong&gt;Docker&lt;/strong&gt; to containerize the application and databases, allowing for easy deployment and scalability. This also ensures consistency across different environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI-Powered Resume Enhancement:&lt;/strong&gt; The project leverages &lt;strong&gt;Spring AI&lt;/strong&gt; to integrate with the OpenAI API. Users can submit their resume along with a job description, and the AI tailors the resume content, ensuring it highlights the most relevant skills and experiences. Here is a good &lt;a href="https://www.youtube.com/watch?v=yyvjT0v3lpY&amp;amp;list=PLZV0a2jwt22uoDm3LNDFvN6i2cAVU_HTH" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt; video I used to learn about Spring AI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Image Creation:&lt;/strong&gt; The final application is containerized into a Docker image, allowing it to be easily accessed and deployed by others through &lt;strong&gt;Docker Hub&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
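&lt;p&gt;To illustrate the PDF generation described in steps 1 and 2, here is a pared-down OpenPDF sketch; the Resume and ResumeSection types and the layout are illustrative, not the project's actual classes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public byte[] renderResume(Resume resume) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    Document document = new Document(PageSize.A4);
    PdfWriter.getInstance(document, out);
    document.open();

    // Header: name and contact details
    document.add(new Paragraph(resume.fullName(),
            new Font(Font.HELVETICA, 18, Font.BOLD)));
    document.add(new Paragraph(resume.email()));

    // One titled block per resume section (skills, education, ...)
    for (ResumeSection section : resume.sections()) {
        document.add(new Paragraph(section.title(),
                new Font(Font.HELVETICA, 14, Font.BOLD)));
        document.add(new Paragraph(section.body()));
    }

    document.close();
    return out.toByteArray(); // the finished PDF bytes
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;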

&lt;h2&gt;
  
  
  &lt;strong&gt;How to Use the System&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To generate a personalized, AI-powered resume, simply provide the following details before generating the PDF:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;User Details:&lt;/strong&gt; Basic information such as name, contact details, and other personal data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Descriptions:&lt;/strong&gt; A brief overview of your professional summary or objectives.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skills:&lt;/strong&gt; A list of relevant technical and soft skills.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Certifications:&lt;/strong&gt; Any certifications that add value to your expertise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Education:&lt;/strong&gt; Details of your educational background.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Projects:&lt;/strong&gt; Key projects that showcase your experience and contributions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Professional Experience:&lt;/strong&gt; A detailed account of your previous job roles and accomplishments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt for OpenAI API:&lt;/strong&gt; Provide instructions on how the AI should edit and enhance the resume based on the job description, ensuring that the content is tailored to match the role’s requirements.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Before starting the Docker image, you’ll need to provide your &lt;strong&gt;OpenAI API key&lt;/strong&gt; as an environment variable to enable the AI-powered resume enhancement feature. Once all the necessary fields are filled out, the system generates a customized, professional resume in PDF format that’s ready for use.&lt;/p&gt;
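&lt;p&gt;For example, when running the image directly, the key can be passed as an environment variable (the image name and variable name are assumptions; check the project's docker-compose template for the exact ones):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 8080:8080 \
  -e SPRING_AI_OPENAI_API_KEY=sk-your-key \
  abrish/resume-generator:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;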

&lt;h2&gt;
  
  
  &lt;strong&gt;Areas for Improvement&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While the current system is highly functional, there are areas that could be enhanced:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Error Handling:&lt;/strong&gt; Implementing more robust error-handling mechanisms to ensure smoother operation and better user experience.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unit Testing:&lt;/strong&gt; Expanding unit tests to cover more scenarios and ensure system reliability.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Future Plans&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;AWS Lambda Deployment:&lt;/strong&gt; To improve scalability and reduce operational costs, I plan to deploy the application on &lt;strong&gt;AWS Lambda,&lt;/strong&gt; allowing for serverless operation and automatic scaling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GraalVM Integration:&lt;/strong&gt; Integrating &lt;strong&gt;GraalVM&lt;/strong&gt; to optimize the application’s performance, reducing its memory footprint and improving cold startup times, especially in cloud environments.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This AI-powered resume management system represents a step forward in how professionals can craft and optimize their resumes. By automating the process and allowing for easy customization, it saves time and ensures that users always present the best possible version of themselves to potential employers. With planned improvements and future deployments on AWS, this project will continue to evolve and meet the needs of modern professionals.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Abrish-mokie/ResumeGenerator" rel="noopener noreferrer"&gt;GitHub-link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s the &lt;a href="https://gist.github.com/Abrish-mokie/618a74e2ec6d35f7b86608a7a996c16a" rel="noopener noreferrer"&gt;docker-compose-template&lt;/a&gt; file for deploying the AI-Powered Resume Generator application, complete with a Postgres database and an optional pgAdmin service. The setup uses environment variables to manage database credentials and integrates the OpenAI API for resume enhancements. Feel free to customize the values as needed for your deployment.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Key Concepts to Understand Before Starting with Spring Boot and AI</title>
      <dc:creator>Abraham Mekonnen</dc:creator>
      <pubDate>Tue, 22 Oct 2024 20:21:46 +0000</pubDate>
      <link>https://dev.to/abrish/key-concepts-to-understand-before-starting-with-spring-boot-and-ai-1213</link>
      <guid>https://dev.to/abrish/key-concepts-to-understand-before-starting-with-spring-boot-and-ai-1213</guid>
      <description>&lt;p&gt;Before diving into the world of AI within Spring Boot, it's essential to grasp some fundamental concepts. Especially before referring to any Spring Boot AI documentation, it's helpful to understand these basics. In this guide, I'll explain these ideas in the way that I understand them, and I hope it proves useful to anyone reading.&lt;br&gt;
Let's start with the core technology behind many text-based AI chat tools, including image generation systems - LLMs (Large Language Models). According to IBM, "Large Language Models (LLMs) are a category of foundation models trained on vast amounts of data, enabling them to understand and generate natural language and other types of content for a wide range of tasks." So, how do we integrate them into projects using Spring Boot AI? Before answering that, let's discuss some important points about LLMs that will provide a deeper understanding.&lt;br&gt;
The first thing to know is that LLMs have a training data cutoff date. This means that the model is trained only up until a specific point in time, and it won't be aware of anything that happened after that date. To address this limitation, several solutions have been developed, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt Stuffing&lt;/li&gt;
&lt;li&gt;RAG (Retrieval Augmented Generation)&lt;/li&gt;
&lt;li&gt;Function Calling&lt;/li&gt;
&lt;li&gt;Fine-tuning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before diving into these solutions, let's talk about tokens. In the world of LLMs, tokens are essentially the currency: LLMs process tokens rather than raw words, so every prompt sent to an LLM gets converted into tokens, and there is a limit to how many tokens an LLM can handle in a single request.&lt;br&gt;
Now, let's explore the solutions mentioned earlier:&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Prompt Stuffing:
&lt;/h2&gt;

&lt;p&gt;This involves adding relevant information along with the user's question. For example, if a user asks about the dosage of a specific medication, you include additional data about that medication in the prompt. This allows the LLM to refer to the provided information when formulating its response.&lt;/p&gt;
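&lt;p&gt;In Spring AI terms, prompt stuffing is just templating: you splice the reference data into the prompt before sending it. A minimal sketch, assuming an injected ChatClient (Spring AI's fluent API; method names may differ slightly between versions, and the medication text is a hypothetical input):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public String answerWithContext(String question, String medicationInfo) {
    // Stuff the relevant reference text directly into the prompt
    // so the model can ground its answer in it
    return chatClient.prompt()
            .system("Answer using only the reference text below:\n" + medicationInfo)
            .user(question)
            .call()
            .content();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;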

&lt;h2&gt;
  
  
  2. RAG (Retrieval Augmented Generation):
&lt;/h2&gt;

&lt;p&gt;This technique helps work around the token limit of LLMs. Suppose a user wants to ask a question about a 900-page book, which comes to roughly 120,000 tokens, while the LLM only accepts 90,000 tokens per request: it's impossible to stuff the entire content into the prompt. Enter embeddings, a method of converting digital content into vectors, which are then stored in a vector database. Unlike traditional databases, a vector database performs similarity searches. When a question is asked, the most relevant data is retrieved from the vector database via similarity search and included in the prompt to help the LLM generate an answer.&lt;/p&gt;
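&lt;p&gt;Sketched with Spring AI's abstractions (a VectorStore pre-loaded with the book's embeddings is assumed, and method names such as getText may differ between Spring AI versions), the retrieve-then-stuff flow looks like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public String answerFromBook(String question) {
    // 1. Similarity search returns only the passages closest to the question
    List&amp;lt;Document&amp;gt; passages = vectorStore.similaritySearch(question);

    // 2. Stuff just those passages, not the whole book, into the prompt
    String context = passages.stream()
            .map(Document::getText)
            .collect(Collectors.joining("\n---\n"));

    return chatClient.prompt()
            .system("Answer from this context:\n" + context)
            .user(question)
            .call()
            .content();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;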

&lt;h2&gt;
  
  
  3. Function Calling:
&lt;/h2&gt;

&lt;p&gt;In this approach, the LLM is provided with several functions it can call upon to answer user queries. When the LLM encounters a question it cannot answer with its trained data, it calls an appropriate function to fetch the necessary information. The response from the function is then used to answer the question.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Fine-Tuning:
&lt;/h2&gt;

&lt;p&gt;Fine-tuning involves training an already pre-trained LLM for a specific role or use case. This technique is mainly used by data scientists and, as of this writing, is not typically necessary when working with Spring Boot AI projects.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding Key Concepts of Large Language Models (LLMs) before delving into spring AI</title>
      <dc:creator>Abraham Mekonnen</dc:creator>
      <pubDate>Thu, 12 Sep 2024 22:29:34 +0000</pubDate>
      <link>https://dev.to/abrish/key-concepts-to-understand-before-starting-with-spring-boot-and-ai-1aj</link>
      <guid>https://dev.to/abrish/key-concepts-to-understand-before-starting-with-spring-boot-and-ai-1aj</guid>
      <description>&lt;p&gt;Before diving into the world of AI within Spring Boot, it's essential to grasp some fundamental concepts. Especially before referring to any Spring Boot AI documentation, it's helpful to understand these basics. In this guide, I'll explain these ideas in the way that I understand them, and I hope it proves useful to anyone reading.&lt;br&gt;
Let's start with the core technology behind many text-based AI chat tools, including image generation systems - LLMs (Large Language Models). According to IBM, "Large Language Models (LLMs) are a category of foundation models trained on vast amounts of data, enabling them to understand and generate natural language and other types of content for a wide range of tasks." So, how do we integrate them into projects using Spring Boot AI? Before answering that, let's discuss some important points about LLMs that will provide a deeper understanding.&lt;br&gt;
The first thing to know is that LLMs have a training data cutoff date. This means that the model is trained only up until a specific point in time, and it won't be aware of anything that happened after that date. To address this limitation, several solutions have been developed, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt Stuffing&lt;/li&gt;
&lt;li&gt;RAG (Retrieval Augmented Generation)&lt;/li&gt;
&lt;li&gt;Function Calling&lt;/li&gt;
&lt;li&gt;Fine-tuning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before diving into these solutions, let's talk about tokens. In the world of LLMs, tokens are essentially the currency: LLMs process tokens rather than raw words, so every prompt sent to an LLM gets converted into tokens, and there is a limit to how many tokens an LLM can handle in a single request.&lt;br&gt;
Now, let's explore the solutions mentioned earlier:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Prompt Stuffing:
&lt;/h4&gt;

&lt;p&gt;This involves adding relevant information along with the user's question. For example, if a user asks about the dosage of a specific medication, you include additional data about that medication in the prompt. This allows the LLM to refer to the provided information when formulating its response.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. RAG (Retrieval Augmented Generation):
&lt;/h4&gt;

&lt;p&gt;This technique helps work around the token limit of LLMs. Suppose a user wants to ask a question about a 900-page book, which comes to roughly 120,000 tokens, while the LLM only accepts 90,000 tokens per request: it's impossible to stuff the entire content into the prompt. Enter embeddings, a method of converting digital content into vectors, which are then stored in a vector database. Unlike traditional databases, a vector database performs similarity searches. When a question is asked, the most relevant data is retrieved from the vector database via similarity search and included in the prompt to help the LLM generate an answer.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Function Calling:
&lt;/h4&gt;

&lt;p&gt;In this approach, the LLM is provided with several functions it can call upon to answer user queries. When the LLM encounters a question it cannot answer with its trained data, it calls an appropriate function to fetch the necessary information. The response from the function is then used to answer the question.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Fine-Tuning:
&lt;/h4&gt;

&lt;p&gt;Fine-tuning involves training an already pre-trained LLM for a specific role or use case. This technique is mainly used by data scientists and, as of this writing, is not typically necessary when working with Spring Boot AI projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, integrating AI into Spring Boot requires a solid grasp of the foundational concepts behind Large Language Models (LLMs) and their limitations. By understanding how LLMs process tokens, and recognizing their inherent training data cutoff, developers can leverage various techniques like Prompt Stuffing, RAG, Function Calling, and Fine-Tuning to enhance their models' performance. While some solutions like RAG and embedding are crucial for overcoming token limits, others like Function Calling allow LLMs to extend beyond their training data. With this knowledge in hand, you'll be better prepared to navigate the complexities of building AI-powered systems within Spring Boot, ensuring more efficient and effective AI integrations.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
