<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sujithra</title>
    <description>The latest articles on DEV Community by Sujithra (@sujikathir).</description>
    <link>https://dev.to/sujikathir</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F560857%2F0b80bbd9-a994-4db2-b20a-11a84e52d5fb.jpeg</url>
      <title>DEV Community: Sujithra</title>
      <link>https://dev.to/sujikathir</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sujikathir"/>
    <language>en</language>
    <item>
      <title>Building a Smart Relocation Question Answering Bot using Hugging Face: Personalized Answers at Your Fingertips 🌍🤖💡</title>
      <dc:creator>Sujithra</dc:creator>
      <pubDate>Thu, 29 Jun 2023 10:36:35 +0000</pubDate>
      <link>https://dev.to/sujikathir/building-a-smart-relocation-question-answering-bot-using-hugging-face-personalized-answers-at-your-fingertips-1aeg</link>
      <guid>https://dev.to/sujikathir/building-a-smart-relocation-question-answering-bot-using-hugging-face-personalized-answers-at-your-fingertips-1aeg</guid>
      <description>&lt;h2&gt;
  
  
  Introduction: 🌟🌈
&lt;/h2&gt;

&lt;p&gt;In an increasingly globalized world, relocation has become a common phenomenon. Whether it's for work, study, or personal reasons, moving to a new country can be a daunting process. 😱🌍 From finding the right housing to understanding visa procedures and local transportation, there are numerous questions that individuals have when planning to relocate. &lt;/p&gt;

&lt;p&gt;But fear not! 🚀🌟 To simplify this process and provide personalized assistance, we have embarked on an exciting journey of building a Smart Relocation Question Answering Bot! 🤖💪 In this blog post, we will take you on a joyride through the details of this project, exploring its architecture, features, future plans, challenges, and our valuable learnings. Buckle up! It's going to be a fun ride! 🚀🎢&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Overview: 🏰📚
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U9dc3TJW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0bbidjlq290pxcenowve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U9dc3TJW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0bbidjlq290pxcenowve.png" alt="Image description" width="675" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Smart Relocation Question Answering Bot is built on the shoulders of giants, utilizing cutting-edge techniques from the field of natural language processing (NLP). 🤖💡 At its core, the bot harnesses the power of transformers, a groundbreaking architecture in the realm of NLP. 🌟🔮 &lt;/p&gt;

&lt;p&gt;Transformers, like magical wizards 🧙‍♂️, are capable of understanding the nuances of human speech and generating insightful responses. Our bot is powered by the illustrious BERT (Bidirectional Encoder Representations from Transformers) model, which stands as a titan among transformers. It captures deep contextual information and provides accurate answers to your burning relocation questions! 🔥🌟&lt;/p&gt;

&lt;h2&gt;
  
  
  Dataset: 📊🔍
&lt;/h2&gt;

&lt;p&gt;To train our Smart Relocation Question Answering Bot, we have meticulously curated a comprehensive dataset that encompasses a wide range of relocation aspects. &lt;/p&gt;

&lt;p&gt;📚🌍 It's like having an entire library 📚🌐 at your disposal, with information on housing, insurance, transportation, education, visa procedures, banking details, and more! 🏠🚗📚 Our dataset is a treasure trove of knowledge, fueling the learning process of our bot. It serves as the foundation for training BERT, allowing it to understand and respond to your relocation queries with remarkable accuracy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Code Magic: ✨💻🔮
&lt;/h2&gt;

&lt;p&gt;Let's dive into the mystical world of code that powers our Smart Relocation Question Answering Bot. Here's a sneak peek into some of the fascinating parts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from transformers import BertForQuestionAnswering, BertTokenizer

# Load the pre-trained BERT model and tokenizer
model = BertForQuestionAnswering.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In this code snippet, we start by importing the necessary modules from the transformers library, namely BertForQuestionAnswering and BertTokenizer. These modules allow us to use the pre-trained BERT model for question-answering tasks and the associated tokenizer to preprocess the input.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We then proceed to load the pre-trained BERT model and tokenizer using the from_pretrained() method. The bert-base-uncased parameter specifies the base BERT model with uncased tokens, which means the text is lowercased during tokenization.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Process the input question and context
question = "What are the housing options in Berlin?"
context = "Berlin offers a variety of housing options ranging from apartments in the city center to suburban houses. The rental prices vary based on location and amenities."

inputs = tokenizer.encode_plus(question, context, add_special_tokens=True, return_tensors='pt')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Next, we define a sample question and context relevant to housing options in Berlin. The question represents the user's query, while the context provides the necessary information for answering the question.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To process the input, we use the tokenizer's encode_plus() method. It tokenizes the question and context, adds special tokens like [CLS] and [SEP] to mark the beginning and separation of the input, and returns the encoded inputs as tensors.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get the model's predicted answer span
start_scores, end_scores = model(**inputs).start_logits, model(**inputs).end_logits
start_index = torch.argmax(start_scores)
end_index = torch.argmax(end_scores) + 1
answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(inputs['input_ids'][0][start_index:end_index]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In this part, we pass the encoded inputs through the BERT model using model(**inputs). This returns the start and end logits, representing the probabilities of each token being the start and end of the answer span within the context.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By applying torch.argmax() to the start and end logits, we obtain the indices of the most probable start and end positions of the answer span.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, we convert the predicted answer span back into human-readable text using the tokenizer's convert_tokens_to_string() and convert_ids_to_tokens() methods.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, the above code snippet showcases the magic of BERT in action! 🎩✨ It demonstrates how we can use the pre-trained BERT model and tokenizer from the Hugging Face library to process a question and context, and then extract the answer span. By applying this code snippet iteratively, we enable our bot to provide accurate answers to your relocation queries. It's like witnessing the convergence of code and intelligence, resulting in a remarkable solution! 💻🧠✨&lt;/p&gt;

&lt;h2&gt;
  
  
  Proposed AWS Services: 💻☁️
&lt;/h2&gt;

&lt;p&gt;Our bot is backed by the power of Amazon Web Services (AWS) to provide you with a top-notch experience. Here's how our dream team of services works together: 🤝💡&lt;/p&gt;

&lt;p&gt;• The User Interface (UI) interacts with Amazon Lex. 🖥️🗣️&lt;/p&gt;

&lt;p&gt;• Amazon Lex communicates with AWS Lambda for processing and validation. 📞💪&lt;/p&gt;

&lt;p&gt;• AWS Lambda interacts with Amazon DynamoDB for data storage, Amazon SageMaker for machine learning tasks, and Amazon S3 for static content storage. 🗄️🤖🔍💾&lt;/p&gt;

&lt;p&gt;• And the star of the show, Amazon API Gateway, serves as the entry point for your bot, routing incoming requests to the appropriate Lambda function or SageMaker endpoint. 🚀🌟🚪🔀&lt;/p&gt;
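&lt;p&gt;To make that request flow a little more concrete, here is a minimal sketch of what the Lambda piece could look like. It is an illustration under stated assumptions, not our production code: the event shape, the validation, and the SageMaker endpoint name are all hypothetical, and a real handler would call the SageMaker runtime instead of returning a stubbed answer. 💻&lt;/p&gt;

```python
import json

# Hypothetical Lambda handler: API Gateway routes the request here, and the
# handler validates it before (in a real deployment) calling SageMaker.
# The endpoint name and event shape are illustrative assumptions.
SAGEMAKER_ENDPOINT = "relocation-qa-endpoint"  # hypothetical endpoint name

def lambda_handler(event, context):
    body = json.loads(event.get("body", "{}"))
    question = body.get("question", "").strip()
    if not question:
        # Basic validation before invoking the model
        return {"statusCode": 400,
                "body": json.dumps({"error": "A 'question' field is required."})}

    # In a real deployment this would call the SageMaker runtime, e.g.:
    # boto3.client("sagemaker-runtime").invoke_endpoint(
    #     EndpointName=SAGEMAKER_ENDPOINT, Body=json.dumps(body),
    #     ContentType="application/json")
    answer = "(stubbed answer from {} for: {})".format(SAGEMAKER_ENDPOINT, question)
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```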

&lt;h2&gt;
  
  
  Training and Learning: 🧠🔍📚
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---pmKEK5m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ur9zb6u7tmr9jmq67d50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---pmKEK5m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ur9zb6u7tmr9jmq67d50.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Training BERT is no small feat. It requires extensive computational power and a wealth of training data. In our case, we leveraged the Hugging Face library, a treasure chest of pre-trained transformer models and NLP tools. &lt;/p&gt;

&lt;p&gt;🤗📚 By utilizing their state-of-the-art BERT model, we were able to accelerate our progress significantly. With Hugging Face, we embarked on a thrilling journey of transfer learning, fine-tuning BERT on our relocation dataset. &lt;/p&gt;

&lt;p&gt;This process involved exposing BERT to our dataset and allowing it to learn the intricate patterns and nuances of relocation-related questions and answers. It's like BERT transformed into a relocation expert, absorbing the knowledge from our dataset! 🏰🌐💡&lt;/p&gt;
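&lt;p&gt;One small but essential piece of that fine-tuning is turning each answer's character span in the context into start and end token positions. The sketch below shows that mapping in a framework-free way; the helper name and the toy offsets are our own illustration, not a Hugging Face API (real pipelines would take the offsets from the tokenizer's return_offsets_mapping=True output).&lt;/p&gt;

```python
# Sketch of one step in QA fine-tuning: mapping an answer's character span in
# the context onto start/end token indices. The per-token character offsets
# would normally come from tokenizer(..., return_offsets_mapping=True); here
# they are hand-written so the idea is easy to follow.

def char_span_to_token_span(offset_mapping, answer_start, answer_end):
    """offset_mapping holds (char_start, char_end) pairs, one per token."""
    start_token = end_token = None
    for i, (cs, ce) in enumerate(offset_mapping):
        # token i covers characters cs .. ce-1
        if start_token is None and answer_start in range(cs, ce):
            start_token = i
        # true when answer_end lies after cs and at most at ce
        if answer_end in range(cs + 1, ce + 1):
            end_token = i
    return start_token, end_token

# Toy context "Berlin offers housing": three tokens with their char offsets
offsets = [(0, 6), (7, 13), (14, 21)]
print(char_span_to_token_span(offsets, 14, 21))  # answer "housing" is token 2
```

&lt;p&gt;These token positions become the start/end labels that BERT's span-prediction head is trained against.&lt;/p&gt;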

&lt;h2&gt;
  
  
  💡 Brainstorming Across Continents: Turning Ideas into Reality! 💭✨
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8MAhN8zo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7xte4qov0uvzyutdwvrr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8MAhN8zo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7xte4qov0uvzyutdwvrr.png" alt="Image description" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Across continents and amidst busy schedules, our dedicated team members came together, meeting several times a week, to ignite creativity and propel our project forward. With vibrant brainstorming sessions and a virtual project board, we nurtured ideas, overcame challenges, and transformed obstacles into growth opportunities. Our meticulously crafted code now manifests our shared vision, as we embark on an exciting adventure to unravel the secrets of cutting-edge technologies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges: 🚧🤔⚡
&lt;/h2&gt;

&lt;p&gt;Building a Smart Relocation Question Answering Bot comes with its fair share of challenges. One major hurdle we faced was ensuring the availability and accuracy of the relocation data. Relocation-related information is dynamic and constantly evolving, making it crucial to keep our dataset up to date. &lt;/p&gt;

&lt;p&gt;Additionally, training BERT on such a vast dataset required substantial computational resources and time. We had to optimize our training pipeline and leverage distributed computing to overcome these challenges.&lt;/p&gt;

&lt;p&gt;Moreover, fine-tuning BERT to understand specific relocation queries required careful experimentation and hyperparameter tuning. But through perseverance and innovative problem-solving, we triumphed over these challenges! 💪🌟&lt;/p&gt;

&lt;h2&gt;
  
  
  Future Plans: 🌟🚀🔮
&lt;/h2&gt;

&lt;p&gt;We believe in the limitless potential of our Smart Relocation Question Answering Bot, and we have exciting plans for the future! Here's a sneak peek into what's coming next: 🔍🔮🚀&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Expanded Coverage: While our bot currently focuses on Germany, we have ambitious plans to extend its coverage to more countries worldwide. From France to Japan, the world is our oyster, and we want to assist you wherever your relocation dreams take you! 🌍🌏🌐&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enhanced Embassy Information: We're working hard to provide detailed embassy information for each country. 🏛️🗺️📞 Simply provide your zip code, and our bot will serve up valuable details about the nearest embassy. This feature will make visa procedures a breeze and help you navigate the administrative aspects of relocation with ease! ✈️📋🤝&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion: 🌟🔑✨
&lt;/h2&gt;

&lt;p&gt;With the power of transformers, the wisdom of BERT, and the magic of Hugging Face, our Smart Relocation Question Answering Bot is ready to accompany you on your relocation journey! 🤖🌍💼 By harnessing the latest advancements in NLP and machine learning, we have created a personalized, interactive, and informative bot that will revolutionize your relocation experience. Say goodbye to confusion and hello to a seamless transition! 🚀🌈🏠🔑 So, are you ready to embark on this exciting adventure with our bot as your trusted guide? Let's make your relocation dreams come true, one question at a time! 🌍💪🔥🎉&lt;/p&gt;

&lt;h2&gt;
  
  
  🌟 Meet the Exceptional Leaders of our Project! 🌟
&lt;/h2&gt;

&lt;p&gt;🚀 &lt;strong&gt;Arockia Nirmal Amala Doss&lt;/strong&gt; (Germany) - The Spark of Inspiration: Arockia Nirmal not only shared the initial idea behind this groundbreaking project but also served as the driving force that united and inspired our entire team. With his unwavering determination, Arockia Nirmal led us towards a common goal, ensuring that every step we took was infused with passion and purpose.&lt;/p&gt;

&lt;p&gt;🚀 &lt;strong&gt;Sujithra Kathiravan&lt;/strong&gt; (USA) - A visionary with a knack for innovation, Sujithra contributed invaluable insights and spearheaded our project's strategic direction.&lt;/p&gt;

&lt;p&gt;🌍 &lt;strong&gt;Felix Vidal Gutierrez Morales&lt;/strong&gt; (Uruguay) - With his boundless enthusiasm and exceptional coding skills, Felix played a pivotal role in transforming ideas into tangible results.&lt;/p&gt;

&lt;p&gt;🔬 &lt;strong&gt;Endah Bongo-Awah&lt;/strong&gt; (Germany) - A brilliant mind with a passion for cutting-edge technologies, Endah's contributions brought depth and excellence to our project.&lt;/p&gt;

&lt;p&gt;Together, our incredible team embarked on an exhilarating journey during the AWS Community Builders Hackathon. Despite the challenges of distance and time zones, we united our talents, met regularly, and fueled each other's creativity. &lt;/p&gt;

&lt;p&gt;Curious to see the magic we've created? You can explore our extraordinary work on our GitHub repository:&lt;/p&gt;

&lt;p&gt;🔗 GitHub Repo: &lt;a href="https://github.com/fvgm-spec/CB_AWS_AI_Hackathon"&gt;Link to the GitHub Repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Join us in celebrating this incredible team, as we showcase the power of community, innovation, and the AWS platform. Together, we are redefining what's possible and leaving a lasting impact on the world of AI. 🌍🚀✨&lt;/p&gt;

</description>
      <category>huggingface</category>
      <category>hackathon</category>
      <category>teamawsbuilders</category>
      <category>transformerenthusiast</category>
    </item>
    <item>
      <title>Athena Basics</title>
      <dc:creator>Sujithra</dc:creator>
      <pubDate>Sun, 22 Jan 2023 19:37:46 +0000</pubDate>
      <link>https://dev.to/sujikathir/athena-basics-3egk</link>
      <guid>https://dev.to/sujikathir/athena-basics-3egk</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;What is Athena?&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
— &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;  Interactive query service for analysis of data stored in S3&lt;/li&gt;
&lt;li&gt;— Serverless avoiding setup of infrastructure&lt;/li&gt;
&lt;li&gt;— Provides automatic scaling of data volume in queries&lt;/li&gt;
&lt;li&gt;— Leverages column-based table creation for parallel processing&lt;/li&gt;
&lt;li&gt;— Cloud based in-memory query system&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Business Role for Athena&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User-friendly query system for data stored in S3&lt;/li&gt;
&lt;li&gt;Central metadata store architecture, like Hive&lt;/li&gt;
&lt;li&gt;Focuses on unstructured and semi-structured data stored in S3&lt;/li&gt;
&lt;li&gt;Common examples of queried data include JSON, CSV, Apache Parquet, and Apache ORC large data files&lt;/li&gt;
&lt;li&gt;Emphasis is on large captured data files such as weblogs, IoT data, and other external data&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Creating Tables in Athena&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Athena creates tables using the Apache Hive Data Definition Language (DDL)&lt;/li&gt;
&lt;li&gt;Hive is an open-source Big Data toolset for analytics&lt;/li&gt;
&lt;li&gt;Uses SQL-compliant statements for table creation&lt;/li&gt;
&lt;/ol&gt;
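&lt;p&gt;As a sketch of what such a Hive-style DDL statement can look like (the table name, columns, and S3 locations below are hypothetical placeholders):&lt;/p&gt;

```python
# Illustrative Hive-style DDL for an Athena external table over CSV web logs.
# The table name, columns, and S3 locations are hypothetical placeholders.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS weblogs (
  request_time STRING,
  client_ip    STRING,
  url          STRING,
  status_code  INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://my-example-bucket/weblogs/'
"""

# In practice the statement would be submitted through the Athena API, e.g.:
# import boto3
# boto3.client("athena").start_query_execution(
#     QueryString=ddl,
#     ResultConfiguration={"OutputLocation": "s3://my-example-bucket/athena-results/"},
# )
```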

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Schema on Read&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verifies data organization only when a query is issued&lt;/li&gt;
&lt;li&gt;Provides much faster loading, as structure is not validated at load time&lt;/li&gt;
&lt;li&gt;Multiple schemas can serve different needs for the same data&lt;/li&gt;
&lt;li&gt;Better option when the schema is not known at loading time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Parallel Processing of Queries&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Parallel operations within a SQL query&lt;/li&gt;
&lt;li&gt;Concurrent users can access columns at the same time&lt;/li&gt;
&lt;li&gt;Horizontal and vertical parallelization of a single query operation using multiple nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Governed Tables&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Governed Tables are tables formed within a data lake created by AWS Lake Formation&lt;/li&gt;
&lt;li&gt;Similar to Managed Tables in Hive&lt;/li&gt;
&lt;li&gt;When a governed table is dropped, both the table definition in the metastore and the data files are deleted&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Iceberg Table&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Iceberg Table is an Apache open table format designed for large analytics datasets&lt;/li&gt;
&lt;li&gt;Manages a large collection of files as a table&lt;/li&gt;
&lt;li&gt;Iceberg tables must be associated with an AWS Glue catalog&lt;/li&gt;
&lt;li&gt;Must be created using the Parquet format in AWS&lt;/li&gt;
&lt;li&gt;Dropping the table deletes both the metastore entry and the data files&lt;/li&gt;
&lt;/ul&gt;
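&lt;p&gt;A hedged sketch of what an Iceberg table definition can look like in Athena (the database, columns, and S3 location are hypothetical):&lt;/p&gt;

```python
# Illustrative DDL for an Athena Iceberg table; the database, columns, and S3
# location are hypothetical. The Iceberg format is requested via TBLPROPERTIES.
iceberg_ddl = """
CREATE TABLE analytics_db.events (
  event_id   STRING,
  event_time TIMESTAMP,
  payload    STRING
)
LOCATION 's3://my-example-bucket/iceberg/events/'
TBLPROPERTIES ('table_type' = 'ICEBERG')
"""
```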

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Summary&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Athena is a serverless, cloud-based, in-memory query service&lt;/li&gt;
&lt;li&gt;Offers a federated query capability&lt;/li&gt;
&lt;li&gt;Uses a common metadata store architecture for table definitions&lt;/li&gt;
&lt;li&gt;Queries common data stored in JSON, ORC, and Parquet formats&lt;/li&gt;
&lt;li&gt;Uses the standard SQL query language&lt;/li&gt;
&lt;li&gt;Supports external, governed, and Iceberg tables&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Encrypt your S3 Object</title>
      <dc:creator>Sujithra</dc:creator>
      <pubDate>Mon, 31 May 2021 08:28:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/encrypt-your-s3-object-1al</link>
      <guid>https://dev.to/aws-builders/encrypt-your-s3-object-1al</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnu5noh8xw56wbz9zxcdt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnu5noh8xw56wbz9zxcdt.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The idea behind encrypting your S3 objects is this: you upload objects to Amazon S3, and these live on AWS servers, so you may want to make sure the objects are not accessible if, for example, someone gets into the Amazon servers, or you may need to adhere to security standards set by your company. For this, Amazon gives you four methods to encrypt objects in Amazon S3. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The first one is called &lt;strong&gt;SSE S3&lt;/strong&gt;. This is to encrypt S3 objects, using keys handled and managed by AWS. &lt;/li&gt;
&lt;li&gt;The second one is &lt;strong&gt;SSE-KMS&lt;/strong&gt;. It leverages the AWS Key Management Service (KMS) to manage your encryption keys. &lt;/li&gt;
&lt;li&gt;The third one is &lt;strong&gt;SSE-C&lt;/strong&gt;. It is used when you manage your own encryption keys.&lt;/li&gt;
&lt;li&gt;Finally &lt;strong&gt;Client-side encryption&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now we're going to do a deep dive on all of those so don't worry. &lt;/p&gt;

&lt;h3&gt;
  
  
  SSE-S3
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw156sy87ii7fx509nkt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw156sy87ii7fx509nkt.png" alt="SSE S3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is an encryption where the keys used to encrypt the data are handled and managed by Amazon S3. The object is encrypted server side; SSE means server-side encryption, and the encryption type is AES-256, an encryption algorithm. To upload an object with SSE-S3 encryption, you must set a header called x-amz-server-side-encryption with the value AES256. "x-amz" stands for "x-Amazon", and the server-side-encryption value is AES256; that is how you remember the name of the header. Let's have a look in detail. We have an un-encrypted object and we want to upload it into Amazon S3 with SSE-S3 encryption. We upload the object to Amazon S3, using the HTTP or HTTPS protocol, and add the header x-amz-server-side-encryption: AES256. Because of this header, Amazon S3 knows it should apply its own S3-managed data key; using that S3-managed key and the object, the encryption happens and the object is stored encrypted in your Amazon S3 bucket. Very simple, but in this case the data key is entirely owned and managed by Amazon S3.&lt;/p&gt;
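&lt;p&gt;As a small, hedged sketch of how this looks with boto3 (the bucket and object key below are hypothetical placeholders):&lt;/p&gt;

```python
# Hedged sketch of an SSE-S3 upload with boto3; the bucket and object key are
# hypothetical. Setting ServerSideEncryption to "AES256" is what produces the
# x-amz-server-side-encryption: AES256 header described above.
sse_s3_kwargs = {
    "Bucket": "my-example-bucket",     # hypothetical bucket
    "Key": "reports/demo.txt",         # hypothetical object key
    "Body": b"hello, encrypted world",
    "ServerSideEncryption": "AES256",
}
# import boto3
# boto3.client("s3").put_object(**sse_s3_kwargs)
```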

&lt;h3&gt;
  
  
  SSE-KMS
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5yxfgrvu8iaqbc9tpmy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5yxfgrvu8iaqbc9tpmy.png" alt="SSE KMS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;KMS is the Key Management Service, which is an encryption service. SSE-KMS is used when your encryption keys are handled and managed by the KMS service. Why would you use SSE-KMS over SSE-S3? Well, it gives you control over who has access to which keys, and it also gives you an audit trail. Each object is again encrypted server side, and for this to work you must set the header x-amz-server-side-encryption to the value aws:kms. The idea is exactly the same because it is server-side encryption: we upload the object using HTTP(S) together with that header, Amazon S3 knows to apply the KMS customer master key you have defined, and using this customer master key and your object, the encryption happens and the file is stored in your S3 bucket under the SSE-KMS encryption scheme.&lt;/p&gt;
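&lt;p&gt;A similar hedged boto3 sketch for SSE-KMS (the bucket, object key, and KMS key alias are hypothetical placeholders):&lt;/p&gt;

```python
# Hedged sketch of an SSE-KMS upload with boto3; the bucket, object key, and
# the KMS key alias are hypothetical placeholders.
sse_kms_kwargs = {
    "Bucket": "my-example-bucket",          # hypothetical bucket
    "Key": "reports/demo.txt",              # hypothetical object key
    "Body": b"hello, encrypted world",
    "ServerSideEncryption": "aws:kms",      # maps to the aws:kms header value
    "SSEKMSKeyId": "alias/my-example-key",  # hypothetical KMS key alias
}
# import boto3
# boto3.client("s3").put_object(**sse_kms_kwargs)
```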

&lt;h3&gt;
  
  
  SSE-C
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4vjwfy7krtetj4qfo43k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4vjwfy7krtetj4qfo43k.png" alt="SSE - C"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It stands for server-side encryption using keys that you provide yourself, from outside of AWS. In this case, Amazon S3 does not store the encryption key you provide: it only uses the key to perform the encryption, after which the key is discarded. To transmit the data to AWS you must use HTTPS, because you are sending a secret to AWS and therefore need encryption in transit. The encryption key must be provided in the HTTP headers of every request, because it is discarded every single time. So we have an object that we want encrypted in Amazon S3, but we want to provide the data key ourselves from the client side. We send both over HTTPS, an encrypted connection between you, the client, and Amazon S3. The data key travels in the headers, so Amazon S3 receives the object together with the client-provided data key. Again, it is server-side encryption, so Amazon S3 performs the encryption using these two things and stores the encrypted object in your S3 bucket. If you want to retrieve that file from Amazon S3 using SSE-C, you need to provide the same client-side data key that was used before. It requires a lot more management on your end, because you manage the data keys yourself and AWS does not know which data keys you have used. So it's a bit more involved.&lt;/p&gt;
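&lt;p&gt;A hedged boto3 sketch of the SSE-C parameters (bucket and object key are hypothetical; the key itself never leaves your control except inside the HTTPS request):&lt;/p&gt;

```python
import os

# Hedged sketch of an SSE-C upload with boto3; bucket and key names are
# hypothetical. You generate and keep the 256-bit key yourself; boto3 takes
# care of base64-encoding it into the request headers, and the request must
# go over HTTPS. The same key must be supplied again to read the object back.
customer_key = os.urandom(32)  # 256-bit key that you manage yourself
sse_c_kwargs = {
    "Bucket": "my-example-bucket",    # hypothetical bucket
    "Key": "reports/demo.txt",        # hypothetical object key
    "Body": b"hello, encrypted world",
    "SSECustomerAlgorithm": "AES256",
    "SSECustomerKey": customer_key,
}
# import boto3
# boto3.client("s3").put_object(**sse_c_kwargs)
```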

&lt;h3&gt;
  
  
  Client-Side Encryption
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo69kvz20yx400aqohvlw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo69kvz20yx400aqohvlw.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here the encryption is performed by the client: you, as the client, encrypt the object before uploading it to Amazon S3. Some client libraries can help you do this; for example, the Amazon S3 Encryption Client is one way to perform client-side encryption. As I said, the client must encrypt the data before sending it to S3. And if you receive data that was encrypted using client-side encryption, you are solely responsible for decrypting it yourself as well, so you need to make sure you have the right key available.&lt;/p&gt;

&lt;p&gt;So, as I said, in client-side encryption the customer entirely manages the keys and the encryption cycle. Let's take an example. This time Amazon S3 is just the bucket; it is not doing any encryption for us, because this is client-side encryption, not server-side encryption. On the client we use an encryption SDK; for example, the S3 Encryption SDK takes the object and our client-side data key. The encryption happens client side, so the object is fully encrypted on the client, and then we simply upload that already-encrypted object into Amazon S3.&lt;/p&gt;
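&lt;p&gt;To illustrate the round trip, here is a deliberately simplified sketch of the client-side flow, using a toy XOR keystream as a stand-in for a real cipher such as the AES used by a real encryption client. It is for illustration only and must not be used to protect real data:&lt;/p&gt;

```python
import hashlib, os

# Toy illustration of the client-side flow: encrypt locally, upload only the
# ciphertext, and decrypt after download with the same key. The XOR keystream
# below is a deliberately simple stand-in for a real cipher; do not use this
# toy scheme for real data.

def keystream(key, n):
    # SHA-256 in counter mode: enough pseudo-random bytes to cover n
    nblocks = -(-n // 32)  # ceil(n / 32)
    stream = b"".join(
        hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        for counter in range(nblocks)
    )
    return stream[:n]

def toy_encrypt(key, plaintext):
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

toy_decrypt = toy_encrypt  # XOR with the same keystream is its own inverse

key = os.urandom(32)  # the client generates and keeps this key
ciphertext = toy_encrypt(key, b"my relocation documents")
# ...upload ciphertext to S3; S3 never sees the key or the plaintext...
assert toy_decrypt(key, ciphertext) == b"my relocation documents"
```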

&lt;p&gt;Okay. So those are the four types of encryption; hopefully that makes sense. &lt;/p&gt;

&lt;h3&gt;
  
  
  Encryption in Transit: SSL/TLS
&lt;/h3&gt;

&lt;p&gt;Encryption in flight is also called SSL/TLS because it relies on SSL and TLS certificates. Amazon S3 exposes an HTTP endpoint, which is not encrypted, and an HTTPS endpoint, which is encrypted and provides what's called encryption in flight. You're free to use the endpoint you want, but if you use the console, for example, you are using HTTPS, and most clients use the HTTPS endpoint by default. So if you're using HTTPS, the data transfer between your client and Amazon S3 is fully encrypted; that's what's called encryption in transit. One thing to know: if you're using SSE-C, where the server-side encryption key is provided by your client, then HTTPS is mandatory.&lt;/p&gt;

</description>
      <category>security</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
