<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tambe Salome</title>
    <description>The latest articles on DEV Community by Tambe Salome (@giftcup).</description>
    <link>https://dev.to/giftcup</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F645823%2F243834cb-1595-495d-bb3b-6b6d9ca7c115.JPG</url>
      <title>DEV Community: Tambe Salome</title>
      <link>https://dev.to/giftcup</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/giftcup"/>
    <language>en</language>
    <item>
      <title>Contributing to OCaml-Tiff So Far - Writing TIFF Files</title>
      <dc:creator>Tambe Salome</dc:creator>
      <pubDate>Tue, 24 Feb 2026 13:26:20 +0000</pubDate>
      <link>https://dev.to/giftcup/contributing-to-ocaml-tiff-so-far-writing-tiff-files-45an</link>
      <guid>https://dev.to/giftcup/contributing-to-ocaml-tiff-so-far-writing-tiff-files-45an</guid>
      <description>&lt;p&gt;I have been working with OCaml and contributing to the &lt;a href="https://github.com/geocaml/ocaml-tiff" rel="noopener noreferrer"&gt;OCaml TIFF library&lt;/a&gt; since the 8th of December, 2025. While applying for the internship, I submitted what we envisaged as a timeline to complete the project at hand. My proposed timeline looked like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Week 1-4: Orientation and setup - learning OCaml concepts using the Real World OCaml book, getting a walkthrough of the project, and learning more about geospatial data and how it's managed in TIFF files.&lt;/li&gt;
&lt;li&gt;Week 5-6: Refactoring the code into low-level and high-level libraries&lt;/li&gt;
&lt;li&gt;Week 7-8: Building in cache support to enable writing TIFF files&lt;/li&gt;
&lt;li&gt;Week 9-10: Adding advanced support for geospatial data&lt;/li&gt;
&lt;li&gt;Week 11-12: Continuous work on the project, especially if some goals were not attained within the internship scope&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When I came up with this seemingly perfect plan to contribute to this project, I didn't yet have a full grasp of what I needed to know. So, over time, I had to adjust my approach to meet the project's goals.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it all played out
&lt;/h2&gt;

&lt;p&gt;As per the original plan, I spent the first few weeks learning OCaml by reading the Real World OCaml book and the &lt;a href="https://cs3110.github.io/textbook/cover.html" rel="noopener noreferrer"&gt;OCaml Programming&lt;/a&gt; textbook. To avoid getting stuck in a cycle of continuous learning and reading, my mentor, Patrick Ferris, recommended that I go through the project and just read code while learning. &lt;/p&gt;

&lt;p&gt;As someone new to the language, I found it difficult to understand much at first. But the goal is not to understand the entire project; it is to become a little more familiar with how data flows through the project and how it is manipulated. &lt;/p&gt;

&lt;h3&gt;
  
  
  Getting familiar with the project
&lt;/h3&gt;

&lt;p&gt;A good place to start when working on a new project is with the test files, and that is what I did. &lt;br&gt;
The project's benchmark tests originally measured the runtime of functions that read TIFF files using the Unix library, an interface to the Unix file system; however, the project also utilizes OCaml's &lt;a href="https://github.com/ocaml-multicore/eio/blob/main/README.md" rel="noopener noreferrer"&gt;Eio library&lt;/a&gt;. Eio is better suited to concurrent programs and provides a higher-level, cross-platform API. Since the project supports both backends, I updated the benchmark tests to compare the runtimes of reading TIFF files with the Unix and Eio backends; PR: &lt;a href="https://github.com/geocaml/ocaml-tiff/pull/56" rel="noopener noreferrer"&gt;Add support for eio library&lt;/a&gt;.&lt;br&gt;
Currently, the only way to know the contents of a TIFF file after reading it with the ocaml-tiff library is through the tests. To write effective new tests, I used &lt;a href="https://github.com/cgohlke/tifffile" rel="noopener noreferrer"&gt;Python's tifffile library&lt;/a&gt; to generate TIFF files with specific features, then wrote tests asserting that those values are read correctly by ocaml-tiff. The next step was to make sure ocaml-tiff interoperates with Raven. &lt;a href="https://github.com/raven-ml/raven" rel="noopener noreferrer"&gt;Raven&lt;/a&gt; provides OCaml with scientific computing capabilities and makes it easy to manipulate data. I swapped out the owl-base library for Raven's Nx library in the tests [PR: &lt;a href="https://github.com/geocaml/ocaml-tiff/pull/57/changes" rel="noopener noreferrer"&gt;Change the use of owl-base to nx in tests&lt;/a&gt;]. &lt;br&gt;
The pluggable backends in &lt;a href="https://ocaml.org/p/nx/latest" rel="noopener noreferrer"&gt;Nx&lt;/a&gt; allow implementations for different hardware. Nx also performs better than Owl and is OCaml's equivalent of Python's NumPy. Reading the documentation for the various libraries helped me better understand how they work and guided what I implemented. I also found the &lt;a href="https://discuss.ocaml.org/" rel="noopener noreferrer"&gt;OCaml Discourse&lt;/a&gt; forum very useful for finding answers to my questions.&lt;/p&gt;
&lt;h3&gt;
  
  
  Writing a TIFF File
&lt;/h3&gt;

&lt;p&gt;TIFF is a tag-based file format for storing and interchanging raster images. Digital images come in two formats, raster and vector. Raster images are produced when scanning or photographing an object and are generally what comes to mind when you think of an image.&lt;br&gt;
To effectively write a TIFF file, it is important to understand the file structure. I read through the &lt;a href="https://www.itu.int/itudoc/itu-t/com16/tiff-fx/docs/tiff6.pdf" rel="noopener noreferrer"&gt;TIFF specification&lt;/a&gt; docs to get familiar with the different parts of the TIFF file, the kind of data stored in it, and what each part means.&lt;/p&gt;

&lt;p&gt;The approach I used to write a TIFF file was to first successfully use an existing TIFF file to create a new one.&lt;/p&gt;

&lt;p&gt;The first part of a TIFF file is the Image File Header. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bytes 0-1: a string that specifies the file's byte order: "II" for little-endian and "MM" for big-endian.&lt;/li&gt;
&lt;li&gt;Bytes 2-3: a magic number that identifies the file as a TIFF: 42 for a classic TIFF and 43 for a BigTIFF. In a BigTIFF, the magic number is followed by two more 2-byte fields (the size of offsets, always 8, and a constant 0), so these fields occupy 6 bytes instead of 2.&lt;/li&gt;
&lt;li&gt;The offset of the first Image File Directory (IFD), that is, the location in the file where the IFD starts, is the last field in the header. It occupies 8 bytes in a BigTIFF and 4 bytes in a classic TIFF.&lt;/li&gt;
&lt;/ul&gt;
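&lt;p&gt;As an illustration of this layout, here is a small hypothetical OCaml sketch (stdlib only, not the library's API) that classifies a header from its first four bytes:&lt;/p&gt;

```ocaml
(* Hypothetical sketch: decode the byte order and magic number from the
   start of a TIFF header. Not ocaml-tiff code; OCaml stdlib only. *)
let classify (header : bytes) =
  let byte_order =
    match Bytes.sub_string header 0 2 with
    | "II" -> `Little
    | "MM" -> `Big
    | _ -> invalid_arg "not a TIFF header"
  in
  let magic =
    match byte_order with
    | `Little -> Bytes.get_uint16_le header 2
    | `Big -> Bytes.get_uint16_be header 2
  in
  match magic with
  | 42 -> (byte_order, `Tiff)
  | 43 -> (byte_order, `Bigtiff)
  | _ -> invalid_arg "unknown magic number"

(* A little-endian classic TIFF header begins with "II", then 42. *)
let () =
  let h = Bytes.make 8 '\000' in
  Bytes.blit_string "II" 0 h 0 2;
  Bytes.set_uint16_le h 2 42;
  assert (classify h = (`Little, `Tiff))
```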

&lt;p&gt;In the File module, I added a new type definition for the writer, &lt;em&gt;File.wo&lt;/em&gt;, similar to the one used for reading the TIFF.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type wo = file_offset:Optint.Int63.t -&amp;gt; Cstruct.t list -&amp;gt; unit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The definition specifies that to write to a file, the user needs to pass the file offset, that is, the location in the file at which writing should start, and a list of buffers containing the data to be written at that location.&lt;/p&gt;
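&lt;p&gt;To illustrate the shape of this type, here is a toy in-memory writer that satisfies a simplified version of the signature (using int and bytes in place of Optint.Int63.t and Cstruct.t; purely illustrative, not one of the library's backends):&lt;/p&gt;

```ocaml
(* Toy stand-in for File.wo: write a list of buffers into an in-memory
   "file" starting at the given offset. Simplified types for clarity. *)
type wo = file_offset:int -> bytes list -> unit

let make_memory_writer (file : bytes) : wo =
 fun ~file_offset bufs ->
  let pos = ref file_offset in
  List.iter
    (fun b ->
      Bytes.blit b 0 file !pos (Bytes.length b);
      pos := !pos + Bytes.length b)
    bufs

let () =
  let file = Bytes.make 8 '\000' in
  let wo = make_memory_writer file in
  wo ~file_offset:2 [ Bytes.of_string "II"; Bytes.of_string "*" ];
  assert (Bytes.sub_string file 2 3 = "II*")
```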

&lt;p&gt;Since the header is the first thing in the file, its offset is always zero. So, to write the header, I determined all the data that had to be stored in the header, stored it in a buffer, and then wrote that buffer to the file at offset=0.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let write_header wo header =
  let buf = Cstruct.create 16 in

  Cstruct.blit_from_string
    (match header.byte_order with Endian.Little -&amp;gt; "II" | Endian.Big -&amp;gt; "MM")
    0 buf 0 2;

  (match header.kind with
  | Tiff -&amp;gt;
      Endian.set_uint16 ~offset:2 header.byte_order buf 42;
      Endian.set_uint32 ~offset:4 header.byte_order buf
        (header.offset |&amp;gt; Optint.Int63.to_int32)
  | Bigtiff -&amp;gt;
      Endian.set_uint16 ~offset:2 header.byte_order buf 43;
      (* bytes 4-5 of a BigTIFF header hold the offset size, always 8 *)
      Endian.set_uint16 ~offset:4 header.byte_order buf 8;
      Endian.set_uint64 ~offset:8 header.byte_order buf
        (header.offset |&amp;gt; Optint.Int63.to_int64));

  wo ~file_offset:Optint.Int63.zero [ buf ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I also defined functions in the Unix and Eio modules that are used for writing. Since ocaml-tiff has both a Unix and an Eio backend, defining &lt;em&gt;File.wo&lt;/em&gt; gives the backends a single function type to implement.&lt;/p&gt;

&lt;p&gt;TIFF files are tag-based, and an IFD entry is used to store those tags and their values. The entry contains: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tag: A unique number used to identify the entry&lt;/li&gt;
&lt;li&gt;Field type: A number indicating the datatype of the entry's values&lt;/li&gt;
&lt;li&gt;Count: Number of values in the entry&lt;/li&gt;
&lt;li&gt;Value/offset: Contains the actual data if it fits in 4 bytes for a classic TIFF or 8 bytes for a BigTIFF. If the data is larger, it contains the data's location in the file instead.
Possible values for the entry tag and field type are listed in the TIFF specs.&lt;/li&gt;
&lt;/ul&gt;
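&lt;p&gt;To make the layout concrete, here is a hypothetical OCaml sketch (stdlib only, not the library's representation) that packs a classic-TIFF IFD entry into its 12-byte little-endian form:&lt;/p&gt;

```ocaml
(* Hypothetical sketch: pack a classic-TIFF IFD entry (12 bytes,
   little-endian): tag (2 bytes), field type (2), count (4),
   value/offset (4). Not ocaml-tiff's actual representation. *)
let encode_entry ~tag ~field_type ~count ~value =
  let b = Bytes.make 12 '\000' in
  Bytes.set_uint16_le b 0 tag;
  Bytes.set_uint16_le b 2 field_type;
  Bytes.set_int32_le b 4 (Int32.of_int count);
  Bytes.set_int32_le b 8 (Int32.of_int value);
  b

(* ImageWidth (tag 256) as a SHORT (field type 3), one value of 640,
   which fits inline in the value/offset field. *)
let () =
  let e = encode_entry ~tag:256 ~field_type:3 ~count:1 ~value:640 in
  assert (Bytes.get_uint16_le e 0 = 256);
  assert (Bytes.get_int32_le e 8 = 640l)
```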

&lt;p&gt;At this stage, I was using the IFD entries read from one file to write to the new file. Writing an entry whose value was not immediately stored in its IFD entry was quite challenging when copying the IFD to the new file. In such cases, I had to write the entry, then read the entry's data at the offset specified in the entry and write it to the file at that offset. It doesn't sound as complicated now, but it was a hassle when I was trying to figure it out :)&lt;/p&gt;

&lt;p&gt;IFD entries are separate from the actual image data. The data is stored in strips, and an important IFD entry is &lt;strong&gt;StripOffsets&lt;/strong&gt;, which contains the locations of the strips of data. The image data need not come immediately after the IFD entries, so the offset values are used to read and write the data accurately. The StripByteCounts entry gives the length of the strip at each offset.&lt;/p&gt;
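&lt;p&gt;The way StripOffsets and StripByteCounts pair up can be sketched in a few lines of OCaml (a hypothetical illustration with simplified types, not the library's API):&lt;/p&gt;

```ocaml
(* Hypothetical sketch: strip i lives at strip_offsets.(i) and is
   strip_byte_counts.(i) bytes long, so copying the image data means
   copying each (offset, length) slice in turn. *)
let copy_strips ~read ~write ~strip_offsets ~strip_byte_counts =
  List.iter2
    (fun offset length -> write ~offset (read ~offset ~length))
    strip_offsets strip_byte_counts

let () =
  (* Two 6-byte strips at offsets 8 and 16 of a pretend file. *)
  let src = "HEADER**STRIP1**STRIP2" in
  let dst = Bytes.of_string src in
  let read ~offset ~length = String.sub src offset length in
  let write ~offset s = Bytes.blit_string s 0 dst offset (String.length s) in
  copy_strips ~read ~write ~strip_offsets:[ 8; 16 ]
    ~strip_byte_counts:[ 6; 6 ];
  assert (Bytes.to_string dst = src)
```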

&lt;p&gt;I used the same idea I described for writing the header to write the IFD entries and the image data to the file. The complete implementation for duplicating a TIFF file can be seen in this &lt;a href="https://github.com/geocaml/ocaml-tiff/pull/59" rel="noopener noreferrer"&gt;PR&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The final step is to be able to make a TIFF file from scratch, which will be the discussion for the next post.&lt;/p&gt;

&lt;h3&gt;
  
  
  Important to Note - Gotcha Moments
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Most of the time, the StripOffsets values do not fit into their IFD entry, since they are larger than the inline value size. At first, I thought writing the strip offsets meant writing the image data; however, they are just the locations of the actual image data. That means writing the StripOffsets entry actually means writing, at an offset, the offsets of the data😂&lt;/li&gt;
&lt;li&gt;In OCaml, an optional argument must be followed by at least one positional argument; never make an optional argument the last parameter, or callers will never be able to omit it.&lt;/li&gt;
&lt;li&gt;Endianness specifies the byte order in the file: least significant byte first for little-endian, and most significant byte first for big-endian. Whether the file is a BigTIFF or a classic TIFF depends on the magic number in the header, not on the endianness.&lt;/li&gt;
&lt;/ul&gt;
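&lt;p&gt;The optional-argument gotcha fits in a couple of lines (a generic illustration, not code from the project):&lt;/p&gt;

```ocaml
(* An optional argument must be followed by a positional one,
   otherwise callers can never omit it. *)
let good ?(offset = 0) buf = offset + Bytes.length buf

(* let bad buf ?(offset = 0) = offset + Bytes.length buf
   (* triggers warning 16: "this optional argument cannot be erased" *) *)

let () =
  assert (good (Bytes.create 4) = 4);          (* offset omitted *)
  assert (good ~offset:2 (Bytes.create 4) = 6)
```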

</description>
      <category>opensource</category>
      <category>beginners</category>
      <category>ocaml</category>
    </item>
    <item>
      <title>About Me - An Outreachy Blog</title>
      <dc:creator>Tambe Salome</dc:creator>
      <pubDate>Mon, 22 Dec 2025 13:06:45 +0000</pubDate>
      <link>https://dev.to/giftcup/about-me-an-outreachy-blog-1k75</link>
      <guid>https://dev.to/giftcup/about-me-an-outreachy-blog-1k75</guid>
      <description>&lt;p&gt;Hi, I am Tambe Salome, and I am an &lt;a href="https://www.outreachy.org/" rel="noopener noreferrer"&gt;Outreachy&lt;/a&gt; Intern! In this blog, I will be talking about my journey as a software developer and how I found myself as an Outreachy intern in the &lt;a href="https://ocaml.org/" rel="noopener noreferrer"&gt;OCaml&lt;/a&gt; organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Brief Introduction
&lt;/h2&gt;

&lt;p&gt;Again, I am Tambe (tam-bear) Salome. I am a 22-year-old girl from Buea, Cameroon, a country in Central Africa. I graduated from the University of Buea in 2024 with a Bachelor of Engineering in Computer Engineering. I like building secure software and contributing to tech communities. I have been a volunteer for the Google Developer Groups Buea since 2020. &lt;/p&gt;

&lt;h2&gt;
  
  
  Tambe the Explorer🔭
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Outreachy Attempts
&lt;/h3&gt;

&lt;p&gt;I first applied to be an Outreachy intern in 2021, when I had just started learning how to code with C. I used the book C Programming: A Modern Approach and really loved it; it gave me solid programming foundations. I was accepted for the contribution phase, but quickly became overwhelmed by the available projects, and as a result I was unable to make any contributions during that period. &lt;br&gt;
I also didn't seek guidance early enough and was trying to figure everything out on my own.&lt;br&gt;
I've applied to Outreachy almost every other year since then, but was never accepted for the contribution phase until this year, when I finally was, and then I got in :)!&lt;/p&gt;

&lt;h3&gt;
  
  
  To Code or Not to Code
&lt;/h3&gt;

&lt;p&gt;I started with the C programming language, but had difficulty finding places where I could use it extensively to build really useful things. Almost all the projects I came across on GitHub that used C were written in a style that seemed very foreign to me😄. As a beginner, I could barely understand what was going on in those projects, let alone figure out where to start contributing. I later explored web development with Laravel and then React. However, none of what I was exposed to was really "exciting" for me. I felt like I wanted to do something different. &lt;/p&gt;

&lt;p&gt;Ever since our internet was cut off around 2017, I have fantasized about being a hacker and finding a way to restore it completely 😂. I think this was one of the things that led me to explore cybersecurity. &lt;br&gt;
I began my cybersecurity journey with &lt;a href="https://tryhackme.com/" rel="noopener noreferrer"&gt;tryhackme.com&lt;/a&gt;, and was later accepted into the &lt;a href="https://cybersafefoundation.org/our-programs/cybergirls-fellowship/" rel="noopener noreferrer"&gt;CyberGirls Fellowship&lt;/a&gt;, a rigorous one-year program designed to encourage women to enter the field of cybersecurity. I followed the Vulnerability and Penetration Testing track, and although it was exciting, I didn't get to write a lot of code during that time. Also, because I was doing this while in engineering school, I had little time for personal projects that would have me writing code. &lt;/p&gt;

&lt;p&gt;This made me realise that I would like not just to test the security of systems but also to be actively involved in building those secure systems. Day by day, I am discovering more of what this means to me.&lt;/p&gt;

&lt;h3&gt;
  
  
  Community
&lt;/h3&gt;

&lt;p&gt;I have always loved the idea of contributing to tech communities. It's been fulfilling to see people discover new possibilities or start a career in tech because of an event that I helped organize. Most times, it feels like being part of a cause bigger than myself. I haven't figured everything out myself, and seeing people figure out even what remains a mystery to me, simply because I was able to bring a group of people together for a workshop or a couple of talks, is extremely satisfying. &lt;/p&gt;

&lt;p&gt;At GDG Buea, our motto is: A Community that Codes, and I'm very much interested in contributing to building a community of practitioners who build software at a global standard.&lt;/p&gt;

&lt;h3&gt;
  
  
  Discovering OCaml
&lt;/h3&gt;

&lt;p&gt;Before this internship, all I knew about functional programming was probably only what was briefly mentioned in a course at school where we talked about programming paradigms. &lt;br&gt;
Despite not knowing any OCaml before, I randomly picked a project in the OCaml organization because the project's chats were not already saturated with people wanting to contribute, so I thought this would increase my chances of getting in. &lt;br&gt;
Unlike the last time I was accepted into the contribution phase, this time I sought guidance and support from Day 1. I was constantly reminded to ask questions in the channels whenever I got stuck and to share my progress on the tasks I was assigned. &lt;br&gt;
I am contributing to an &lt;a href="https://github.com/geocaml/ocaml-tiff" rel="noopener noreferrer"&gt;OCaml library for reading and writing TIFF&lt;/a&gt; files, and it all sounded so exciting and somewhat scary from the start. The project description mentioned extending the library's support for GeoTIFF files, which store geospatial data; just what an explorer would like😄. &lt;br&gt;
I read and re-read the existing code, and started learning OCaml with the &lt;a href="https://realworldocaml.org" rel="noopener noreferrer"&gt;Real World OCaml &lt;/a&gt; book. Picked an issue, worked on it, got blocked, got unblocked by mentors, made pull requests, and now I'm here interning in the OCaml organization, contributing directly to a library, and working with some of the best engineers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Now
&lt;/h2&gt;

&lt;p&gt;I am continuously learning OCaml while contributing to the tiff library with the help of my amazing mentor, Patrick Ferris. Each day, I put in maximum effort towards the project's goal of enabling the library to write TIFF files.&lt;/p&gt;

&lt;p&gt;One of my favorite talks is by Olumide Balogun, who constantly reminds me about &lt;a href="https://youtu.be/xoWGS6YZ9E8?si=r-qgyXDP1YkdNJGW" rel="noopener noreferrer"&gt;Doing Hard Things&lt;/a&gt;, which I remind myself about each time I get stuck.&lt;/p&gt;

&lt;p&gt;I look forward to working extensively with OCaml, working closely with other OCaml developers, and contributing to the mission of making OCaml mainstream.&lt;/p&gt;

</description>
      <category>outreachy</category>
      <category>opensource</category>
      <category>ocaml</category>
    </item>
    <item>
      <title>Common Security Vulnerabilities in Dockerfiles</title>
      <dc:creator>Tambe Salome</dc:creator>
      <pubDate>Thu, 14 Dec 2023 17:00:43 +0000</pubDate>
      <link>https://dev.to/giftcup/common-security-vulnerabilities-in-dockerfiles-lbm</link>
      <guid>https://dev.to/giftcup/common-security-vulnerabilities-in-dockerfiles-lbm</guid>
      <description>&lt;p&gt;According to &lt;a href="https://docs.docker.com"&gt;Docker Docs&lt;/a&gt;, a &lt;strong&gt;Dockerfile&lt;/strong&gt; is a text document that contains all the instructions a user could call on the command line to assemble an image. These instructions include actions like installing software, copying files, setting environment variables, and defining how an application should run.&lt;/p&gt;

&lt;p&gt;Wrongly implementing instructions in a Dockerfile might present vulnerabilities which attackers can exploit to gain root access to the underlying infrastructure, gain privileges by using exposed secrets, and so much more.&lt;/p&gt;

&lt;p&gt;I have compiled a list of common Docker security issues in YAML. For each issue, I defined a regex rule that can be used to statically analyze your Dockerfile. &lt;br&gt;
There are some limitations to using regex rules for static analysis, some of which include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dockerfile syntax may evolve over time, causing the current regex patterns to become outdated.&lt;/li&gt;
&lt;li&gt;A regex cannot interpret the intended purpose of certain instructions.&lt;/li&gt;
&lt;li&gt;Regex patterns may struggle with the multiline and nested instructions found in Dockerfiles.&lt;/li&gt;
&lt;li&gt;They may produce false positives or false negatives.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find the .yaml rules in &lt;a href="https://github.com/giftcup/secure-code-scanning/tree/main/VulDocker"&gt;this repository&lt;/a&gt;. I would appreciate your feedback and contributions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Security Issues
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Using the 'latest' tag for base images.
&lt;/h3&gt;

&lt;p&gt;It is advisable to pin your Docker image to a particular version instead of using the 'latest' tag, which resolves to a different version at different points in time.&lt;/p&gt;

&lt;p&gt;Using the latest tag doesn't guarantee that you are using the most recent and stable version of an image. It might introduce breaking changes in the best case, or security vulnerabilities in the worst. &lt;/p&gt;

&lt;p&gt;Always pin your images and dependencies to particular versions and update them regularly.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Running as Root
&lt;/h3&gt;

&lt;p&gt;Secure system design follows the &lt;a href="https://en.wikipedia.org/wiki/Principle_of_least_privilege"&gt;principle of least privilege&lt;/a&gt;. This simply states that a process should be able to access only the information and resources it needs to accomplish its legitimate purpose and nothing more. &lt;/p&gt;

&lt;p&gt;Running processes as root in a Dockerfile can allow malicious code or other users to gain full access to your container and host system. Not defining a user at all can also lead to unexpected behaviour, such as root access being granted to non-root users or code.&lt;/p&gt;

&lt;p&gt;To avoid this, always define a non-root user in your Dockerfile to execute commands. This is done using the USER instruction.&lt;/p&gt;
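&lt;p&gt;As a quick sketch (the image tag, user and group names here are illustrative, not from any particular project), a Dockerfile that follows this advice might look like:&lt;/p&gt;

```dockerfile
# Pin the base image to a specific version instead of 'latest'
FROM node:20.10-alpine

# Create an unprivileged user and group (Alpine's busybox adduser/addgroup)
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# All subsequent instructions and the container process run as this user
USER appuser

CMD ["node", "server.js"]
```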

&lt;h3&gt;
  
  
  3. Using Sudo to execute commands
&lt;/h3&gt;

&lt;p&gt;Just as you shouldn't use the root user, you shouldn't use sudo to run commands either, as it also violates the principle of least privilege.&lt;/p&gt;

&lt;p&gt;When running as a user, also ensure the user is not in the sudoers file to avoid the complications that come from running as root.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Copying Unnecessary Files
&lt;/h3&gt;

&lt;p&gt;Unnecessary files in your docker image can increase the size, complexity, and attack surface of your image. It can also slow down the build and run time of your docker container. &lt;/p&gt;

&lt;p&gt;It is a Dockerfile best practice to create a subfolder that contains the files that need to be copied to the docker container image. Also, where possible, use a &lt;strong&gt;.dockerignore&lt;/strong&gt; file to explicitly exclude files and directories. Avoid using wildcards like '.' when defining a COPY directive.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Exposed Secrets
&lt;/h3&gt;

&lt;p&gt;Never store secrets or credentials in any Docker instruction, such as environment variables or build args, and never hardcode them into a command.&lt;/p&gt;

&lt;p&gt;You could make use of Docker Secrets to provide the values as environment variables.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Using Vulnerable Images
&lt;/h3&gt;

&lt;p&gt;This kind of vulnerability is introduced through the hierarchy of layers used to build the container image, and it creates a supply-chain risk.&lt;/p&gt;

&lt;p&gt;Untrusted base images are the main source of such vulnerabilities and should be avoided at all costs. The source of a base image should be verified, and the image should be updated regularly.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>security</category>
    </item>
    <item>
      <title>When Container Orchestration is an Overkill</title>
      <dc:creator>Tambe Salome</dc:creator>
      <pubDate>Mon, 30 Oct 2023 15:42:55 +0000</pubDate>
      <link>https://dev.to/giftcup/when-container-orchestration-is-an-overkill-44pk</link>
      <guid>https://dev.to/giftcup/when-container-orchestration-is-an-overkill-44pk</guid>
      <description>&lt;p&gt;Technology is becoming fascinating each day and the hype growing around concepts like container orchestration tools (e.g Kubernetes ☸️) keeps growing 📈. On one hand, these technologies are indeed amazing and companies are using them to better meet the needs of their users. &lt;/p&gt;

&lt;p&gt;However, because of the hype around these technologies, a lot of companies both new and old are beginning to over-engineer their solutions, increasing the complexity of their products and focusing on how they have implemented these technologies instead of meeting the basic needs of their users 🤧 which can be done with simpler solutions. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This scenario can be likened to using an 🪓 axe to kill an 🐜 ant, which is totally not necessary. You can just step on it with your feet😂 (p.s. Don't kill ants, save the planet🌚)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This over-engineering can cause organizations to lose a lot of money, it can divert their attention from building solutions to setting up systems, and overall, it needlessly increases the complexity of the project.&lt;/p&gt;

&lt;p&gt;One such concept that has recently been used to over-engineer solutions is container orchestration😕. In this article, we will look at what container orchestration is and a brief history of how it came about🕰️. We will then look at the benefits of using a container orchestration system, and finally at when it can become overkill.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Container Orchestration☸️?
&lt;/h2&gt;

&lt;p&gt;Container orchestration automates the provisioning, deployment, networking, scaling, availability, and lifecycle management of containers. Some popular container orchestration tools are: Apache Mesos, Docker Swarm, and Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgre1dh5sfuuqqs099rhe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgre1dh5sfuuqqs099rhe.png" alt="Examples of Orchestration services"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Application development in recent times has moved from a monolithic approach 💧 to microservices 💦. Microservices are individual units of software that, combined together, provide all of the functions required to run an application. Typically, each microservice handles a discrete type of functionality. An application can have separate services for image recognition🔴, authentication🟡, the database🔵 and the UI⚪. Each of these communicates with the others using lightweight mechanisms like HTTP resource APIs, so that together they behave as one application 🔴🟡🔵⚪.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvxr2v2ycjnb6s3zbfa4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvxr2v2ycjnb6s3zbfa4.png" alt="Microservice application overview"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Containers📦 make the microservice architectural paradigm easy to implement: each microservice can be hosted in its own container. This means that a containerized application can translate into operating hundreds or thousands of containers, especially when building and operating large-scale systems.&lt;/p&gt;

&lt;p&gt;Managing all of these containers manually would add enormous complexity. With container orchestration, that complexity is reduced, as orchestration provides a declarative way of automating most of the work 🔁.&lt;/p&gt;
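&lt;p&gt;To make the declarative idea concrete, here is a minimal sketch of a Kubernetes Deployment manifest (the name &lt;em&gt;web&lt;/em&gt; and the image &lt;em&gt;nginx:1.25&lt;/em&gt; are placeholders, not from any real project). You describe the desired state, and the orchestrator keeps the cluster converged to it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # desired state: keep 3 copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After a &lt;em&gt;kubectl apply -f deployment.yaml&lt;/em&gt;, Kubernetes itself restarts or reschedules containers whenever one fails, instead of an operator doing it by hand.&lt;/p&gt;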

&lt;h2&gt;
  
  
  Benefits of Orchestration
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt;🔒 Container orchestration's automated approach helps reduce the chance of human error, keeping containerized applications secure. Also, because containers run in isolated environments, container orchestration tools can share only specific resources between users, reducing the risk of data breaches and other traditional security vulnerabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved development:&lt;/strong&gt;🏂 Orchestration allows teams to roll out and roll back versions or features with ease. User needs are met faster because developing, testing, and delivering software becomes quicker, more repeatable, and more efficient.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Saving:&lt;/strong&gt;💰 Container orchestration systems require less human effort, enabling companies to save money by needing fewer staff to build and manage systems. Also, orchestration systems maintain optimal usage of processing and memory resources, removing unnecessary overhead expenses.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When is Orchestration an Overkill?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;overkill&lt;/strong&gt; ( &lt;strong&gt;&lt;em&gt;noun&lt;/em&gt;&lt;/strong&gt; )&lt;br&gt;
&lt;em&gt;meaning&lt;/em&gt;: excessive use, treatment, or action.&lt;br&gt;
&lt;em&gt;example&lt;/em&gt;: "animators now face a dilemma of technology overkill"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  For Small Startups 🤏
&lt;/h3&gt;

&lt;p&gt;Setting up a container orchestration system might be too complex🤹‍♀️ and time-consuming⏱️ for these kinds of teams. It is a major project that can end up disrupting existing workflows.&lt;/p&gt;

&lt;p&gt;For example, in the case of Kubernetes, worker nodes need to be configured with memory, disk speed, storage capacity and much more. You would also need to set up access control and security for the Kubernetes clusters and configure the control plane. After completing the setup, the environment needs to be regularly managed, maintained and monitored. This increases the complexity of the project and can divert the team into managing infrastructure instead of solving user problems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Having Minimal Scalability Needs ⚖️
&lt;/h3&gt;

&lt;p&gt;Orchestration can introduce unnecessary overhead for applications that have consistently low traffic or whose resource demands are predictable. For such applications, taking scalability into account imposes extra costs in architectural decisions and design. &lt;/p&gt;

&lt;p&gt;Also, part of the scalability planning phase is &lt;strong&gt;capacity planning&lt;/strong&gt;, which involves gathering data, performing calculations and making projections based on the data. The scalability of the system would then need to be tested and validated by building test setups, generating test data and analyzing the test results. &lt;/p&gt;

&lt;p&gt;All of these steps simply lengthen development time, planning for a scale that will not be reached for a very long time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost 💸
&lt;/h3&gt;

&lt;p&gt;Container orchestration systems require more infrastructure, including servers, storage and networking. There is also the additional cost of managing and maintaining these systems, which includes the time and resources needed to train the team as well as the salaries of skilled employees.&lt;/p&gt;

&lt;p&gt;Although Kubernetes provides a way to ensure the reliability and scalability of applications, it doesn't explicitly provide a way to measure and manage cost. Because of the dynamic nature of containers in a Kubernetes environment, cost optimization becomes more complex to implement, as it requires understanding the interplay between the various components, configurations, and resource utilization patterns. &lt;/p&gt;

&lt;p&gt;Also, Kubernetes operates at the infrastructure level, making it even more complex to allocate cost to specific applications or workloads. Organisations would need to set up third-party tools in order to accomplish this.&lt;/p&gt;

&lt;p&gt;Lastly, using a managed Kubernetes service such as Amazon's EKS, Azure Kubernetes Service or Google Kubernetes Engine can come with additional costs such as cluster management fees, control plane charges and other fees depending on the services provided by the cloud provider. These services are not really necessary for simple solutions like a single-page application with a database and a cache. These types of applications can simply be deployed on a VPS. Container orchestration is best suited for applications running multiple microservices that need to communicate with one another.&lt;/p&gt;
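&lt;p&gt;For comparison, a simple application like that can usually be run on a single VPS with Docker Compose alone. A minimal sketch (the service names and the &lt;em&gt;my-spa&lt;/em&gt; image are illustrative assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  web:
    image: my-spa:latest            # placeholder image for the single-page app
    ports:
      - "80:80"
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example    # use a proper secret in practice
    volumes:
      - db-data:/var/lib/postgresql/data
  cache:
    image: redis:7
volumes:
  db-data:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A single &lt;em&gt;docker compose up -d&lt;/em&gt; brings the whole stack up, with no cluster to manage.&lt;/p&gt;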

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As we have seen, there are benefits to using a container orchestration system. However, before deciding to use one, carefully consider whether it is absolutely necessary. Do you really need it, or would it just add needless complexity and cost to your current workflow? Perhaps you can make use of a simpler solution. &lt;/p&gt;

&lt;p&gt;Always remember that whatever you build is to satisfy the user first before considering how amazing you might feel from using popular tools 😂. I wish you the best in your development journey ❤️ &lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Additional Resources&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://circleci.com/blog/what-is-container-orchestration/" rel="noopener noreferrer"&gt;Benefits of Container Orchestration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.techtarget.com/searchapparchitecture/tip/How-microservices-and-containers-work-apart-and-together" rel="noopener noreferrer"&gt;Overview of Microservices&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://thenewstack.io/kubernetes-isnt-always-the-right-choice/" rel="noopener noreferrer"&gt;Kubernetes Isn't Always the Right Choice
&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Thank you for reading through till the end😊. I hope it helped you in one way or the other to understand and use a particular concept. If you enjoyed reading this, do leave a like ❤️ and a comment stating how I can improve 💡&lt;/p&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Overview of Containerization with Docker.</title>
      <dc:creator>Tambe Salome</dc:creator>
      <pubDate>Tue, 24 Oct 2023 15:38:48 +0000</pubDate>
      <link>https://dev.to/giftcup/overview-of-containerization-with-docker-2c9h</link>
      <guid>https://dev.to/giftcup/overview-of-containerization-with-docker-2c9h</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to Docker
&lt;/h2&gt;

&lt;p&gt;According to Wikipedia, &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are lightweight and contain everything necessary to run an application, such that they become the unit for distribution and testing of applications.&lt;/p&gt;

&lt;p&gt;Docker helps developers build, share, and run applications anywhere which significantly reduces the delay between writing code and running it in production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h47QhcEh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m58w1d9n2ggvkj464xf6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h47QhcEh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m58w1d9n2ggvkj464xf6.png" alt="Image description" width="415" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Images
&lt;/h3&gt;

&lt;p&gt;An image is a read-only template with instructions for creating a container. Often, an image is based on another image, known as the &lt;strong&gt;base image&lt;/strong&gt;.&lt;br&gt;
A &lt;strong&gt;Dockerfile&lt;/strong&gt; contains the different steps to create an image and run it. &lt;br&gt;
Images can also be downloaded from a public registry such as &lt;a href="https://hub.docker.com/"&gt;Docker Hub&lt;/a&gt;, a cloud-hosted registry that allows developers to share container images. Companies can also have private repositories where they store container images.&lt;/p&gt;
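&lt;p&gt;As a small sketch, a Dockerfile for a Python application could look like this (the file names &lt;em&gt;requirements.txt&lt;/em&gt; and &lt;em&gt;app.py&lt;/em&gt; are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.11-slim                               # base image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt  # install dependencies
COPY . .
CMD ["python", "app.py"]                            # command run on container start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running &lt;em&gt;docker build -t my-app .&lt;/em&gt; then produces an image from these instructions.&lt;/p&gt;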

&lt;h3&gt;
  
  
  Docker Containers
&lt;/h3&gt;

&lt;p&gt;A container is a runnable instance of an image. It defines a way to package applications with all necessary dependencies and configurations. This makes  the application easy to share. It also makes development and deployment more efficient.&lt;/p&gt;

&lt;p&gt;Before containers, every developer on a team had to independently install all of the services an application depended on, and the installation steps varied with each developer's operating system. The installation also involved multiple steps, making the process error-prone and tedious.&lt;/p&gt;

&lt;p&gt;However, with containers, nothing needs to be installed directly on the operating system, because each container is an isolated system of its own with a Linux base OS. Containers are packaged with all the configuration needed for particular services; they only need to be pulled and used.&lt;/p&gt;

&lt;p&gt;When a container is removed, any changes to its state that are not stored in a persistent storage disappear.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Containerization
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Portability&lt;/strong&gt;: Containers encapsulate all of the configurations and dependencies needed to run an application, abstracted away from the host operating system. This makes it easy to move a container from one environment to another, which in turn lets teams adopt agile processes for rapid application development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Containers are lightweight, cheap, and quick to launch. This makes it easier and faster to scale systems and to improve responsiveness and performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;: Containers isolate the processes of one container from another, as well as from the host system. Thus, malicious code running in one container is inherently prevented from affecting others.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security in Docker
&lt;/h2&gt;

&lt;p&gt;When Docker is used properly, it can improve the overall security posture of an application in comparison to running it directly on the host. However, misconfigurations can downgrade security and even introduce new vulnerabilities.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Set resource quotas&lt;/strong&gt; to limit the amount of memory and CPU resources that a container can consume. This helps prevent one application or container from using all system resources, which could otherwise be exploited for a Denial of Service (DoS) attack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Do not run as root&lt;/strong&gt;: Running Docker as an unprivileged user ensures that if an attacker breaks out of a container, they will not have root privileges on the host, limiting the attack surface.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Secure the container registries&lt;/strong&gt;: Container registries make it easy to set up a central repository from where container images can easily be pulled. The container registry you use should allow only particular users to upload or download from the registry.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
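&lt;p&gt;For instance, the first two practices can be applied when starting a container with Docker's &lt;em&gt;--memory&lt;/em&gt;, &lt;em&gt;--cpus&lt;/em&gt; and &lt;em&gt;--user&lt;/em&gt; flags (the specific limits and user ID below are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cap memory and CPU usage, and run as an unprivileged user
docker run --memory=512m --cpus=1.5 --user 1000:1000 nginx:1.25
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;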

&lt;p&gt;💡 If you are interested in knowing more about docker security, you can check out this post on &lt;a href="https://spacelift.io/blog/docker-security"&gt;21 Docker Security Best Practices - Daemon, Image &amp;amp; Container&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;Thank you for reading through till the end😊. I hope it helped you in one way or the other to understand and use a particular concept. If you enjoyed reading this, do leave a like ❤️ and a comment stating how I can improve 💡&lt;/p&gt;

</description>
      <category>docker</category>
      <category>security</category>
      <category>containers</category>
    </item>
    <item>
      <title>Automatically Provision AWS Resources with Terraform</title>
      <dc:creator>Tambe Salome</dc:creator>
      <pubDate>Wed, 20 Sep 2023 16:59:13 +0000</pubDate>
      <link>https://dev.to/giftcup/automatically-provision-aws-resources-with-terraform-1dap</link>
      <guid>https://dev.to/giftcup/automatically-provision-aws-resources-with-terraform-1dap</guid>
      <description>&lt;p&gt;&lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt; provides a way of managing and provisioning infrastructure through code instead of manually. This helps improve infrastructure consistencies and increases speed for deployments as the same code can be used to provision multiple deployment environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt; is an infrastructure as code tool that lets you build, change, and version infrastructure safely and efficiently. It allows you to build and monitor infrastructure for multiple cloud platforms.&lt;br&gt;
Terraform plugins (&lt;strong&gt;providers&lt;/strong&gt;) let you interact with cloud platforms and other services through their APIs.&lt;br&gt;
A &lt;strong&gt;provider&lt;/strong&gt; is a plugin that Terraform uses to create and manage resources. You can view the list of available providers &lt;a href="https://registry.terraform.io/browse/providers"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we will show how to automatically deploy an Amazon RDS MySQL instance, an ElastiCache Redis cluster and a Lambda function, all in the same VPC, with security group rules that enable the Lambda function to interact with the Redis cluster and the MySQL database. The deployment will be done using Terraform.&lt;/p&gt;
&lt;h3&gt;
  
  
  Clone the Sample Repository
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone git@github.com:giftcup/terraform.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then move into the &lt;em&gt;lambda-serverless&lt;/em&gt; directory to view the sample code&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd lambda-serverless
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli"&gt;Terraform 0.14+&lt;/a&gt; installed locally&lt;/li&gt;
&lt;li&gt;An &lt;a href="https://portal.aws.amazon.com/billing/signup?nc2=h_ct&amp;amp;src=default&amp;amp;redirect_url=https%3A%2F%2Faws.amazon.com%2Fregistration-confirmation#/start"&gt;AWS account&lt;/a&gt; with credentials &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication"&gt;configured for Terraform&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Verify that you have Terraform installed by running the following command in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Using the AWS Provider
&lt;/h2&gt;

&lt;p&gt;The AWS provider allows you to connect to and interact with services and resources offered by AWS.&lt;br&gt;
In our configuration, we specify the provider and its version, the region, and the availability zones we want our resources to be deployed in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.0"
    }
  }
}

# Configure AWS Provider
provider "aws" {
  region = "us-west-2"
}

data "aws_availability_zones" "available" {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create a VPC [Virtual Private Cloud]
&lt;/h2&gt;

&lt;p&gt;Using the &lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/latest"&gt;terraform-aws-vpc&lt;/a&gt; module, we create the VPC in which all other resources will reside:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.2"

  name                 = "second-vpc"
  cidr                 = "10.10.0.0/16"
  azs                  = data.aws_availability_zones.available.names
  public_subnets       = ["10.10.3.0/24", "10.10.4.0/24", "10.10.5.0/24"]
  enable_dns_hostnames = true
  enable_dns_support   = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add Security Group Rules
&lt;/h2&gt;

&lt;p&gt;The security group rules here should enable us to connect to the ElastiCache Redis cluster and the RDS MySQL database from our Lambda function, all of which we will create later on. &lt;/p&gt;

&lt;p&gt;You must specify the &lt;strong&gt;from_port&lt;/strong&gt; and the &lt;strong&gt;to_port&lt;/strong&gt; in the egress and ingress rules&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "second-sg" {
  name   = "second-sg"
  vpc_id = module.vpc.vpc_id

  ingress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["10.10.0.0/16"]
  }

  ingress {
    from_port   = 6379
    to_port     = 6379
    protocol    = "tcp"
    cidr_blocks = ["10.10.0.0/16"]
  }

  egress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["10.10.0.0/16"]
  }

  egress {
    from_port   = 6379
    to_port     = 6379
    protocol    = "tcp"
    cidr_blocks = ["10.10.0.0/16"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configure the RDS MySQL Database
&lt;/h2&gt;

&lt;p&gt;First, define the subnet group that you want your RDS instance to be in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_db_subnet_group" "second-subnet" {
  name       = "second"
  subnet_ids = module.vpc.public_subnets

  tags = {
    Name = "Second"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The subnets specified here are the subnets that belong to the VPC above.&lt;/p&gt;

&lt;p&gt;The database instance is created as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_db_instance" "firsTerraDB" {
  identifier             = "second-terra-db"
  allocated_storage      = 10
  db_name                = var.db_name
  engine                 = "mysql"
  engine_version         = "8.0"
  instance_class         = "db.t2.micro"
  username               = var.db_username
  password               = var.db_password
  parameter_group_name   = "default.mysql8.0"
  db_subnet_group_name   = aws_db_subnet_group.second-subnet.name
  vpc_security_group_ids = [aws_security_group.second-sg.id]
  publicly_accessible    = true
  skip_final_snapshot    = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The value &lt;strong&gt;publicly_accessible&lt;/strong&gt; is set to true only for the sake of this tutorial. You would not want to set this to 'true' for a database in a production environment.&lt;/p&gt;

&lt;p&gt;Set &lt;strong&gt;skip_final_snapshot&lt;/strong&gt; to 'true' if you do not want a snapshot of the instance to be taken upon deletion.&lt;/p&gt;

&lt;h3&gt;
  
  
  Managing Sensitive Variables
&lt;/h3&gt;

&lt;p&gt;Sensitive values like the database &lt;em&gt;password, username and db_name&lt;/em&gt; should not be written in plain text. These variables should first be declared as input variables in the &lt;strong&gt;variables.tf&lt;/strong&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "db_name" {
  description = "Database name"
  type        = string
  sensitive   = true
}

variable "db_username" {
  description = "Master Username"
  type        = string
  sensitive   = true
}

variable "db_password" {
  description = "Master password"
  type        = string
  sensitive   = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Variables declared as &lt;strong&gt;sensitive&lt;/strong&gt; are redacted from Terraform's output when commands like apply, plan or destroy are executed. However, note that these values still appear in plain text in the Terraform state files, so ensure that the state file is stored securely.&lt;/p&gt;

&lt;p&gt;With this, each time you run &lt;strong&gt;&lt;em&gt;terraform apply&lt;/em&gt;&lt;/strong&gt;, you will be prompted to enter the value of each variable, which can be time-consuming and error-prone. To solve this, Terraform supports setting values in a variable definitions (&lt;strong&gt;.tfvars&lt;/strong&gt;) file. &lt;/p&gt;

&lt;p&gt;Create a new file called &lt;strong&gt;&lt;em&gt;secrets.tfvars&lt;/em&gt;&lt;/strong&gt;, and assign values for the variables which were created earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db_name     = "databaseName"
db_username = "username"
db_password = "insecurepassword1"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, the variables with values can easily be used with &lt;strong&gt;terraform apply&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply -var-file=secrets.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since these values are sensitive, make sure to share the &lt;strong&gt;tfvars&lt;/strong&gt; file with only the appropriate people, and ensure you do not check these files into version control.&lt;/p&gt;
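&lt;p&gt;One simple safeguard, assuming your project uses Git, is to ignore these files in a &lt;strong&gt;.gitignore&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# keep secrets and state out of version control
*.tfvars
*.tfvars.json
*.tfstate
*.tfstate.*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;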

&lt;h2&gt;
  
  
  Configure the ElastiCache Redis Cluster
&lt;/h2&gt;

&lt;p&gt;First, create the ElastiCache subnet group. Here, we use the subnets that belong to the VPC module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_elasticache_subnet_group" "second-cluster-subnet" {
  name       = "second-cluster-subnet"
  subnet_ids = module.vpc.public_subnets
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Redis cluster is then created as shown:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_elasticache_cluster" "second-cluster" {
  cluster_id           = "second-cluster-id"
  engine               = "redis"
  node_type            = "cache.t4g.micro"
  num_cache_nodes      = 1
  parameter_group_name = "default.redis5.0"
  engine_version       = "5.0.6"
  port                 = 6379
  security_group_ids   = [aws_security_group.second-sg.id]
  subnet_group_name    = aws_elasticache_subnet_group.second-cluster-subnet.name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The security group that was created earlier is used here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure the Lambda Function
&lt;/h2&gt;

&lt;p&gt;This is done last so that some output from the above configurations can be used as input to the Lambda function.&lt;/p&gt;

&lt;p&gt;First, create the IAM role that will be used to manage the Lambda function. This role should have the permissions needed for VPC access, enabling the function to connect to the VPC that was created earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_iam_policy_document" "assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }

    actions = ["sts:AssumeRole"]
  }
}

resource "aws_iam_role" "iam_for_lambda" {
  name               = "iam_for_lambda"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

resource "aws_iam_role_policy_attachment" "AWSLambdaVPCAccessExecutionRole" {
  role       = aws_iam_role.iam_for_lambda.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since Lambda does not install Python packages for you, a Lambda layer is created in which all the modules used in our Lambda function are packaged:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_layer_version" "function_packages" {
  filename            = "./code/packages.zip"
  layer_name          = "function_packages"
  compatible_runtimes = ["python3.9"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;filename&lt;/strong&gt; should be the relative path to the package archive, and &lt;strong&gt;compatible_runtimes&lt;/strong&gt; is a list of runtime environments in which the layer can work. Lambda layers are also useful because they enhance reusability.&lt;/p&gt;

&lt;p&gt;Next, create an archive for the Lambda function itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "archive_file" "lambda_function" {
  type        = "zip"
  source_file = "./code/lambda_function.py"
  output_path = "deployment_payload.zip"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;type&lt;/strong&gt;: the archive type, here a zip file. For larger packages, it is advisable to upload the archive to an S3 bucket instead.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;source_file&lt;/strong&gt;: the path to the source code file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;output_path&lt;/strong&gt;: the file name where you want the zip of the function to be stored. It does not have to be an existing file, as it is created by Terraform.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, the configuration for the Lambda function is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_function" "first_lambda" {
  filename      = "deployment_payload.zip"
  function_name = "first_function"
  role          = aws_iam_role.iam_for_lambda.arn
  handler       = "lambda_function.lambda_handler"
  layers        = [aws_lambda_layer_version.function_packages.arn]
  timeout       = 150

  source_code_hash = data.archive_file.lambda_function.output_base64sha256

  runtime = "python3.9"

  vpc_config {
    subnet_ids         = module.vpc.public_subnets
    security_group_ids = [aws_security_group.second-sg.id]
  }

  environment {
    variables = {
      MYSQL_HOST     = aws_db_instance.firsTerraDB.address
      MYSQL_PORT     = aws_db_instance.firsTerraDB.port
      MYSQL_USER     = aws_db_instance.firsTerraDB.username
      MYSQL_PASSWORD = aws_db_instance.firsTerraDB.password
      MYSQL_DB       = aws_db_instance.firsTerraDB.db_name

      REDIS_URL  = "${aws_elasticache_cluster.second-cluster.cache_nodes.0.address}"
      REDIS_PORT = "${aws_elasticache_cluster.second-cluster.cache_nodes.0.port}"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your Lambda function uses environment variables, they can be passed directly to the resource upon creation within the &lt;strong&gt;environment&lt;/strong&gt; block.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying the Resources
&lt;/h2&gt;

&lt;p&gt;To provision the RDS instance, Redis cluster, Lambda function and additional resources, first initialize the Terraform configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, apply the configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply -var-file=secrets.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform will now provision your resources. This may take some time to complete, after which you will see a message like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ajyKdkl7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7f0qkyazvdojgyz5jnaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ajyKdkl7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7f0qkyazvdojgyz5jnaa.png" alt="Apply complete" width="485" height="30"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can visit your AWS management consoles to view the various resources and test your Lambda function to ensure the connections to the Redis Cluster and the MySQL instance were established.&lt;/p&gt;

&lt;h3&gt;
  
  
  Output Variables
&lt;/h3&gt;

&lt;p&gt;Another way to view our configuration details is to work with output variables. &lt;/p&gt;

&lt;p&gt;In an &lt;strong&gt;outputs.tf&lt;/strong&gt; file, define the values of the resources you want Terraform to show you after the configuration is applied:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "redis_host" {
  description = "Redis Host"
  value       = aws_elasticache_cluster.second-cluster.cache_nodes.0.address
  sensitive   = false
}

output "redis_port" {
  description = "Redis port"
  value       = aws_elasticache_cluster.second-cluster.cache_nodes.0.port
  sensitive   = false
}

output "mysql_host" {
  description = "mysql host"
  value       = aws_db_instance.firsTerraDB.address
  sensitive   = false
}

output "mysql_port" {
  description = "mysql port"
  value       = aws_db_instance.firsTerraDB.port
  sensitive   = false
}

output "elasticache-sg" {
  description = "Elasticache security group name"
  value       = aws_elasticache_cluster.second-cluster.security_group_ids
}

output "database-sg" {
  description = "database sg"
  value       = aws_db_instance.firsTerraDB.vpc_security_group_ids
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When terraform apply is executed, all these values will be displayed in the terminal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean Up Infrastructure
&lt;/h2&gt;

&lt;p&gt;In this tutorial, you have provisioned an RDS instance, a Redis cluster and a Lambda function using Terraform. &lt;br&gt;
Clean up the infrastructure you created with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy -var-file=secrets.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Thank you for reading through till the end😊. I hope it helped you in one way or the other to understand and use a particular concept. If you enjoyed reading this, do leave a like ❤️ and a comment stating how I can improve 💡&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>database</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How to Connect AWS Lambda to an Amazon ElastiCache Redis Cluster</title>
      <dc:creator>Tambe Salome</dc:creator>
      <pubDate>Tue, 12 Sep 2023 10:26:57 +0000</pubDate>
      <link>https://dev.to/giftcup/how-to-connect-aws-lambda-to-an-amazon-elasticache-redis-cluster-5d9j</link>
      <guid>https://dev.to/giftcup/how-to-connect-aws-lambda-to-an-amazon-elasticache-redis-cluster-5d9j</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Amazon ElastiCache is a service that enables you to launch, manage, and scale a distributed in-memory cache. It has multiple use cases, including caching, real-time analytics, machine learning, and session storage.&lt;/p&gt;

&lt;p&gt;This tutorial focuses on the caching use case with Redis. Applications that already use Redis can adopt ElastiCache with very few modifications. &lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Make sure to delete your cluster when you are done using it in order not to incur additional costs. See the &lt;a href="https://aws.amazon.com/elasticache/pricing/" rel="noopener noreferrer"&gt;ElastiCache pricing&lt;/a&gt; page for details.&lt;/p&gt;




&lt;h2&gt;
  
  
  Creating an ElastiCache Redis Cluster
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;From your &lt;a href="https://console.aws.amazon.com/console/home" rel="noopener noreferrer"&gt;AWS Management Console&lt;/a&gt;, click on &lt;strong&gt;Services&lt;/strong&gt; and navigate to &lt;strong&gt;Database&lt;/strong&gt; where you would find &lt;strong&gt;ElastiCache&lt;/strong&gt;. Click on it to open your ElastiCache console.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9aywwa62sumshb8w3q3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9aywwa62sumshb8w3q3.png" alt="Locate ElastiCache"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the sidebar, under &lt;strong&gt;Resources&lt;/strong&gt;, click on &lt;strong&gt;Redis Cluster&lt;/strong&gt;. This window shows all the Redis clusters you have. To create a new cluster, click the &lt;strong&gt;Create Redis Cluster&lt;/strong&gt; button at the top.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63m2deeg5v0swp6zuz1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63m2deeg5v0swp6zuz1g.png" alt="Redis Cluster"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Easy Create&lt;/strong&gt; for the cluster creation method, and choose &lt;strong&gt;Demo&lt;/strong&gt; for the configuration type.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbs21wifkh6d0nqdnqsla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbs21wifkh6d0nqdnqsla.png" alt="creation method"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Give your cluster a name and a brief description. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmo9hqzoj1agaxbze4saf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmo9hqzoj1agaxbze4saf.png" alt="cluster info"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Under Connectivity, set the network type to &lt;strong&gt;IPv4&lt;/strong&gt;, select &lt;strong&gt;Choose an existing subnet group&lt;/strong&gt; and select the subnet group we used for the Lambda function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;strong&gt;Create&lt;/strong&gt; to create your resource.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Connecting the Redis Cluster to a Lambda Function
&lt;/h2&gt;

&lt;p&gt;Learn how to set up an AWS Lambda function from this &lt;a href="https://dev.to/giftcup/how-to-setup-an-aws-lambda-function-with-python-and-connect-with-an-amazon-rds-database-2gbf"&gt;article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the Lambda function, navigate to the &lt;strong&gt;Configuration&lt;/strong&gt; tab and on the side panel select &lt;strong&gt;VPC&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkxxp7x6rn99uaadwfww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkxxp7x6rn99uaadwfww.png" alt="update sg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To connect the Redis instance to the Lambda function, add the security group that the Redis instance belongs to to the Lambda function's configuration.&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Edit&lt;/strong&gt; and under &lt;strong&gt;Security groups&lt;/strong&gt;, select the Redis instance's security group and then save to update your configuration.&lt;/p&gt;

&lt;p&gt;Import Redis in your function and include the Redis connection string in your code. For Python:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

from redis.cluster import RedisCluster

cache = RedisCluster(host="mycache.xxxx.xxx.xxx.cache.amazonaws.com", port="6379")



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The host can be the &lt;strong&gt;Configuration endpoint&lt;/strong&gt; if you have only one node, the Primary endpoint, or the endpoint of the node you need to access. In this tutorial, the &lt;strong&gt;Configuration endpoint&lt;/strong&gt; is used, which can be found under &lt;strong&gt;Cluster details&lt;/strong&gt; in your Redis cluster's dashboard.&lt;/p&gt;

&lt;p&gt;You can test your configuration with this piece of code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

if (cache.exists('names')) :
        return {
            'statusCode': 200,
            'body': cache.get('names')
       }
else :
        cache.set('names', "hello world")
        return "Done!"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>redis</category>
      <category>elasticache</category>
    </item>
    <item>
      <title>How to Setup an AWS Lambda Function with Python and Connect with an Amazon RDS Database.</title>
      <dc:creator>Tambe Salome</dc:creator>
      <pubDate>Mon, 11 Sep 2023 11:56:25 +0000</pubDate>
      <link>https://dev.to/giftcup/how-to-setup-an-aws-lambda-function-with-python-and-connect-with-an-amazon-rds-database-2gbf</link>
      <guid>https://dev.to/giftcup/how-to-setup-an-aws-lambda-function-with-python-and-connect-with-an-amazon-rds-database-2gbf</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;AWS Lambda introduces a concept known as &lt;em&gt;serverless infrastructure&lt;/em&gt;. It is serverless in the sense that you don't have to worry about managing or provisioning servers. Lambda functions can automate operational processes as well as complex application processes.&lt;/p&gt;

&lt;p&gt;AWS Lambda's main resource is the Lambda function, which is used to perform a specific task. &lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying a Lambda Function with Python
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To follow this tutorial, you will need an AWS account. If you do not have one yet, you can follow this guide on how to &lt;a href="https://aws.amazon.com/getting-started/guides/setup-environment/"&gt;set-up your environment.&lt;/a&gt;&lt;br&gt;
The steps described here are under the Free Tier, so feel free to play around.&lt;/p&gt;
&lt;h3&gt;
  
  
  Create a Lambda Function
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Visit the &lt;a href="https://console.aws.amazon.com/console/home?region=us-east-1"&gt;AWS Management Console&lt;/a&gt; and under &lt;strong&gt;Services&lt;/strong&gt; select the &lt;strong&gt;Compute&lt;/strong&gt; tab and you'll find the &lt;strong&gt;Lambda&lt;/strong&gt; service. Select it to open the Lambda Console.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PUSem8PE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zdqau3zpgsk8xx5er290.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PUSem8PE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zdqau3zpgsk8xx5er290.png" alt="Lambda Console" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Lambda console, navigate to &lt;strong&gt;Functions&lt;/strong&gt; on the left pane and select it. This window shows all the Lambda functions you have previously created; if there are none yet, the table will be empty.&lt;/li&gt;
&lt;li&gt;At the top of this table, click the &lt;strong&gt;Create Function&lt;/strong&gt; button to start creating your Lambda function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_nmY-Ejy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98javwhlezuv3huzlu97.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_nmY-Ejy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98javwhlezuv3huzlu97.png" alt="Create Function" width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;There are three options to choose from when creating a Lambda function: authoring from scratch, using a blueprint to build from sample code with configuration presets, or deploying the function from a container image.&lt;br&gt;
For this tutorial, we will use the &lt;strong&gt;Author from scratch&lt;/strong&gt; option.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LlEIxIAb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h99tktzn9z8xjbnxoi8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LlEIxIAb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h99tktzn9z8xjbnxoi8k.png" alt="Creation mode" width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Basic Information&lt;/strong&gt; section, set the &lt;strong&gt;Function Name&lt;/strong&gt;, choose the language version for &lt;strong&gt;Runtime&lt;/strong&gt;, select an Architecture (making sure it is compatible with the environment where you will build your deployment package, if applicable), and optionally change the permissions for the execution role. &lt;br&gt;
We will use &lt;strong&gt;Python 3.9&lt;/strong&gt; as our runtime environment and &lt;strong&gt;x86_64&lt;/strong&gt; architecture.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gtCDqgHj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5vxogz87bf3b620lgbpm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gtCDqgHj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5vxogz87bf3b620lgbpm.png" alt="Basic Info Config" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In &lt;strong&gt;Advanced Settings&lt;/strong&gt;, you can enable Code signing to ensure the integrity of your code, enable a function URL to make your function accessible over HTTP(S), and add tags or VPCs.
For this tutorial, we will leave all these options unchecked and click the &lt;strong&gt;Create Function&lt;/strong&gt; button to create the Lambda function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dpH1fyTa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ywm9dc4ljtw55cm93jos.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dpH1fyTa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ywm9dc4ljtw55cm93jos.png" alt="Advanced" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lambda creates your function. You can use the console to modify your function's code, or upload it as a .zip file or from an S3 location. &lt;br&gt;
The function is created with some basic code that returns a status code of 200 and the text "Hello from Lambda". &lt;/p&gt;

&lt;p&gt;The Lambda function handler [in our case &lt;strong&gt;lambda_handler()&lt;/strong&gt;] is the method that processes events. The Lambda runtime passes two arguments to the handler:&lt;br&gt;
      - Event object: JSON-formatted, containing the data for the Lambda function to process&lt;br&gt;
      - Context object: provides methods and properties with information about the invocation, function, and runtime environment; it is passed to the function by Lambda at runtime.&lt;/p&gt;
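<p>A minimal handler touching both arguments might look like the sketch below. The <strong>name</strong> key in the event is a made-up example, while <strong>function_name</strong> and <strong>aws_request_id</strong> are standard attributes of the context object:</p>

```python
import json

def lambda_handler(event, context):
    # `event` carries the invocation payload; `context` describes the runtime.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": f"Hello, {name}!",
            "function": context.function_name,
            "request_id": context.aws_request_id,
        }),
    }
```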

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xqZlmeY9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2inyndykekc54766d146.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xqZlmeY9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2inyndykekc54766d146.png" alt="function overview" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bWnCcUyF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3tdoppd4s3ux52kjkhki.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bWnCcUyF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3tdoppd4s3ux52kjkhki.png" alt="Boiler code" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To test this, choose the &lt;strong&gt;Test&lt;/strong&gt; tab. To invoke your function without saving the test event, click on &lt;strong&gt;Invoke&lt;/strong&gt;; Lambda preserves the event only for the duration of the session.&lt;br&gt;
To create a saved test event, fill in the event details and choose &lt;strong&gt;Save&lt;/strong&gt;, then &lt;strong&gt;Test&lt;/strong&gt; to invoke the function.&lt;br&gt;
If you have a saved test event, you can also edit it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3vUetyZp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nqtd9pbx2eksztjcyx9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3vUetyZp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nqtd9pbx2eksztjcyx9q.png" alt="Create test" width="800" height="622"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;View the results of the test in the &lt;strong&gt;Execution results&lt;/strong&gt; tab. We can see that our function executed successfully and returned the desired result.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_R0Eqjs3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lmjvaak6tlgpo8ounvgt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_R0Eqjs3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lmjvaak6tlgpo8ounvgt.png" alt="execution results" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Include Python Dependencies to Lambda function: Deployment Packages
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import mysql.connector;

def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If we include the above code in our console and test it, an error is returned stating that the mysql module can't be found. This is because Lambda is not prepackaged with all libraries and does not install any. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--glEIOAUV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vqgbqk0i7fx2nsxr5los.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--glEIOAUV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vqgbqk0i7fx2nsxr5los.png" alt="Module error" width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hence, if your code contains dependencies, you have to include them by using a deployment package or a lambda layer. We will be looking at including these dependencies using a &lt;strong&gt;.zip&lt;/strong&gt; deployment package.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating the Deployment Package&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the directory containing your Lambda function (&lt;strong&gt;lambda_function.py&lt;/strong&gt;) on your computer, in this case &lt;strong&gt;firstFunction&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd firstFunction
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Create a directory in which to install your dependencies
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir packages
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Install your dependencies in this directory. In this instance, we will install &lt;strong&gt;mysql-connector-python&lt;/strong&gt; using pip, which enables connections to a MySQL database. If you wrote your own packages, simply copy them into the &lt;strong&gt;packages&lt;/strong&gt; folder.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install --target ./packages mysql-connector-python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Create a zip file with the contents of the &lt;strong&gt;packages&lt;/strong&gt; folder at the root
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd packages
zip -r ../deployment_package.zip .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Add your Lambda function to the root of the deployment package. The deployment package must have a flat directory structure with everything at the root; otherwise, Lambda won't be able to run the code.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ..
zip deployment_package.zip lambda_function.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Under the &lt;strong&gt;Code&lt;/strong&gt; tab, select &lt;strong&gt;Upload&lt;/strong&gt;, click on &lt;strong&gt;.zip file&lt;/strong&gt; and select the &lt;strong&gt;deployment_package.zip&lt;/strong&gt; file you created earlier.&lt;br&gt;
For packages bigger than 50 MB, it is advised to upload them to an S3 bucket and connect to the S3 location.&lt;/p&gt;

&lt;p&gt;This now works seamlessly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can still make changes to your function directly in the console. Always click on &lt;strong&gt;Deploy&lt;/strong&gt; to save your changes before executing.&lt;/p&gt;
&lt;h2&gt;
  
  
  Connecting an RDS database to the Lambda Function
&lt;/h2&gt;

&lt;p&gt;To get started with Amazon RDS for MySQL, visit this &lt;a href="https://dev.to/giftcup/how-to-deploy-a-managed-mysql-instance-in-an-amazon-rds-database-3dpm"&gt;article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After you have created your MySQL instance, include the Lambda function in the same VPC as the MySQL RDS instance in order for them to communicate. &lt;/p&gt;

&lt;p&gt;In order to add the VPC to the Lambda function, you first need to modify the permissions attached to the role managing the function. To do this, open the IAM service in a new tab, click on &lt;strong&gt;Dashboard&lt;/strong&gt; then on &lt;strong&gt;Roles&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BDynb2Cu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0x8260ozsgnaid06drl8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BDynb2Cu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0x8260ozsgnaid06drl8.png" alt="Find IAM service" width="800" height="477"&gt;&lt;/a&gt; &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HLAL1Vk9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g0ao1rkpsrtzpla60lfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HLAL1Vk9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g0ao1rkpsrtzpla60lfk.png" alt="Roles" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the role that manages your Lambda function; its name will contain your function name plus some random characters. Attach a policy that enables it to create and delete network interfaces. We will use the &lt;strong&gt;AWSLambdaVPCAccessExecutionRole&lt;/strong&gt; policy for that, but you can also write your own policies.&lt;/p&gt;

&lt;p&gt;Edit the Lambda function's VPC by navigating to the &lt;strong&gt;Configuration&lt;/strong&gt; Tab, then select &lt;strong&gt;VPC&lt;/strong&gt; on the side menu and click &lt;strong&gt;Edit&lt;/strong&gt; to add or edit a VPC. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HlIdVdJq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5g43izx2iahtyodo1e8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HlIdVdJq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5g43izx2iahtyodo1e8.png" alt="Edit VPC" width="800" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--anRX3ADh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fcowsky4apfr4850z8jw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--anRX3ADh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fcowsky4apfr4850z8jw.png" alt="add VPC" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Return to your RDS instance's dashboard. Click on the &lt;strong&gt;Actions&lt;/strong&gt; drop down and select &lt;strong&gt;Set up Lambda connection&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Et_8xS7p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qp4b7xblgs5tu4nkihqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Et_8xS7p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qp4b7xblgs5tu4nkihqs.png" alt="Actions" width="800" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;Select Lambda function&lt;/strong&gt; section, select your lambda function from the dropdown.&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;RDS Proxy&lt;/strong&gt; section, leave the &lt;strong&gt;Connect using RDS Proxy&lt;/strong&gt; unchecked to stay within Free Tier.&lt;/p&gt;

&lt;p&gt;You can use an RDS proxy to simplify your database connection and shield your application from disruptions caused by database unavailability. &lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Set up&lt;/strong&gt; to create the connection to the lambda function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MoShdekg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5nytnp46fn3nt3moe5r5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MoShdekg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5nytnp46fn3nt3moe5r5.png" alt="select function" width="662" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the mysql connection string to your python handler function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mydb = mysql.connector.connect (
           host = "&amp;lt;rds_instance_endpoint&amp;gt;",
           port = &amp;lt;rds_instance_port&amp;gt;,
           user = "&amp;lt;master_username&amp;gt;",
           password = "&amp;lt;master_password&amp;gt;",
           database = "&amp;lt;database_name&amp;gt;"
          )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you used an RDS proxy, the host should be the endpoint of the proxy instead.&lt;/p&gt;

&lt;p&gt;The database_name is the name of a database running on the RDS instance in use. You can create one by connecting to the instance from the mysql CLI or an editor and running a CREATE DATABASE query.&lt;/p&gt;

&lt;p&gt;With all of this in place, you will be able to query your database from the Lambda function.&lt;/p&gt;
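<p>For example, the handler could run a SELECT and return the rows as JSON. This is only a sketch: <strong>mydb</strong> is the connection created above, and the <strong>users</strong> table is a hypothetical example:</p>

```python
import json

def rows_to_response(rows):
    # Turn fetched rows (tuples) into a JSON-serialisable Lambda response.
    return {
        "statusCode": 200,
        "body": json.dumps({"rows": [list(row) for row in rows]}),
    }

def lambda_handler(event, context):
    # `mydb` is the mysql.connector connection built earlier; the `users`
    # table is a hypothetical example.
    cursor = mydb.cursor()
    cursor.execute("SELECT id, name FROM users")
    rows = cursor.fetchall()
    cursor.close()
    return rows_to_response(rows)
```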

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Runtime Error: Handler Not Found&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"errorMessage": "Handler 'lambda_handler' missing on module 'lambda_function'",
  "errorType": "Runtime.HandlerNotFound"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This error occurs when Lambda can't resolve the Handler. Navigate to &lt;strong&gt;Runtime Settings&lt;/strong&gt; in your function's dashboard and click &lt;strong&gt;Edit&lt;/strong&gt; to modify the Handler. The Handler should have the format: &lt;strong&gt;fileName.functionName&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b-IalTOK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yyj81l5ny0aclwzxff7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b-IalTOK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yyj81l5ny0aclwzxff7d.png" alt="runtime settings" width="800" height="126"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--68jT0jJc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bej7ocqlsshreueoueu5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--68jT0jJc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bej7ocqlsshreueoueu5.png" alt="edit handler" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloudskills</category>
      <category>lambda</category>
      <category>aws</category>
    </item>
    <item>
      <title>How to Deploy a Managed MySQL Instance in an Amazon RDS Database.</title>
      <dc:creator>Tambe Salome</dc:creator>
      <pubDate>Sun, 03 Sep 2023 10:14:22 +0000</pubDate>
      <link>https://dev.to/giftcup/how-to-deploy-a-managed-mysql-instance-in-an-amazon-rds-database-3dpm</link>
      <guid>https://dev.to/giftcup/how-to-deploy-a-managed-mysql-instance-in-an-amazon-rds-database-3dpm</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Amazon Relational Database Service (RDS) is a web service that makes it easy to set up, operate, and scale relational databases in the cloud, and it manages common administration tasks for you. Amazon RDS currently supports the MariaDB, MySQL, Oracle, PostgreSQL, and Microsoft SQL Server engines.&lt;/p&gt;

&lt;p&gt;AWS currently offers more than 15 purpose-built database options to support various data models, including relational, key-value, in-memory, graph, time-series, wide-column, and ledger databases. &lt;br&gt;
To choose a model, consider your business objective, your migration strategy from an on-premises database to the cloud, which purpose-built option matches your workload, and your data and security requirements. To fully understand how to choose the best database service for your business, see the &lt;a href="https://aws.amazon.com/getting-started/decision-guides/databases-on-aws-how-to-choose/" rel="noopener noreferrer"&gt;AWS Decision Guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this tutorial, we will discuss how to deploy a managed MySQL instance using an Amazon RDS database. A managed service is preferable to setting MySQL up manually on an EC2 instance because it provides continuous monitoring, self-healing storage, and auto-scaling, letting you focus on application development and your users.&lt;/p&gt;
&lt;h2&gt;
  
  
  Amazon RDS for MySQL
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To follow this tutorial, you will need an AWS account. If you do not have one yet, you can follow this guide on how to &lt;a href="https://aws.amazon.com/getting-started/guides/setup-environment/" rel="noopener noreferrer"&gt;set up your environment&lt;/a&gt;.&lt;br&gt;
The steps described here fall under the Free Tier, so feel free to play around.&lt;/p&gt;
&lt;h2&gt;
  
  
  Create a MySQL DB Instance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;To get started, open the &lt;a href="https://console.aws.amazon.com/console/home?region=us-east-1" rel="noopener noreferrer"&gt;AWS Management Console&lt;/a&gt;, then select &lt;strong&gt;Services&lt;/strong&gt; on the left of the top pane. This displays a list of services offered by AWS, from databases to containers and much more. Since we are interested in the &lt;strong&gt;Database&lt;/strong&gt; category, click on it, then choose &lt;strong&gt;RDS&lt;/strong&gt; to open the &lt;strong&gt;Amazon RDS console&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bf7nf8jbkudsect5xcz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bf7nf8jbkudsect5xcz.png" alt="Navigating to Amazon RDS Console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the &lt;strong&gt;Amazon RDS Console&lt;/strong&gt;, the first thing to do is choose the region in which we want to create the database. &lt;strong&gt;Note&lt;/strong&gt;: Some regions are inaccessible depending on your account type.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfojdz8s1lqg8h1jye5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfojdz8s1lqg8h1jye5z.png" alt="Selecting a Region"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Regions are isolated from one another as a fault-tolerance and stability mechanism, and data is not automatically transferred between regions. So, if you switch regions, you won't find the same data in the new region.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To create the database, scroll to the &lt;strong&gt;Create database&lt;/strong&gt; section and choose &lt;strong&gt;Create database&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthlydpy3kiamjv3j3qwd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthlydpy3kiamjv3j3qwd.png" alt="Create database"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you would like to set your own configuration options, including those for availability and security, choose &lt;strong&gt;Standard Create&lt;/strong&gt;. Otherwise, go with &lt;strong&gt;Easy Create&lt;/strong&gt;, which configures your database using recommended best practices, some of which can be changed after creation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4zk6maqbc4s59uc54my.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4zk6maqbc4s59uc54my.png" alt="Creation Method"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the engine you want to run from the list of &lt;strong&gt;Engine options&lt;/strong&gt; supported by RDS. For the purposes of this tutorial, we will use the &lt;strong&gt;MySQL&lt;/strong&gt; engine and keep the default version and edition. Select the &lt;strong&gt;Free Tier&lt;/strong&gt; template.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsd8dbbsr4x0tnp2skooi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsd8dbbsr4x0tnp2skooi.png" alt="Choose Engine and Template"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A Region has multiple isolated locations known as Availability Zones. Enabling Multi-AZ deployments automatically provisions and maintains a synchronous standby in another Availability Zone to ensure fault tolerance if one zone fails. However, this is a &lt;strong&gt;paid feature&lt;/strong&gt;, so it won't be discussed in this tutorial.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The next thing to do is define some key settings for the DB instance. These include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DB Instance Identifier&lt;/strong&gt;: a name that must be unique across all DB instances owned by your AWS account in the current region.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Master Username&lt;/strong&gt;: the username that will be used to log in to the MySQL instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Master password&lt;/strong&gt;: a password that contains at least 8 ASCII characters (excluding ', ", @, and /).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Confirm master password&lt;/strong&gt;: re-enter the password you typed above.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9f5tzchfs557c3ucp1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9f5tzchfs557c3ucp1u.png" alt="Settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For the instance specification, select &lt;strong&gt;db.t2.micro — 1vCPU, 1 GiB RAM&lt;/strong&gt; for the &lt;strong&gt;Instance Type&lt;/strong&gt;, &lt;strong&gt;General Purpose (SSD)&lt;/strong&gt; for &lt;strong&gt;Storage Type&lt;/strong&gt;, and &lt;strong&gt;20 GB&lt;/strong&gt; for &lt;strong&gt;Allocated Storage&lt;/strong&gt;. These specifications ensure we stay in the Free Tier range. For more about RDS pricing, see &lt;a href="https://aws.amazon.com/rds/pricing/" rel="noopener noreferrer"&gt;Amazon RDS pricing&lt;/a&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Autoscaling should be enabled for unpredictable workloads, but it goes beyond the Free Tier usage capacity, so we leave it off here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzeikrkxuql0wzp7ozyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzeikrkxuql0wzp7ozyl.png" alt="Instance Specifications"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Under &lt;strong&gt;Connectivity&lt;/strong&gt;, one key thing to take note of is the VPC. An &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html" rel="noopener noreferrer"&gt;Amazon Virtual Private Cloud&lt;/a&gt; (VPC) lets you launch AWS resources into a virtual network where you can create your own subnets and access control lists, choose your own IP address range, and configure routing. There is no additional cost for running the DB instance in a VPC, and all new DB instances are created in the default VPC unless you explicitly change this. For this tutorial, we'll use the &lt;strong&gt;default&lt;/strong&gt; VPC.&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;VPC security groups&lt;/strong&gt;, choose &lt;strong&gt;Create new&lt;/strong&gt;. This creates an inbound rule that allows connections to the database from the IP address of the device you are currently using.&lt;/p&gt;

&lt;p&gt;Also, make sure your DB instance is &lt;strong&gt;Publicly Accessible&lt;/strong&gt; by setting this field to '&lt;strong&gt;Yes&lt;/strong&gt;', so that you can connect to the database directly from your device.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;RDS Proxy&lt;/strong&gt; allows your applications to pool and share database connections, improving their ability to scale. We will leave this option unchecked. For more on RDS Proxy pricing, visit &lt;a href="https://aws.amazon.com/rds/proxy/pricing/" rel="noopener noreferrer"&gt;Amazon RDS Proxy pricing&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The other settings are left as default for this configuration.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09trmh45l1ydi1iq9c53.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09trmh45l1ydi1iq9c53.png" alt="Connectivity"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpay7dnov6o95bcefyusj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpay7dnov6o95bcefyusj.png" alt="Connectivity"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzz5c4pmym0wdhc1yh61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzz5c4pmym0wdhc1yh61.png" alt="Connectivity"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Database Authentication&lt;/strong&gt; defines how users are authenticated before they can access the database. The options are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Password Authentication&lt;/strong&gt;: uses the database password only.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Password and IAM database authentication&lt;/strong&gt;: uses the database password and the user's credentials through IAM users and roles. This option is available only on the MySQL and PostgreSQL engines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Password and Kerberos authentication&lt;/strong&gt;: uses the database password and Kerberos authentication through an AWS Managed Microsoft AD created with AWS Directory Service.&lt;/p&gt;

&lt;p&gt;For simplicity, choose the &lt;strong&gt;Password Authentication&lt;/strong&gt; method from the list of options.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkksloysitnz5t1ba7gbz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkksloysitnz5t1ba7gbz.png" alt="passwords"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To stay within the free tier, leave the &lt;strong&gt;Enable Enhanced Monitoring&lt;/strong&gt; unchecked. Monitoring gives you metrics in real time for the OS that the DB instance is running on.&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;Additional Configuration&lt;/strong&gt; section, under Database options, the Initial Database Name is the name of the database on your DB instance. If a name is not provided, RDS does not create a database when creating the instance (except for Oracle or PostgreSQL). We will use the default &lt;strong&gt;Option Group&lt;/strong&gt; and &lt;strong&gt;DB parameter group&lt;/strong&gt;, though you can also create your own.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax7xwjveahulgxxstb6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax7xwjveahulgxxstb6d.png" alt="databaseOptions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;RDS creates a storage volume snapshot of the entire database instance, backing up the whole instance and not just particular databases. &lt;br&gt;
Under &lt;strong&gt;Backup&lt;/strong&gt;, check the option to &lt;em&gt;Enable automated backups&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;The &lt;em&gt;Backup retention period&lt;/em&gt; determines the number of days for which automatic backups are kept. For this tutorial, set it to &lt;em&gt;1 day&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;If you would like to set a daily time range during which backups should occur, select &lt;em&gt;Choose a window&lt;/em&gt; option and set the time for your preferred window. If not, you can just select &lt;em&gt;No preference&lt;/em&gt; which is what we will do here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nfy75345gdf3drwsd53.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nfy75345gdf3drwsd53.png" alt="backup"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Turn on &lt;em&gt;Enable auto minor version update&lt;/em&gt; under &lt;strong&gt;Maintenance&lt;/strong&gt;. This allows us to receive automatic updates when they become available. Just like with backups, we can set a particular maintenance window or select no preference. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegta86prejnvrsfbz8o9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegta86prejnvrsfbz8o9.png" alt="maintenance"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last thing we'll need to do is turn off &lt;em&gt;Enable deletion protection&lt;/em&gt; for this tutorial. Enabling this option prevents you from accidentally deleting your database, but leaving it off makes cleanup easier later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoszi0txs0ii1oi0anbo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoszi0txs0ii1oi0anbo.png" alt="deletion"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Voila! Select &lt;strong&gt;Create Database&lt;/strong&gt; to create the DB instance. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknucf2l2ajyfaik4z056.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknucf2l2ajyfaik4z056.png" alt="create"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This might take a while depending on the instance class and storage allocated for the database. When the status changes to &lt;strong&gt;Available&lt;/strong&gt;, your instance is ready.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7oarftg5n7s7zt0kkdp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7oarftg5n7s7zt0kkdp.png" alt="creating"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While this is creating, you can move on to the next step of setting up your MySQL environment on your local computer. &lt;/p&gt;
&lt;h2&gt;
  
  
  Connect to the MySQL Database
&lt;/h2&gt;

&lt;p&gt;In this step, we will use the MySQL CLI to connect to the MySQL database we created. Before moving on, make sure you have installed the MySQL CLI. The connection string is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -h  **_endpoint_**  -P  **_port_**  -u  **_user_**  -p

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;endpoint&lt;/strong&gt; and &lt;strong&gt;port&lt;/strong&gt; for the database can be found under the &lt;strong&gt;Connectivity and Security&lt;/strong&gt; tab. The &lt;strong&gt;user&lt;/strong&gt; is the &lt;strong&gt;Master Username&lt;/strong&gt; that was set when creating the database, which can be found under the &lt;strong&gt;Configuration&lt;/strong&gt; tab. &lt;br&gt;
When you press Enter, you'll be prompted for a password; this is the &lt;strong&gt;Master password&lt;/strong&gt; that was set upon creation of the DB instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr1zgclu5fqwjv8xba1j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr1zgclu5fqwjv8xba1j.png" alt="Connecting"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And voila, you're connected to your database. If we display the list of databases, we see the default databases that come with MySQL as well as the database we created along with the instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1klid82qnmk5so8q3aru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1klid82qnmk5so8q3aru.png" alt="Connected"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's how we connect to an Amazon RDS MySQL database instance.&lt;br&gt;
Congratulations! You can now insert data, run queries, and even create new databases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Delete the DB Instance
&lt;/h2&gt;

&lt;p&gt;To avoid wasting resources and incurring costs over time, it is best to delete the DB instance when you are no longer using it. &lt;/p&gt;

&lt;p&gt;To do so, return to the RDS console. Choose &lt;strong&gt;Databases&lt;/strong&gt;, select the instance you want to delete, and under the &lt;strong&gt;Actions&lt;/strong&gt; dropdown, select &lt;strong&gt;Delete&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzh96fn00c8bx6gcx18j5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzh96fn00c8bx6gcx18j5.png" alt="Find Delete"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will be asked whether you want to create a final snapshot of your database. Select this option if you would like to restore the current state of your database, with all its data, in the future. For this tutorial, we leave that option unchecked. &lt;/p&gt;

&lt;p&gt;Confirm that you want to delete the instance, then click &lt;strong&gt;Delete&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvrlxyt5yxssdciyblru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvrlxyt5yxssdciyblru.png" alt="Delete"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Performing these actions will permanently delete the database instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we have seen what an Amazon RDS database is and why it is preferred over manually setting up your own database. We also looked at the various engines supported by RDS and focused on using the MySQL engine to create a MySQL database instance on the RDS service. We then connected to this remote instance from our local computer and ran basic queries. Finally, we saw how to permanently delete the database instance.&lt;/p&gt;




&lt;h3&gt;
  
  
  Troubleshooting
&lt;/h3&gt;

&lt;p&gt;If you restart your network, the VPC security group inbound rule that permits your computer to connect to the instance might stop working, because your computer's IP address may have changed. &lt;/p&gt;

&lt;p&gt;To fix this, select your database and under &lt;strong&gt;Connectivity and Security&lt;/strong&gt; select your VPC security group under &lt;strong&gt;VPC security groups&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr03rqoeizd2pw83dxxp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr03rqoeizd2pw83dxxp4.png" alt="connectivity and security"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This takes you to the Security Groups page, where you can see your different security groups and view and configure their inbound and outbound rules. Inbound rules define which resources are permitted to connect to the instance, and outbound rules define which resources the instance is permitted to connect to. &lt;br&gt;
From here, we can edit our inbound rules to permit our device to connect again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ob0eloakpiyuz0y9e3j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ob0eloakpiyuz0y9e3j.png" alt="edit inbound rule"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The only thing you need to edit here is the rule's &lt;strong&gt;Source&lt;/strong&gt;: select &lt;strong&gt;My IP&lt;/strong&gt; so that the allowed address becomes your device's current IP address. Click &lt;strong&gt;Save rules&lt;/strong&gt;, and you will be able to connect like before. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqkn6x3p8pa237rdwij9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqkn6x3p8pa237rdwij9.png" alt="change and save rule"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>database</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
