<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ademola Akinsola</title>
    <description>The latest articles on DEV Community by Ademola Akinsola (@leviackerman).</description>
    <link>https://dev.to/leviackerman</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F820714%2F1950470c-4705-4511-87f5-9c4c71bf6a95.jpeg</url>
      <title>DEV Community: Ademola Akinsola</title>
      <link>https://dev.to/leviackerman</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/leviackerman"/>
    <language>en</language>
    <item>
      <title>Logging in Python: A Beginner's Guide</title>
      <dc:creator>Ademola Akinsola</dc:creator>
      <pubDate>Thu, 02 Feb 2023 06:00:39 +0000</pubDate>
      <link>https://dev.to/leviackerman/logging-in-python-a-beginners-guide-8d</link>
      <guid>https://dev.to/leviackerman/logging-in-python-a-beginners-guide-8d</guid>
      <description>&lt;p&gt;Logging is an important aspect of any application. It is the process of providing information about the happenings in your application. If done well, It can be useful for helping a developer pinpoint reasons for app crashes, errors, possible bugs, and areas of poor performance faster.&lt;/p&gt;

&lt;p&gt;In this article, we will learn how we can add logging into our python application. To follow this article, basic knowledge of python would be useful.&lt;/p&gt;

&lt;h2&gt;
  Logging components
&lt;/h2&gt;

&lt;p&gt;At a high level, logging in Python has three main components. These are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Loggers&lt;/strong&gt; : These are Python objects through which we emit log messages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Handlers&lt;/strong&gt; : These are objects that can be attached to a logger. They decide how to process a log.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Formatters&lt;/strong&gt; : These are objects that can be attached to a handler. They help to format the way logs are displayed.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  Loggers
&lt;/h2&gt;

&lt;p&gt;Python makes it easy to quickly get started with logging. Let us create an &lt;code&gt;app.py&lt;/code&gt; file to see how it works.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import logging

LOGGER = logging.getLogger('foo.bar')

LOGGER.debug('Nothing to see here, Just trying to debug')
LOGGER.info('Hey, here is some information you might find useful')
LOGGER.warning('something may be wrong here!!! warning')
LOGGER.error('An error definitely just happened. You might want to take a look!')
LOGGER.critical('Oops.. Critical server error.')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we execute the file with &lt;code&gt;python3 app.py&lt;/code&gt;, we will see the output below in our terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;something may be wrong here!!! warning
An error definitely just happened. You might want to take a look!
Oops.. Critical server error.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use Python's built-in logging module to create a &lt;code&gt;Logger&lt;/code&gt; object and log messages to our terminal.&lt;/p&gt;

&lt;p&gt;But wait🤨, we only see three of the five logs we wrote. What's happening? Well, this is because of how logs are propagated. Python provides a default &lt;strong&gt;root&lt;/strong&gt; logger, which is the topmost logger. By default, loggers in Python form a hierarchy, and logs are propagated upwards through their parents until they reach the root logger.&lt;/p&gt;

&lt;p&gt;A logger named &lt;code&gt;foo.bar.baz&lt;/code&gt; is a child of the logger &lt;code&gt;foo.bar&lt;/code&gt;, which in turn is a child of &lt;code&gt;foo&lt;/code&gt;, and hence it sends its logs upwards to both of them.&lt;/p&gt;
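
&lt;p&gt;We can see this propagation in a small, self-contained sketch (the logger names and the in-memory stream are illustrative, not part of the examples above):&lt;/p&gt;

```python
import io
import logging

# A handler attached to the parent logger "foo" also receives
# records emitted on the child logger "foo.bar".
buffer = io.StringIO()
parent = logging.getLogger('foo')
parent.setLevel(logging.INFO)
parent.addHandler(logging.StreamHandler(buffer))

child = logging.getLogger('foo.bar')  # no handler of its own
child.warning('sent from the child')

print(buffer.getvalue())  # the parent's handler saw the child's record
```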

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4t3co2rnpav52j2ppwkz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4t3co2rnpav52j2ppwkz.png" alt="Propagation hierarchy of logs" width="629" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, the root logger has what we call an effective level of "&lt;em&gt;warning&lt;/em&gt;". An effective level is what a logger uses to decide whether it will consider processing a log message. If a logger has an effective level of "&lt;em&gt;warning&lt;/em&gt;", it will only consider processing logs of priority "warning" and above; a log message of priority "debug" or "info" will be ignored. This is exactly why we only saw three of our five logs printed above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1ms61c9z594cln1so59.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1ms61c9z594cln1so59.png" alt="Filtering out of logs due to logger effective level" width="800" height="696"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By default, we have the following priority levels provided.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa82p3gp72bo0m5b7fr30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa82p3gp72bo0m5b7fr30.png" alt="Logger priority levels" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All loggers, except the root logger, are created with a level of &lt;code&gt;NOTSET&lt;/code&gt;. A level of &lt;code&gt;NOTSET&lt;/code&gt; means the logger delegates the decision of whether to log a message to its parent, so its effective level is inherited from the nearest ancestor with an explicitly set level.&lt;/p&gt;
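
&lt;p&gt;We can verify this delegation directly, assuming the root logger still has its default "WARNING" level (the logger name below is arbitrary):&lt;/p&gt;

```python
import logging

# A freshly created logger has level NOTSET (0); its effective level
# is inherited from the root logger, which defaults to WARNING (30).
logger = logging.getLogger('fresh.logger')
print(logger.level == logging.NOTSET)                  # True
print(logger.getEffectiveLevel() == logging.WARNING)   # True
```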

&lt;p&gt;If we would like to log messages with a priority lower than our root logger's effective level, we need to explicitly create a handler for our logger.&lt;/p&gt;

&lt;h2&gt;
  Handlers
&lt;/h2&gt;

&lt;p&gt;A handler decides whether to process a log message by comparing the log's priority with its own specified priority level. To create one, edit your &lt;em&gt;app.py&lt;/em&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import logging

LOGGER = logging.getLogger('foo.bar')

stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.DEBUG)

LOGGER.addHandler(stream_handler)
LOGGER.setLevel(logging.DEBUG)

LOGGER.debug('Nothing to see here, Just trying to debug')
LOGGER.info('Hey, here is some information you might find useful')
LOGGER.warning('something may be wrong here!!! warning')
LOGGER.error('An error definitely just happened. You might want to take a look!')
LOGGER.critical('Oops.. Critical server error.')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We added a &lt;code&gt;StreamHandler&lt;/code&gt; to our logger; this handler outputs the logs it receives to the terminal (by default, &lt;em&gt;stderr&lt;/em&gt;). We set its level to &lt;code&gt;DEBUG&lt;/code&gt; so it processes messages of priority "DEBUG" and higher. We also set the logger's level to "DEBUG" since, if you recall, all loggers (except root) default to "NOTSET" and would otherwise inherit the root logger's effective level of "WARNING", filtering out the lower-priority messages before they ever reach our handler.&lt;/p&gt;

&lt;p&gt;Now if we execute our app.py script&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Nothing to see here, Just trying to debug
Hey, here is some information you might find useful
something may be wrong here!!! warning
An error definitely just happened. You might want to take a look!
Oops.. Critical server error.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great! 😎 We see all our logs now. You can tweak the priority levels of both the logger and a handler to get your desired output.&lt;/p&gt;

&lt;h3&gt;
  Using a FileHandler
&lt;/h3&gt;

&lt;p&gt;We can also have multiple handlers on a single logger. Another common handler is the &lt;code&gt;FileHandler&lt;/code&gt;. Let's see it in action, update your app.py&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import logging

LOGGER = logging.getLogger('foo.bar')

stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.DEBUG)

LOGGER.addHandler(stream_handler)
LOGGER.setLevel(logging.DEBUG)
# ------ new lines ---------
file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)
LOGGER.addHandler(file_handler)
# --------------------------

LOGGER.debug('Nothing to see here, Just trying to debug')
LOGGER.info('Hey, here is some information you might find useful')
LOGGER.warning('something may be wrong here!!! warning')
LOGGER.error('An error definitely just happened. You might want to take a look!')
LOGGER.critical('Oops.. Critical server error.')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Execute the script and you should see a new file named &lt;code&gt;app.log&lt;/code&gt; appear in your working directory. Inspecting it, you should see the logs written into it.&lt;/p&gt;

&lt;p&gt;It is common practice to add multiple handlers with different priority levels to a single logger. You might want all logs printed to the terminal using a &lt;code&gt;StreamHandler&lt;/code&gt; but only want &lt;strong&gt;error&lt;/strong&gt; and &lt;strong&gt;critical&lt;/strong&gt; logs written to your log files. This is easily achieved by setting the StreamHandler's level to "DEBUG" and the FileHandler's level to "ERROR".&lt;/p&gt;
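
&lt;p&gt;The split described above can be sketched as follows (the logger name and file name are just examples):&lt;/p&gt;

```python
import logging

# Everything goes to the console; only ERROR and above reaches the file.
logger = logging.getLogger('demo.split')
logger.setLevel(logging.DEBUG)

console = logging.StreamHandler()
console.setLevel(logging.DEBUG)
logger.addHandler(console)

file_handler = logging.FileHandler('errors.log')
file_handler.setLevel(logging.ERROR)
logger.addHandler(file_handler)

logger.info('shown on the console only')
logger.error('shown on the console and written to errors.log')
```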

&lt;h3&gt;
  Integrating the RotatingFileHandler
&lt;/h3&gt;

&lt;p&gt;For large services emitting lots of logs, log files tend to grow very large very quickly. This is a problem: they become harder to manage and can take up a lot of space on the machine. To solve this, we use a &lt;code&gt;RotatingFileHandler&lt;/code&gt;. Once the log file reaches a specified size, the handler stops logging into it and renames it, usually by appending an increasing integer to the file name, then creates an empty file with the original name and resumes logging into that. You will therefore tend to see log files like &lt;code&gt;app.log, app.log.1, app.log.2, app.log.3&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Update the app.py file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import logging
from logging.handlers import RotatingFileHandler # new

LOGGER = logging.getLogger('foo.bar')

stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.DEBUG)

LOGGER.addHandler(stream_handler)
LOGGER.setLevel(logging.DEBUG)

file_handler = RotatingFileHandler('app.log', maxBytes=100, backupCount=5) # updated
file_handler.setLevel(logging.DEBUG)
LOGGER.addHandler(file_handler)

LOGGER.debug('Nothing to see here, Just trying to debug')
LOGGER.info('Hey, here is some information you might find useful')
LOGGER.warning('something may be wrong here!!! warning')
LOGGER.error('An error definitely just happened. You might want to take a look!')
LOGGER.critical('Oops.. Critical server error.')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We imported the &lt;code&gt;RotatingFileHandler&lt;/code&gt; class and instantiated it with the file name and two keyword arguments:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;maxBytes&lt;/strong&gt; : specifies the size the file must reach before we carry out a log rotation. We use a low value here to easily see rotation in action; in practice it would be much higher. Edit it to suit your needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;backupCount&lt;/strong&gt; : specifies the number of backup log files to keep before the oldest ones are deleted.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
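
&lt;p&gt;To watch rotation happen in isolation, here is a self-contained sketch (the temporary directory, file name, and sizes are made up for the demonstration):&lt;/p&gt;

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# A tiny maxBytes forces a rollover on every message, leaving demo.log
# plus at most backupCount numbered backups in the temporary directory.
logdir = tempfile.mkdtemp()
logfile = os.path.join(logdir, 'demo.log')

logger = logging.getLogger('demo.rotation')
logger.setLevel(logging.DEBUG)
logger.addHandler(RotatingFileHandler(logfile, maxBytes=50, backupCount=2))

for i in range(5):
    logger.info('message number %d is long enough to trigger rotation', i)

print(sorted(os.listdir(logdir)))
```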

&lt;p&gt;Execute the &lt;em&gt;app.py&lt;/em&gt; script, and you should see new rotated log files appear in your working directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc9my78keged2b5ze7o6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc9my78keged2b5ze7o6.png" alt="Log files" width="367" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When app.log got filled, it was renamed to app.log.1 and a new app.log was created. When that &lt;em&gt;new&lt;/em&gt; app.log got filled, it became app.log.1 and the previous app.log.1 became app.log.2.&lt;/p&gt;

&lt;h2&gt;
  Formatters
&lt;/h2&gt;

&lt;p&gt;Formatters are objects that can be attached to a handler. They define the way logs should be displayed. Let us take an example&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import logging
from logging.handlers import RotatingFileHandler

LOGGER = logging.getLogger('foo.bar')

stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.DEBUG)

LOGGER.addHandler(stream_handler)
LOGGER.setLevel(logging.DEBUG)

file_handler = RotatingFileHandler('app.log', maxBytes=100, backupCount=5)
file_handler.setLevel(logging.DEBUG)
LOGGER.addHandler(file_handler)

# ------ new lines --------
formatter = logging.Formatter(fmt='%(asctime)s: %(name)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
file_handler.setFormatter(formatter)
stream_handler.setFormatter(formatter)
# --------------------------

LOGGER.debug('Nothing to see here, Just trying to debug')
LOGGER.info('Hey, here is some information you might find useful')
LOGGER.warning('something may be wrong here!!! warning')
LOGGER.error('An error definitely just happened. You might want to take a look!')
LOGGER.critical('Oops.. Critical server error.')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We created a formatter and specified how to display our logs. We then attached it to the file and stream handler. Let us execute the script to see the result.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2023-01-31 00:44:08: foo.bar - DEBUG - Nothing to see here, Just trying to Debug
2023-01-31 00:44:08: foo.bar - INFO - Hey, here is some information you might find useful
2023-01-31 00:44:08: foo.bar - WARNING - something may be wrong here!!! warning
2023-01-31 00:44:08: foo.bar - ERROR - An error definetely just happened. You might want to take a look!
2023-01-31 00:44:08: foo.bar - CRITICAL - Oops.. Critical server error

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, new information has been added to our logs and it resembles the structure we defined for the formatter with the &lt;code&gt;fmt&lt;/code&gt; argument.&lt;/p&gt;
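
&lt;p&gt;We can check how the &lt;code&gt;fmt&lt;/code&gt; placeholders expand by formatting a hand-built record (the field values here are arbitrary examples):&lt;/p&gt;

```python
import logging

# Arbitrary values for a single log record; asctime is left out of fmt
# here so the result is deterministic.
formatter = logging.Formatter(fmt='%(name)s - %(levelname)s - %(message)s')
record = logging.LogRecord(
    name='foo.bar', level=logging.INFO, pathname='app.py',
    lineno=1, msg='hello', args=None, exc_info=None,
)
print(formatter.format(record))  # foo.bar - INFO - hello
```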

&lt;h2&gt;
  Tidy up logging with configurations
&lt;/h2&gt;

&lt;p&gt;Python provides us with several ways of configuring all or most of our loggers in one place. By providing a dictionary configuration we can specify which loggers should exist, the handlers to be attached, as well as formatters. Let us see how this is done.&lt;/p&gt;

&lt;h3&gt;
  Logging with DictConfig
&lt;/h3&gt;

&lt;p&gt;In a new file called &lt;code&gt;logging_config.py&lt;/code&gt; paste the following code&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LOGGING_CONFIG = { 
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': { 
        'basic': {
            'format': '%(asctime)s: %(name)s - %(levelname)s - %(message)s',
            'datefmt' : '%Y-%m-%d %H:%M:%S'
        },
    },
    'handlers': { 
        'stream_handler': { 
            'level': 'DEBUG',
            'formatter': 'basic',
            'class': 'logging.StreamHandler',
        },
        'rotating_file_handler': { 
            'level': 'ERROR',
            'formatter': 'basic',
            'class': 'logging.handlers.RotatingFileHandler',
            'backupCount': 5,
            'maxBytes': 100,
            'filename': 'app.log'
        },
    },
    'loggers': { 
        '': {
            'handlers': ['stream_handler'],
            'level': 'WARNING',
            'propagate': False
        },
        'app': { 
            'handlers': ['stream_handler', 'rotating_file_handler'],
            'level': 'DEBUG',
            'propagate': False
        },
    } 
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The config file is very similar to what we have been doing with logging helper methods like &lt;code&gt;setFormatter&lt;/code&gt; and &lt;code&gt;addHandler&lt;/code&gt;. Dissecting this config dictionary:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We have one formatter defined here, named "&lt;strong&gt;basic&lt;/strong&gt;". We specify our log format with the &lt;code&gt;format&lt;/code&gt; key and our date format with the &lt;code&gt;datefmt&lt;/code&gt; key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We define two handlers, stream_handler and rotating_file_handler, similar to what we have done before. We set the priority level of "DEBUG" for the stream handler and "ERROR" for the rotating file handler.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, we define our loggers. The empty string key is synonymous with saying 'root'. We give our root logger the &lt;em&gt;stream_handler&lt;/em&gt; handler we defined earlier. Then define a new logger named &lt;strong&gt;app&lt;/strong&gt; and give it both handlers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We set the &lt;code&gt;propagate&lt;/code&gt; key to false since we explicitly defined handlers for our loggers and don't want duplicated logs from loggers higher up the hierarchy.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
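
&lt;p&gt;As a minimal runnable illustration of the same pattern (the logger name and the in-memory stream are assumptions for this sketch, not part of the article's config):&lt;/p&gt;

```python
import io
import logging
import logging.config

# An in-memory stream stands in for the console so the result can be read back.
stream = io.StringIO()

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {'basic': {'format': '%(levelname)s %(message)s'}},
    'handlers': {
        'mem': {'class': 'logging.StreamHandler',
                'formatter': 'basic',
                'stream': stream},
    },
    'loggers': {
        'demo.cfg': {'handlers': ['mem'], 'level': 'INFO', 'propagate': False},
    },
})

logging.getLogger('demo.cfg').info('configured via dictConfig')
print(stream.getvalue())  # INFO configured via dictConfig
```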

&lt;p&gt;Now in our app.py file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import logging
import logging.config
from logging_config import LOGGING_CONFIG

logging.config.dictConfig(LOGGING_CONFIG)

LOGGER = logging.getLogger('app')

LOGGER.debug('Nothing to see here, Just trying to debug')
LOGGER.info('Hey, here is some information you might find useful')
LOGGER.warning('something may be wrong here!!! warning')
LOGGER.error('An error definitely just happened. You might want to take a look!')
LOGGER.critical('Oops.. Critical server error.')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, our app file looks a lot cleaner. Now, run the script again.&lt;/p&gt;

&lt;p&gt;You should notice new logs in both the terminal and the log files. If you look closely, you should realize all logs are displayed on the console, but only "ERROR" and "CRITICAL" logs appear in our log file. This is because of how we defined our &lt;code&gt;LOGGING_CONFIG&lt;/code&gt;: "stream_handler" has its level set to "DEBUG" while "rotating_file_handler" has its level set to "ERROR".&lt;/p&gt;

&lt;p&gt;...And that's it.&lt;/p&gt;

&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we learned the importance of logging in Python, the components of logging in Python, and how to create and use our own loggers. We also saw how to specify new formats for our logs, and how to send logs to different outputs like the console/terminal and files.&lt;/p&gt;




&lt;p&gt;If you found this article useful or learned something new, consider dropping a heart up and following me to keep up-to-date with any recent postings!&lt;/p&gt;

&lt;p&gt;You can also find me on Twitter at &lt;a href="https://mobile.twitter.com/akinsola232" rel="noopener noreferrer"&gt;&lt;strong&gt;akinsola232&lt;/strong&gt;&lt;/a&gt; and on LinkedIn at &lt;a href="https://www.linkedin.com/in/ademola-akinsola-3191571a2/?originalSubdomain=ng" rel="noopener noreferrer"&gt;&lt;strong&gt;Ademola&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Till next time, happy coding!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Signed.... Humanity's Strongest Soldier&lt;/em&gt;, Levi🧑💻👋.&lt;/p&gt;

</description>
      <category>python</category>
      <category>logging</category>
    </item>
    <item>
      <title>How to download data in multiple file formats (CSV, XLS, TXT) with Django REST Framework</title>
      <dc:creator>Ademola Akinsola</dc:creator>
      <pubDate>Wed, 25 Jan 2023 12:29:19 +0000</pubDate>
      <link>https://dev.to/leviackerman/how-to-download-data-in-multiple-file-formats-csv-xls-txt-with-django-rest-framework-gi3</link>
      <guid>https://dev.to/leviackerman/how-to-download-data-in-multiple-file-formats-csv-xls-txt-with-django-rest-framework-gi3</guid>
      <description>&lt;p&gt;A lot of the time, our typical server response is in the form of JSON or XML. This serves our use cases a good number of times, however, there are times when the need to provide data in the form of a file arises.&lt;/p&gt;

&lt;p&gt;In this article, we will be exploring how to convert our model data into files and send them as responses in Django REST Framework (DRF). We will do this by building out a simple project.&lt;/p&gt;

&lt;h2&gt;
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You should have some basic Django and Django REST Framework knowledge. You should also have Python already installed on your system.&lt;/p&gt;

&lt;h2&gt;
  The project, Setting up and Installation
&lt;/h2&gt;

&lt;p&gt;We will be building out a simple Student Management App. The purpose of the app will be to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Allow a user to input student data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download all the available data as either a CSV, Excel or TXT file.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  Creating the project and Installing Dependencies
&lt;/h3&gt;

&lt;p&gt;We will install Django and DRF to start. To do this, first, create your virtual environment and activate it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 -m venv studentappenv
source studentappenv/bin/activate

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now install Django and DRF. We will also install &lt;code&gt;openpyxl&lt;/code&gt;, a Python package for working with Excel files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install django djangorestframework openpyxl

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create the Django project and create an app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;django_admin startproject student_management .
python manage.py startapp student_data

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update your settings.py and include the new app inside the &lt;code&gt;INSTALLED_APPS&lt;/code&gt; list.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    'student_data', #new line
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's create our model. Head over to &lt;code&gt;student_data/models.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.db import models

class StudentData(models.Model):

    STUDENT_GENDER = ((1, 'Male'), (2, 'Female'), (3, 'Other'))
    STUDENT_LEVEL = ((1, 'Junior'), (2, 'Senior'))

    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
    age = models.IntegerField()
    gender = models.IntegerField(choices=STUDENT_GENDER)
    level = models.IntegerField(choices=STUDENT_LEVEL)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A pretty simple model that stores basic student information. This will be good enough for our needs.&lt;/p&gt;

&lt;p&gt;Next up, we create our serializer. Create a new file &lt;code&gt;serializers.py&lt;/code&gt; inside the &lt;code&gt;student_data&lt;/code&gt; folder and add the following inside.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from rest_framework import serializers
from .models import StudentData

class StudentDataSerializer(serializers.ModelSerializer):

    class Meta:
        model = StudentData
        fields = '__all__'

    def to_representation(self, instance):
        data = super().to_representation(instance)

        data['gender'] = instance.get_gender_display()
        data['level'] = instance.get_level_display()

        return data

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is also a basic serializer for our &lt;code&gt;StudentData&lt;/code&gt; model. We override our &lt;code&gt;to_representation&lt;/code&gt; to change how the server would return values for the &lt;code&gt;gender&lt;/code&gt; and &lt;code&gt;level&lt;/code&gt; fields. Instead of returning something like &lt;code&gt;'gender': '1'&lt;/code&gt;, we will instead get &lt;code&gt;'gender': 'Male'&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We have our model and serializer, now, let's create our view. Inside &lt;code&gt;student_data/views.py&lt;/code&gt;, add the following lines of code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.utils import timezone
from rest_framework.response import Response

from rest_framework.viewsets import ModelViewSet
from rest_framework.decorators import action
from .models import StudentData
from .serializers import StudentDataSerializer

class StudentDataViewset(ModelViewSet):
    serializer_class = StudentDataSerializer
    queryset = StudentData.objects.all()

    @action(detail=False, methods=["get"])
    def download(self, request):
        queryset = self.get_queryset()
        serializer = StudentDataSerializer(queryset, many=True)

        return Response(serializer.data)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is our view, which will handle &lt;code&gt;POST&lt;/code&gt; requests to create our student data. We have also defined a new viewset action, &lt;code&gt;download&lt;/code&gt;, which we will use for downloading our file responses. Right now it simply returns the data as a JSON response; we will tweak it soon to meet our needs.&lt;/p&gt;

&lt;p&gt;Let's connect our URL paths. Create a &lt;code&gt;urls.py&lt;/code&gt; file in the &lt;code&gt;student_data&lt;/code&gt; folder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# student_data/urls.py

from .views import StudentDataViewset
from django.urls import path, include

from rest_framework import routers

router = routers.DefaultRouter()
router.register("", StudentDataViewset, basename="student-data")

urlpatterns = [
    path("", include(router.urls)),
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Connect this to the main urls file in the &lt;code&gt;student_management&lt;/code&gt; folder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# student_management/urls.py
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('student-data/', include('student_data.urls')),
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run makemigrations and migrate&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py makemigrations
python manage.py migrate

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start the server with &lt;code&gt;python manage.py runserver&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Make a few &lt;code&gt;POST&lt;/code&gt; requests at &lt;code&gt;http://127.0.0.1:8000/student-data/&lt;/code&gt; to populate our database with some data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1674396859058%2Fa6780870-75d5-4b36-8496-59e5f727b1f6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1674396859058%2Fa6780870-75d5-4b36-8496-59e5f727b1f6.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now test the &lt;code&gt;http://127.0.0.1:8000/student-data/download/&lt;/code&gt; endpoint with a &lt;code&gt;GET&lt;/code&gt; request to see the response type. You will get JSON data as a response, similar to what we saw from our &lt;code&gt;POST&lt;/code&gt; request. This is because DRF sets a global default renderer, &lt;code&gt;rest_framework.renderers.JSONRenderer&lt;/code&gt;, which returns server responses with the &lt;code&gt;application/json&lt;/code&gt; media type.&lt;/p&gt;

&lt;p&gt;We will create our own renderers which will convert our data into the response formats we want.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;renderers.py&lt;/code&gt; file inside the &lt;code&gt;student_data&lt;/code&gt; folder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# student_data/renderers.py
import io
import csv
from rest_framework import renderers

STUDENT_DATA_FILE_HEADERS = ["id", "first_name", "last_name", "age", "gender", "level"]

class CSVStudentDataRenderer(renderers.BaseRenderer):

    media_type = "text/csv"
    format = "csv"

    def render(self, data, accepted_media_type=None, renderer_context=None):

        csv_buffer = io.StringIO()
        csv_writer = csv.DictWriter(csv_buffer, fieldnames=STUDENT_DATA_FILE_HEADERS, extrasaction="ignore")
        csv_writer.writeheader()

        for student_data in data:
            csv_writer.writerow(student_data)

        return csv_buffer.getvalue()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is going to be the renderer that will convert our data into CSV format. Let's break this code down.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We set a &lt;code&gt;STUDENT_DATA_FILE_HEADERS&lt;/code&gt; list which contains all our model fields.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Our &lt;code&gt;CSVStudentDataRenderer&lt;/code&gt; class subclasses &lt;code&gt;BaseRenderer&lt;/code&gt; and defines the &lt;code&gt;media_type&lt;/code&gt; and &lt;code&gt;format&lt;/code&gt; attributes. There are many different media types, which you can find in &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types" rel="noopener noreferrer"&gt;MDN's list of common MIME types&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We define a &lt;code&gt;render&lt;/code&gt; method where the bulk of the work is done. Inside it:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We create a string buffer named &lt;code&gt;csv_buffer&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We create an instance of &lt;code&gt;csv.DictWriter&lt;/code&gt; and pass in our buffer as an argument.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We call &lt;code&gt;csv_writer.writeheader()&lt;/code&gt; which will write the values in &lt;code&gt;STUDENT_DATA_FILE_HEADERS&lt;/code&gt; as the first line of our CSV file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We use the &lt;code&gt;csv&lt;/code&gt; helper module to write our data as comma-separated values into the buffer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We retrieve the content written into our buffer with &lt;code&gt;csv_buffer.getvalue()&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
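
&lt;p&gt;The buffer-writing pattern above also works outside of DRF. Here is a minimal, self-contained sketch (the sample row is made up for illustration) showing how &lt;code&gt;csv.DictWriter&lt;/code&gt; with &lt;code&gt;extrasaction="ignore"&lt;/code&gt; silently drops keys that are not in the header list:&lt;/p&gt;

```python
# Standalone sketch of the renderer's CSV logic: rows (dicts) -> CSV string.
import csv
import io

STUDENT_DATA_FILE_HEADERS = ["id", "first_name", "last_name", "age", "gender", "level"]

def rows_to_csv(rows):
    csv_buffer = io.StringIO()
    # extrasaction="ignore" drops dict keys that are not listed in fieldnames
    writer = csv.DictWriter(csv_buffer, fieldnames=STUDENT_DATA_FILE_HEADERS, extrasaction="ignore")
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return csv_buffer.getvalue()

sample = [{"id": 1, "first_name": "Ada", "last_name": "Lovelace",
           "age": 20, "gender": "F", "level": 400, "unknown_field": "dropped"}]
print(rows_to_csv(sample))
```

&lt;p&gt;Without &lt;code&gt;extrasaction="ignore"&lt;/code&gt;, a stray key would raise a &lt;code&gt;ValueError&lt;/code&gt; instead of being skipped.&lt;/p&gt;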

&lt;p&gt;Let's add renderers for Excel and text files next. Update the &lt;code&gt;renderers.py&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import io
import csv
import openpyxl # new
from rest_framework import renderers

STUDENT_DATA_FILE_HEADERS = ["id", "first_name", "last_name", "age", "gender", "level"]

...

# ------ NEW LINES --------
class TextStudentDataRenderer(renderers.BaseRenderer):

    media_type = "text/plain"
    format = "txt"

    def render(self, data, accepted_media_type=None, renderer_context=None):

        text_buffer = io.StringIO()
        text_buffer.write(' '.join(header for header in STUDENT_DATA_FILE_HEADERS) + '\n')

        for student_data in data:
            text_buffer.write(' '.join(str(sd) for sd in list(student_data.values())) + '\n')

        return text_buffer.getvalue()

class ExcelStudentDataRenderer(renderers.BaseRenderer):

    media_type = "application/vnd.ms-excel"
    format = "xls"

    def render(self, data, accepted_media_type=None, renderer_context=None):    

        workbook = openpyxl.Workbook()
        buffer = io.BytesIO()
        worksheet = workbook.active
        worksheet.append(STUDENT_DATA_FILE_HEADERS)

        for student_data in data:
            worksheet.append(list(student_data.values()))

        workbook.save(buffer)

        return buffer.getvalue()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The idea is the same as in our &lt;code&gt;CSVStudentDataRenderer&lt;/code&gt;. We define the right &lt;code&gt;media_type&lt;/code&gt; and &lt;code&gt;format&lt;/code&gt; for our expected file, create a buffer into which we write our headers and model data, and then return the content of the buffer. In the case of &lt;code&gt;ExcelStudentDataRenderer&lt;/code&gt;, we use the &lt;code&gt;openpyxl&lt;/code&gt; library to simplify the buffer writing process.&lt;/p&gt;
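
&lt;p&gt;One detail worth noting: the Excel renderer writes into &lt;code&gt;io.BytesIO&lt;/code&gt; while the CSV and text renderers use &lt;code&gt;io.StringIO&lt;/code&gt;. A quick sketch of the difference:&lt;/p&gt;

```python
# Text buffers hold str, binary buffers hold bytes. Workbook files are
# binary content, so openpyxl must save into a bytes buffer.
import io

text_buffer = io.StringIO()
text_buffer.write("id first_name last_name")

binary_buffer = io.BytesIO()
binary_buffer.write(b"PK")  # e.g. the first bytes of a zip-based workbook

print(type(text_buffer.getvalue()).__name__)    # str
print(type(binary_buffer.getvalue()).__name__)  # bytes
```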

&lt;p&gt;Now that we have our renderers, let's update our view to use them. Update your &lt;code&gt;student_data/views.py&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.utils import timezone
from rest_framework.response import Response

from rest_framework.viewsets import ModelViewSet
from rest_framework.decorators import action
from .models import StudentData
from .renderers import CSVStudentDataRenderer, ExcelStudentDataRenderer, TextStudentDataRenderer
from .serializers import StudentDataSerializer

class StudentDataViewset(ModelViewSet):
    serializer_class = StudentDataSerializer
    queryset = StudentData.objects.all()

    @action(detail=False, methods=["get"], renderer_classes=[CSVStudentDataRenderer, ExcelStudentDataRenderer, TextStudentDataRenderer])
    def download(self, request):
        queryset = self.get_queryset()

        now = timezone.now()        
        file_name = f"student_data_archive_{now:%Y-%m-%d_%H-%M-%S}.{request.accepted_renderer.format}"
        serializer = StudentDataSerializer(queryset, many=True)
        return Response(serializer.data, headers={"Content-Disposition": f'attachment; filename="{file_name}"'})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The three main takeaways from the new changes are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We pass a &lt;code&gt;renderer_classes&lt;/code&gt; argument to our &lt;code&gt;@action&lt;/code&gt; decorator. This will be used by the &lt;code&gt;download&lt;/code&gt; endpoint when choosing a renderer. How will it decide? We will talk about that in a minute.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We create a distinct file name based on the current time and append the format of the renderer which would be used to serve the particular response, found at &lt;code&gt;request.accepted_renderer.format&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We set a &lt;code&gt;Content-Disposition&lt;/code&gt; header, which lets the client know that the response should be treated as an attachment to be downloaded under the value of &lt;code&gt;filename&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
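
&lt;p&gt;The filename logic in point 2 can be tried in isolation. A small sketch with a fixed timestamp (in the view, the suffix comes from &lt;code&gt;request.accepted_renderer.format&lt;/code&gt;):&lt;/p&gt;

```python
# Build the download filename the same way the view does,
# using a fixed datetime so the result is predictable.
from datetime import datetime

now = datetime(2023, 1, 22, 18, 30, 5)  # stand-in for timezone.now()
accepted_format = "csv"  # stand-in for request.accepted_renderer.format

file_name = f"student_data_archive_{now:%Y-%m-%d_%H-%M-%S}.{accepted_format}"
content_disposition = f'attachment; filename="{file_name}"'
print(content_disposition)
```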

&lt;p&gt;Let's head back to Postman to test our &lt;code&gt;download&lt;/code&gt; endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1674420838435%2F38385d50-efb2-4b8d-b51a-273ad4330b31.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1674420838435%2F38385d50-efb2-4b8d-b51a-273ad4330b31.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I get a CSV file downloaded onto my system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1674403494728%2F15a6ee8e-d8c1-4a49-ba90-fb26b03f4904.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1674403494728%2F15a6ee8e-d8c1-4a49-ba90-fb26b03f4904.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great. This means DRF decided to use our &lt;code&gt;CSVStudentDataRenderer&lt;/code&gt;, but how did it decide? DRF has a content negotiation mechanism which it uses to determine how to render a response back to the client. It looks at the &lt;code&gt;Accept&lt;/code&gt; header in the request and tries to map it to an available renderer. If it is unable to map it to any specific one, by default, DRF chooses the first renderer in the &lt;code&gt;renderer_classes&lt;/code&gt; list. This is why we get a CSV file in return.&lt;/p&gt;

&lt;p&gt;Let's edit our &lt;code&gt;Accept&lt;/code&gt; request header and specify the Excel media type(&lt;code&gt;application/vnd.ms-excel&lt;/code&gt;) then retry the request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1674404258706%2Fb35258b5-2767-4e78-89f8-adf9b32c1fdf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1674404258706%2Fb35258b5-2767-4e78-89f8-adf9b32c1fdf.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I now get an Excel file downloaded.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1674404697656%2Fa29527d4-0c44-4c99-a781-a8d359148ac9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1674404697656%2Fa29527d4-0c44-4c99-a781-a8d359148ac9.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if we update the &lt;code&gt;Accept&lt;/code&gt; header to &lt;code&gt;text/plain&lt;/code&gt;, we get a &lt;code&gt;txt&lt;/code&gt; file downloaded as DRF chooses the text renderer class.&lt;/p&gt;
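
&lt;p&gt;The negotiation described above can be modelled in a few lines. This is an illustrative simplification, not DRF's actual implementation (which also handles quality values and format suffixes):&lt;/p&gt;

```python
# Simplified content negotiation: match the Accept header against the
# renderers' media types, in the order the client listed them; if nothing
# matches, fall back to the first renderer (our CSV renderer).
RENDERER_MEDIA_TYPES = ["text/csv", "application/vnd.ms-excel", "text/plain"]

def pick_media_type(accept_header):
    requested = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for media_type in requested:
        if media_type in RENDERER_MEDIA_TYPES:
            return media_type
    return RENDERER_MEDIA_TYPES[0]

print(pick_media_type("application/vnd.ms-excel"))  # the Excel media type
print(pick_media_type("*/*"))  # no exact match: falls back to text/csv
```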

&lt;p&gt;That's it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we learned how to create our own custom renderers for specific media types and formats like Excel, CSV, and text. We also saw how to help our application pick the right renderer using the &lt;code&gt;Accept&lt;/code&gt; request header.&lt;/p&gt;




&lt;p&gt;If you found this article useful or learned something new, consider dropping a heart up and following me to keep up-to-date with any recent postings!&lt;/p&gt;

&lt;p&gt;You can also find me on Twitter at &lt;a href="https://mobile.twitter.com/akinsola232" rel="noopener noreferrer"&gt;akinsola232&lt;/a&gt; and on LinkedIn at &lt;a href="https://www.linkedin.com/in/ademola-akinsola-3191571a2/?originalSubdomain=ng" rel="noopener noreferrer"&gt;Ademola&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Till next time, happy coding!&lt;/p&gt;

&lt;p&gt;Levi&lt;/p&gt;

</description>
      <category>python</category>
      <category>csv</category>
      <category>django</category>
      <category>excel</category>
    </item>
    <item>
      <title>Introduction to Elastic Beanstalk with Django, RDS, Docker and Nginx</title>
      <dc:creator>Ademola Akinsola</dc:creator>
      <pubDate>Thu, 19 Jan 2023 17:37:47 +0000</pubDate>
      <link>https://dev.to/leviackerman/introduction-to-elastic-beanstalk-with-django-rds-docker-and-nginx-3p3d</link>
      <guid>https://dev.to/leviackerman/introduction-to-elastic-beanstalk-with-django-rds-docker-and-nginx-3p3d</guid>
      <description>&lt;p&gt;In this article, we will learn about Elastic Beanstalk and its capabilities. We will understand the problem that Elastic Beanstalk solves and the steps required to set up and deploy a Django app with Elastic Beanstalk.&lt;/p&gt;

&lt;p&gt;We will also be using Docker for containerizing the app and Nginx as a reverse proxy server. Additionally, we will learn how to connect an RDS instance with our Beanstalk application.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Elastic Beanstalk?
&lt;/h2&gt;

&lt;p&gt;Elastic Beanstalk (EB) is a managed AWS service that allows you to upload, deploy and manage your applications easily. It deals with the provisioning of resources needed for the application such as EC2 instances, Cloudwatch for logs, Auto Scaling Groups, Load Balancers, Databases, and Proxy Servers (Nginx, Apache) - all of which are customizable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the bare project
&lt;/h2&gt;

&lt;p&gt;To focus on the main theme of this article, Elastic Beanstalk, we will be cloning a very simple Django app where we make a &lt;code&gt;POST&lt;/code&gt; request to add some inventory data into our database and a &lt;code&gt;GET&lt;/code&gt; request to retrieve it. Head over to &lt;a href="https://github.com/shols232/ebs-django-docker-tutorial" rel="noopener noreferrer"&gt;this GitHub repository&lt;/a&gt; and clone the app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Docker
&lt;/h2&gt;

&lt;p&gt;We will be using Docker for containerization. This is to maintain consistency between the development and production environments and eliminate all forms of &lt;code&gt;But it works on my computer&lt;/code&gt; problems.&lt;/p&gt;

&lt;p&gt;For this, you should have Docker and Docker Compose already installed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Writing our Dockerfile
&lt;/h3&gt;

&lt;p&gt;Let us create a Dockerfile in the root directory. Copy the following into it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.9-bullseye

ARG PROJECT_DIR=/home/app/code
ENV PYTHONUNBUFFERED 1

WORKDIR $PROJECT_DIR

RUN useradd non_root &amp;amp;&amp;amp; chown -R non_root $PROJECT_DIR

RUN python -m pip install --upgrade pip
COPY requirements.txt $PROJECT_DIR
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . $PROJECT_DIR
RUN python manage.py collectstatic --noinput

EXPOSE 8000

USER non_root

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What are we doing here?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We specify our base image as &lt;code&gt;python:3.9-bullseye&lt;/code&gt; with the &lt;code&gt;FROM&lt;/code&gt; instruction. This is, simply put, a Debian OS with Python 3.9 installed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We create a new directory where we will be moving our Django application code into.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We create a new user and give it ownership of the application code directory (This is to follow docker security best practices).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We install the app dependencies from requirements.txt and run &lt;code&gt;collectstatic&lt;/code&gt; to collect all our static files into a single directory path which we have already defined in our settings.py file as &lt;code&gt;STATIC_ROOT&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We EXPOSE port 8000 of the container. The proxy server we will create will forward requests to this port.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Writing our docker-compose file
&lt;/h3&gt;

&lt;p&gt;Now, Let us add a compose file. Create a docker-compose.yml file at the root directory and add the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.6'
services:
  web:
    build:
      context: .
    command: sh -c "python manage.py migrate &amp;amp;&amp;amp;
                    gunicorn ebs_django.wsgi:application --bind 0.0.0.0:8000" 
    volumes: 
      - static_volume:/home/app/code/static
    env_file: 
      - .env
    image: web
    restart: "on-failure"
  nginx:
    build: 
      context: ./nginx
    ports: 
    - 80:80 
    volumes:  
      - static_volume:/home/app/code/static 
    depends_on:
      - web 

volumes:
  static_volume:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We define two services here:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;WEB&lt;/strong&gt;: This is our application. It will be built based on instructions defined in our Dockerfile.&lt;br&gt;
Once built, we run &lt;code&gt;manage.py migrate&lt;/code&gt; to create our database schemas from our migration files. Then we bind gunicorn to the port 8000 of the machine. Gunicorn will serve our Django app through it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;NGINX&lt;/strong&gt;: This is a web server we will be using as our reverse proxy. (A reverse proxy server is a server that sits in front of our application and routes incoming requests from external sources (users) to our provisioned servers.)&lt;br&gt;
We will also use Nginx to serve our static files, hence the binding to the volume &lt;code&gt;static_volume&lt;/code&gt;, which is populated by our web container when we run &lt;code&gt;collectstatic&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next, create a folder named &lt;code&gt;nginx&lt;/code&gt; and add a Dockerfile inside with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM nginx:1.23.3

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d/nginx.conf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the Dockerfile our Nginx container will be created from. It just defines a base image, deletes the &lt;code&gt;default.conf&lt;/code&gt; provided by Nginx, and adds a new one named &lt;code&gt;nginx.conf&lt;/code&gt;, which we will create now.&lt;/p&gt;

&lt;p&gt;Create the file and copy the following into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {

    listen 80 default_server; # default external port. Anything coming from port 80 will go through NGINX

    location / {
        proxy_pass http://web:8000;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
    location /static/ {
        alias /home/app/code/static/; # Our static files
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What is happening here?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The Nginx server listens on the machine's port 80 for incoming requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Requests received are forwarded to our Django (gunicorn) server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We define another &lt;code&gt;location&lt;/code&gt; directive &lt;code&gt;/static/&lt;/code&gt;, this is so that Nginx directly serves our static files as we mentioned earlier.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;NOTE: We manually set up Nginx as a container since EB does not create an Nginx proxy server for us when we use a &lt;code&gt;docker-compose.yml&lt;/code&gt; file. If we were to use only a Dockerfile, EB would provision an Nginx server for us.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing necessary dependencies
&lt;/h3&gt;

&lt;p&gt;We need to install some packages for our Docker container. Since we will be using gunicorn to serve our Django app and PostgreSQL as our database, we need to install gunicorn as well as psycopg2-binary. You can do that with the following command in your activated virtual environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(ebsenv) ebs-django-docker-tutorial git:(master) pip install psycopg2-binary gunicorn

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;pip freeze&lt;/code&gt; to update your &lt;code&gt;requirements.txt&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(ebsenv) ebs-django-docker-tutorial git:(master) pip freeze &amp;gt; requirements.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Integrating Elastic Beanstalk
&lt;/h2&gt;

&lt;p&gt;First off, let's understand two terms:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Environment: This is a collection of all the AWS resources provisioned by EB to run your application code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Application: This is a collection of environments, application code versions, and environment configurations. You can have multiple environments in a single application. For example, you could decide to have a production environment running a specific application code version, and a QA environment running another application code version.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Alright, now, back to the code. EB provides us with several tools we can use to configure our environments and deploy our applications.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;EB CLI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS SDK (Boto for python)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EB Console&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We will be using a mix of the CLI and the EB console. To install the CLI, use the command below inside your virtual env.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(eb_python_testenv) EBS-TEST git:(master) pip install awsebcli

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Head over to the &lt;a href="https://docs.aws.amazon.com/powershell/latest/userguide/pstools-appendix-sign-up.html" rel="noopener noreferrer"&gt;AWS console to get your AWS credentials&lt;/a&gt; (aws_access_key, aws_secret_access_key), rename &lt;code&gt;.env.example&lt;/code&gt; file to &lt;code&gt;.env&lt;/code&gt; and paste your credentials there.&lt;/p&gt;

&lt;p&gt;We can run &lt;code&gt;eb init&lt;/code&gt; to initialize some configurations (platform, region) for our application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(eb_python_testenv) EBS-TEST git:(master) eb init eb-docker-rds-django

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, EB provisions the resources in your application inside the &lt;code&gt;us-west-2 (US West (Oregon))&lt;/code&gt; region; you can choose a different region by picking its corresponding number in the CLI. For this tutorial, we will be using &lt;strong&gt;us-east-1&lt;/strong&gt;. When the CLI asks if you are using Docker, choose yes, then pick the Docker option when asked to select a platform. When prompted to use CodeCommit, pick no: CodeCommit is a version control service and we are already using GitHub. When asked to set up SSH, choose no as we will not be needing it here.&lt;/p&gt;

&lt;p&gt;Once that's done, you will see a new &lt;code&gt;.elasticbeanstalk&lt;/code&gt; folder created for you, containing a &lt;code&gt;config.yml&lt;/code&gt; file. It defines some configurations for when we eventually create our environment.&lt;/p&gt;

&lt;p&gt;Commit your changes with &lt;code&gt;git add .&lt;/code&gt; and &lt;code&gt;git commit -m "your commit message"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, we create our environment with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(ebsenv) ebs-django-docker-tutorial git:(master) eb create eb-docker-rds-django

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;eb-docker-rds-django&lt;/code&gt; is our environment name. Wait a couple of minutes while EB provisions your resources. Once that is done, we can head over to the AWS EB console to see the status of our environment, or we can use &lt;code&gt;eb status&lt;/code&gt; in our terminal directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbhkmdo9y3ktvzl52hfa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbhkmdo9y3ktvzl52hfa.png" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the health of our app is currently red. This is because in our settings.py file we attempt to retrieve credentials for a database we do not have yet. You can confirm this by typing &lt;code&gt;eb logs&lt;/code&gt; in your terminal and scrolling through the logs.&lt;/p&gt;

&lt;p&gt;Let us create an AWS Relational Database Service (RDS) for our EB environment.&lt;/p&gt;

&lt;p&gt;Head over to the &lt;a href="http://console.aws.amazon.com/rds/home" rel="noopener noreferrer"&gt;AWS RDS console&lt;/a&gt;, click on &lt;code&gt;Databases&lt;/code&gt; and click &lt;code&gt;Create database&lt;/code&gt;. Next,&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Choose &lt;code&gt;Standard create&lt;/code&gt; option.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose PostgreSQL as your database engine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;code&gt;Credentials settings&lt;/code&gt;, type in a master username and a master password. (Write them down somewhere).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll to additional configurations, expand, and under &lt;code&gt;Initial database name&lt;/code&gt;, type the name you want to use for your application database (Write it down).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leave everything else the same and click &lt;code&gt;Create Database&lt;/code&gt; .&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once that is created successfully, we need to edit the security groups to let the EC2 instances of our EB environment access our RDS instance. To do this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click on the newly created database. Scroll down to &lt;code&gt;Security group rules&lt;/code&gt; and click on the Inbound Security Group. Click on &lt;code&gt;Inbound Rules&lt;/code&gt; tab and click on &lt;code&gt;Edit inbound rules&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For type, choose Postgres.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the source, we will choose the security group attached to our EC2 instances by EB. To find it, head to the Auto Scaling Group (ASG) service, click on your provisioned ASG, scroll to &lt;code&gt;Launch Configurations&lt;/code&gt;, and click on the security group. On the new page, you will see the &lt;code&gt;security group id&lt;/code&gt; of your ASG; copy it and paste it into the source search bar.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;code&gt;save rules&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We have created our database; now we need to add the environment variables inside our EB environment. EB provides several ways to do that, including using the CLI with &lt;code&gt;eb setenv key=value&lt;/code&gt; or directly through the console. Let us head over to the &lt;a href="http://console.aws.amazon.com/elasticbeanstalk/home" rel="noopener noreferrer"&gt;EB console&lt;/a&gt; to do this. Under environments, click on your environment. On the left sidebar, click &lt;code&gt;Configuration&lt;/code&gt;; under &lt;code&gt;Software&lt;/code&gt;, click &lt;code&gt;Edit&lt;/code&gt;, scroll down to &lt;code&gt;Environment properties&lt;/code&gt;, and add your RDS credentials. Also add your &lt;code&gt;DJANGO_SECRET_KEY&lt;/code&gt;.&lt;/p&gt;
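
&lt;p&gt;For reference, a settings.py file typically consumes these environment properties with &lt;code&gt;os.environ&lt;/code&gt;. A sketch (the variable names mirror the &lt;code&gt;RDS_*&lt;/code&gt; names used later in this tutorial's &lt;code&gt;.env&lt;/code&gt; file; the values below are simulated for illustration):&lt;/p&gt;

```python
# How settings.py can read the RDS credentials that EB injects as
# environment properties. The update() call only simulates that injection.
import os

os.environ.update({
    "RDS_DB_NAME": "test", "RDS_USERNAME": "test", "RDS_PASSWORD": "test123",
    "RDS_HOSTNAME": "db", "RDS_PORT": "5432",
})

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ["RDS_DB_NAME"],
        "USER": os.environ["RDS_USERNAME"],
        "PASSWORD": os.environ["RDS_PASSWORD"],
        "HOST": os.environ["RDS_HOSTNAME"],
        "PORT": os.environ["RDS_PORT"],
    }
}
print(DATABASES["default"]["HOST"])
```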

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frm0j2lh221tcm0butcag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frm0j2lh221tcm0butcag.png" width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can find your database hostname under the connectivity tab of the RDS instance.&lt;/p&gt;

&lt;p&gt;Next, we need to add our domain to the &lt;code&gt;ALLOWED_HOSTS&lt;/code&gt; setting of Django. Head over to the EB console, click on your environment and grab the URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjeppccohif4wgtaah76w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjeppccohif4wgtaah76w.png" width="800" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to your settings.py file and update the ALLOWED_HOSTS list.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ebs_django/settings.py
# looks something like eb-docker-rds-django.xxx-xxxxxxxx.us-east-1.elasticbeanstalk.com
ALLOWED_HOSTS = ['YOUR_ENVIRONMENT_URL']

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Commit your changes with &lt;code&gt;git add .&lt;/code&gt; and &lt;code&gt;git commit -m "your commit message"&lt;/code&gt;, and deploy the new version with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(ebsenv) ebs-django-docker-tutorial git:(master) eb deploy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Head over to Postman to test the app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4b6gm0mih77hchjg82g4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4b6gm0mih77hchjg82g4.png" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It works, yay! I guess we're done... well, not quite. The EB CLI provides us with the tools necessary to run our environment locally, which is usually great during development. We can do this with the command &lt;code&gt;eb local run&lt;/code&gt;. Unfortunately, &lt;a href="https://github.com/aws/aws-elastic-beanstalk-cli/issues/58" rel="noopener noreferrer"&gt;EB "local run" currently has no support for reading docker-compose.yml files&lt;/a&gt;, so what's a workaround?&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Compose for Development
&lt;/h2&gt;

&lt;p&gt;We can simply create a new docker-compose file to simulate our EB environment. Luckily, in our current setup, we really just need to add a database and we are set. Let's do that.&lt;/p&gt;

&lt;p&gt;Create a new docker-compose file called &lt;code&gt;docker-compose.dev.yml&lt;/code&gt;. This will be used specifically for development purposes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.6'
services:
  web:
    build: 
      context: .
    command: sh -c "python manage.py makemigrations &amp;amp;&amp;amp; python manage.py migrate &amp;amp;&amp;amp;
                    gunicorn ebs_django.wsgi:application --bind 0.0.0.0:8000 --reload" # updated command
    volumes: 
      - ./:/home/app/code/ # new volume
      - static_volume:/home/app/code/static
    env_file: 
      - .env
    image: web
    depends_on:
      - db
    restart: "on-failure"

  nginx:
    build: 
      context: ./nginx
    ports: 
    - 80:80 
    volumes:  
      - static_volume:/home/app/code/static 
    depends_on:
      - web
  db: # new db service
    image: postgres:15-alpine 
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${RDS_PASSWORD}
      - POSTGRES_DB=${RDS_DB_NAME}
      - POSTGRES_USER=${RDS_USERNAME}
    ports:
      - 5432:5432

volumes:
  pgdata: # new line
  static_volume:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are a few differences in this compose file compared with our initial one.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The command in our web service now includes a &lt;code&gt;--reload&lt;/code&gt; flag. This ensures gunicorn reloads the server after source code changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A new volume in &lt;code&gt;web&lt;/code&gt; to make changes in our local files reflect in our container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A new db service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A new &lt;code&gt;pgdata&lt;/code&gt; volume to make sure our db data persists.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Update your .env file to include the credentials for our development DB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Develop
DJANGO_SECRET_KEY='secret_key'
DEBUG="1"
ALLOWED_HOSTS=*

# --- ADDED LINES -----
RDS_HOSTNAME=db
RDS_DB_NAME='test'
RDS_PASSWORD='test123'
RDS_USERNAME='test'
RDS_PORT=5432
# ---------------------
aws_access_key=AKIA ************* BXE
aws_secret_access_key=bheyut ******************* ywtRuUfChDL5r

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start the development server with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ebs-django-docker-tutorial git:(master) docker-compose -f docker-compose.dev.yml up --build

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the &lt;code&gt;ALLOWED_HOSTS&lt;/code&gt; in your settings.py file to include &lt;code&gt;127.0.0.1&lt;/code&gt;. Now head over to Postman and test the development server with an empty POST request to &lt;code&gt;http://127.0.0.1/inventory-item&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6je5ws7wje9ept36puv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6je5ws7wje9ept36puv.png" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You now have a fair understanding of Elastic Beanstalk. In this article, you have learned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What Elastic Beanstalk is&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to integrate Nginx as a reverse proxy for your web server&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to deploy your Django app into a beanstalk docker environment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to integrate an RDS instance into your EB environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to mimic your remote EB environment in your local development environment when using docker-compose.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you found this article useful or learned something new, consider leaving a thumbs up and following me to keep up-to-date with any recent postings!&lt;/p&gt;

&lt;p&gt;Till next time, happy coding!&lt;/p&gt;

&lt;p&gt;Levi&lt;/p&gt;

</description>
      <category>django</category>
      <category>aws</category>
      <category>cloud</category>
      <category>elasticbeanstalk</category>
    </item>
    <item>
      <title>Understanding Amazon SQS with Python and Django - Part 2</title>
      <dc:creator>Ademola Akinsola</dc:creator>
      <pubDate>Sat, 13 Aug 2022 17:58:36 +0000</pubDate>
      <link>https://dev.to/leviackerman/understanding-amazon-sqs-with-python-and-django-part-2-44ig</link>
      <guid>https://dev.to/leviackerman/understanding-amazon-sqs-with-python-and-django-part-2-44ig</guid>
      <description>&lt;p&gt;Hello👋, This is part 2 of my two-part series on Understanding Amazon SQS with Python and Django. This article assumes you have read the first article in the series; You can find that article at &lt;a href="https://dev.to/leviackerman/understanding-amazon-sqs-with-python-and-django-part-1-1hf7"&gt;Understanding Amazon SQS with Python and Django - Part 1&lt;/a&gt;. The corresponding code can be found at &lt;a href="https://github.com/shols232/understanding-amazon-sqs-with-python-and-django"&gt;this github repo&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Learn how to consume messages from Amazon SQS with Python.&lt;/li&gt;
&lt;li&gt;Create an endpoint to keep track of our File Processing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How the file processing service (Consumer) interacts with the queue
&lt;/h2&gt;

&lt;p&gt;There are a few things we should know about interacting with a queue when polling messages.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We can have several consumers polling messages from a single queue at once.&lt;/li&gt;
&lt;li&gt;A consumer must delete a message immediately after processing it, to prevent another consumer from picking the same message up.&lt;/li&gt;
&lt;li&gt;There is something called a &lt;code&gt;VisibilityTimeout&lt;/code&gt;. This is the amount of time a consumer has to process and delete a message it has polled before other consumers can pick that same message up. When a consumer polls a message, that message is hidden from other consumers for the duration set in the &lt;code&gt;VisibilityTimeout&lt;/code&gt; option. The default is 30 seconds.&lt;/li&gt;
&lt;li&gt;You can learn more about the possibilities SQS provides us when polling messages in the &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sqs.html#SQS.Client.receive_message"&gt;Official Docs&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Creating service B - The file processing service
&lt;/h2&gt;

&lt;p&gt;Our service B is going to be a simple Python script that runs forever, continually polling messages from our queue to process. Let's create a new file, &lt;code&gt;process_messages_from_queue.py&lt;/code&gt;. Since this file is not actually part of our "Django app", we will create it outside the Django app directory to simulate it living on a completely different server. Your file tree should now look something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
 amazon-sqs-and-django
    core
    db.sqlite3
    file
    manage.py
    test_file.txt
    requirements.txt
    venv
 process_messages_from_queue.py

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, let's create a separate environment for our script and install everything it would need. In your terminal, run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amazon-sqs-and-django git:(part-2) python3 -m venv serviceb_venv             
amazon-sqs-and-django git:(part-2) source serviceb_venv/bin/activate
(serviceb_venv) amazon-sqs-and-django git:(part-2) pip install boto3

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, inside the &lt;code&gt;process_messages_from_queue.py&lt;/code&gt; file,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import sqlite3
import boto3

session = boto3.Session(
    aws_access_key_id='&amp;lt;AWS_ACCESS_KEY_ID&amp;gt;', # replace with your key
    aws_secret_access_key='&amp;lt;AWS_SECRET_ACCESS_KEY&amp;gt;', # replace with your key
)
sqs = session.resource('sqs', region_name='us-east-1')
connection = sqlite3.connect("path/to/django/app/db.sqlite3")

# Statuses we defined in the File Model of our django app.
PROCESSING_STATUS = 1
PROCESSED_STATUS = 2
FAILED_STATUS = 3

def main():

    # Get the SQS queue we created.
    queue = sqs.get_queue_by_name(QueueName="MyFileProcessingQueue.fifo")

    # This loop runs infinitely since we want to constantly keep checking
    # if new messages have been sent to the queue, and if they have, retrieve them and process them.
    while True:
        cursor = connection.cursor()
        # retrieve some messages from the queue.
        messages = queue.receive_messages()

        for message in messages:
            data = json.loads(message.body)
            file_id = data["file_id"]

            # Update File to indicate that it is now in processing stage.
            cursor.execute("UPDATE file_file SET status = ? WHERE id = ?", (PROCESSING_STATUS, file_id,))
            connection.commit()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We create an &lt;code&gt;sqs&lt;/code&gt; resource to interact with our queue. We also create a connection to our Django app's database, because we need to update the DB after processing each file. Then, we create a &lt;code&gt;main&lt;/code&gt; function that holds all of the important code. Inside the &lt;code&gt;main&lt;/code&gt; function, we:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Retrieve the SQS queue and inside an infinite loop, continuously poll messages in the queue.&lt;/li&gt;
&lt;li&gt;For each message, we get the file_id which was sent from the django app.&lt;/li&gt;
&lt;li&gt;We use the file_id to update the &lt;code&gt;File&lt;/code&gt; object in the database, setting its status to &lt;code&gt;PROCESSING&lt;/code&gt;. This lets the user know their file is no longer &lt;code&gt;PENDING&lt;/code&gt;; work has started on it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Things to note:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Right now, the &lt;code&gt;receive_messages&lt;/code&gt; method only retrieves one message at a time, which helps prevent overloading the server with too many requests at once. You can raise this to a maximum of &lt;code&gt;10&lt;/code&gt; by setting the &lt;code&gt;MaxNumberOfMessages&lt;/code&gt; argument.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the service cannot process a particular file within the default &lt;code&gt;VisibilityTimeout&lt;/code&gt;, you can and should extend the timeout so another consumer does not pick the message up before processing is done. You can do this by calling the &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sqs.html#SQS.Message.change_visibility"&gt;change_visibility&lt;/a&gt; method on a particular message.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
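&lt;p&gt;To make the second note concrete, here is a minimal sketch of deciding when to extend a message's visibility. The helper name and 5-second safety margin are our own, and the boto3 call itself is shown commented out since it needs a live message:&lt;/p&gt;

```python
def should_extend_visibility(elapsed_seconds, visibility_timeout, safety_margin=5):
    """Return True when processing has run long enough that the message
    is about to become visible to other consumers again."""
    return elapsed_seconds >= visibility_timeout - safety_margin

# Inside the processing loop, you might periodically check:
# if should_extend_visibility(time.monotonic() - started_at, 30):
#     message.change_visibility(VisibilityTimeout=120)  # keep it hidden longer
```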

&lt;p&gt;Now, let's actually process the file, and update the database with that processed data. And if the processing fails, we update the database to reflect the failure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os # &amp;lt;----- NEW
...
def main():
    ...
    while True:
        ...
        for message in messages:
            ...
            # &amp;lt;------- ADD THIS --------&amp;gt;
            # Get the File obj from the database through the file_id we got from the SQS message.
            file_object = cursor.execute("SELECT lines_count, file_size, file_path, status \
                FROM file_file WHERE id = ?", (file_id,)).fetchone()

            # file_object order ------ (lines_count, file_size, file_path, status)
            file_path = file_object[2]

            # Checking the state of the File Object before we start processing...
            print(f"FILE ID: {file_id}, LINES_COUNT: {file_object[0]}, FILE_SIZE in bytes: {file_object[1]}, STATUS: {file_object[3]}")

            try:
                lines = None
                with open(file_path, "r", encoding="utf-8") as file:
                    lines = len(file.readlines())
                file_size = os.path.getsize(file_path)

                cursor.execute("UPDATE file_file SET status = ?, lines_count = ?, file_size = ? WHERE id = ?",
                    (PROCESSED_STATUS, lines, file_size, file_id,))
            except Exception:
                cursor.execute("""UPDATE file_file SET status = ? WHERE id = ?""",
                    (FAILED_STATUS, file_id,),
                )

            connection.commit()
            # Delete the message, to avoid duplicate processing
            message.delete()

            # check updated database
            updated_file_object = cursor.execute("SELECT lines_count, file_size, file_path, status \
                FROM file_file WHERE id = ?", (file_id,)).fetchone()
            print(f"FILE ID: {file_id}, LINES_COUNT: {updated_file_object[0]}, FILE_SIZE in bytes: {updated_file_object[1]}, STATUS: {updated_file_object[3]}")
            # &amp;lt;------- END--------&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A quick summary of what is going on here:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We retrieve the file_path from the database, and get the number of lines in the file, as well as its size.&lt;/li&gt;
&lt;li&gt;We save this processed data inside the database.&lt;/li&gt;
&lt;li&gt;In the case an error occurs while processing, we update the database object to a &lt;code&gt;FAILED_STATUS&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Lastly, we delete the message, so no other consumer can pick up the message again.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: The file processing action could be a bit more complex. Usually, it would be working on much larger files that take a lot longer to process, but for example purposes, we keep it simple.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, all that is left is to call our &lt;code&gt;main&lt;/code&gt; function when our script is run.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import json
import sqlite3
import boto3
...
def main():
    ...

if __name__ == "__main__":
    main()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full code would now be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import json
import sqlite3
import boto3

session = boto3.Session(
    aws_access_key_id='&amp;lt;AWS_ACCESS_KEY_ID&amp;gt;', # replace with your key
    aws_secret_access_key='&amp;lt;AWS_SECRET_ACCESS_KEY&amp;gt;', # replace with your key
)
sqs = session.resource('sqs', region_name='us-east-1')
connection = sqlite3.connect("path/to/django/app/db.sqlite3")

# Statuses we defined in the File Model of our django app.
PROCESSING_STATUS = 1
PROCESSED_STATUS = 2
FAILED_STATUS = 3

def main():

    # Get the SQS queue we created.
    queue = sqs.get_queue_by_name(QueueName="MyFileProcessingQueue.fifo")

    # This loop runs infinitely since we want to constantly keep checking
    # if new messages have been sent to the queue, and if they have, retrieve them and process them.
    while True:
        cursor = connection.cursor()
        # retrieve some messages from the queue.
        messages = queue.receive_messages()

        for message in messages:
            data = json.loads(message.body)
            file_id = data["file_id"]

            # Update File to indicate that it is now in processing stage.
            cursor.execute("UPDATE file_file SET status = ? WHERE id = ?", (PROCESSING_STATUS, file_id,))
            connection.commit()

            # Get the File obj from the database through the file_id we got from the SQS message.
            file_object = cursor.execute("SELECT lines_count, file_size, file_path, status \
                FROM file_file WHERE id = ?", (file_id,)).fetchone()

            # file_object order ------ (lines_count, file_size, file_path, status)
            file_path = file_object[2]

            # Checking the state of the File Object before we start processing...
            print(f"FILE ID: {file_id}, LINES_COUNT: {file_object[0]}, FILE_SIZE in bytes: {file_object[1]}, STATUS: {file_object[3]}")

            try:
                lines = None
                with open(file_path, "r", encoding="utf-8") as file:
                    lines = len(file.readlines())
                file_size = os.path.getsize(file_path)

                cursor.execute("UPDATE file_file SET status = ?, lines_count = ?, file_size = ? WHERE id = ?",
                    (PROCESSED_STATUS, lines, file_size, file_id,))
            except Exception:
                cursor.execute("""UPDATE file_file SET status = ? WHERE id = ?""",
                    (FAILED_STATUS, file_id,),
                )

            connection.commit()

            # Delete the message, to avoid duplicate processing
            message.delete()

            # check updated database
            updated_file_object = cursor.execute("SELECT lines_count, file_size, file_path, status \
                FROM file_file WHERE id = ?", (file_id,)).fetchone()
            print(f"FILE ID: {file_id}, LINES_COUNT: {updated_file_object[0]}, FILE_SIZE in bytes: {updated_file_object[1]}, STATUS: {updated_file_object[3]}")

if __name__ == "__main__":
    main()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing the file processing service
&lt;/h2&gt;

&lt;p&gt;Inside your terminal run the &lt;code&gt;process_messages_from_queue.py&lt;/code&gt; script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(serviceb_venv) amazon-sqs-and-django git:(part-2) python3 process_messages_from_queue.py

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, open a second terminal and start the Django app server. Remember, you will need to activate the virtual environment specific to the Django app in that second terminal. Then, let's go to Postman and send a new POST request to process our &lt;code&gt;test_file.txt&lt;/code&gt; file. What you should eventually see in your terminal is something like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jkuJKY7J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1660255862667/7KQZtxBH5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jkuJKY7J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1660255862667/7KQZtxBH5.png" alt="terminal-output-from-consumer.png" width="880" height="96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great! Through the print statements, we can see that the File obj (with ID &lt;code&gt;8&lt;/code&gt; in my case), which was initially empty and in the &lt;code&gt;PROCESSING or 1&lt;/code&gt; state, eventually got populated with the processed data and updated to the &lt;code&gt;PROCESSED or 2&lt;/code&gt; state. This is what SQS offers us: it lets us postpone the actual processing and update the database with the results once the work is done.&lt;/p&gt;
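&lt;p&gt;For readability when printing, the numeric statuses could be mapped back to their names. This is a small convenience helper of our own, assuming &lt;code&gt;PENDING&lt;/code&gt; is &lt;code&gt;0&lt;/code&gt; in the Django model from part 1:&lt;/p&gt;

```python
# Mirrors the status constants defined at the top of the consumer script;
# PENDING = 0 is an assumption based on the Django model.
STATUS_NAMES = {0: "PENDING", 1: "PROCESSING", 2: "PROCESSED", 3: "FAILED"}

def status_name(status):
    """Translate a numeric status into a human-readable label."""
    return STATUS_NAMES.get(status, "UNKNOWN")
```

With this, the print statements could show `STATUS: PROCESSED` instead of `STATUS: 2`.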

&lt;h3&gt;
  
  
  One Producer, Multiple Consumers
&lt;/h3&gt;

&lt;p&gt;At this point, we already know SQS limits the rate at which requests are processed, which in turn helps avoid overloading a server. But if we needed to process more messages at a quicker rate, we could add more consumer instances to split the load, reducing the total processing time. It would look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s-LPCYJA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1660246024823/LYf-KXEk4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s-LPCYJA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1660246024823/LYf-KXEk4.png" alt="consumers-poll-from-same-queue.png" width="880" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating an endpoint to check file processing result/status
&lt;/h2&gt;

&lt;p&gt;We received the message, processed the file, and saved the data. We still need to let the user view the result of the processing: whether it failed or succeeded. Update the &lt;code&gt;FileView&lt;/code&gt; in the views.py file of our Django app like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
from rest_framework.response import Response
from .models import File
from django.shortcuts import get_object_or_404 # &amp;lt;----- NEW

class FileView(APIView):
    def post(self, request):
        ...

    # &amp;lt;----- ADD THIS -----&amp;gt;
    def get(self, request, pk: int):
        file = get_object_or_404(File, pk=pk)
        return Response({
                'lines_count': file.lines_count,
                'file_size': file.file_size,
                'status': file.status
            })
    # &amp;lt;-------- END --------&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next up, update our project &lt;code&gt;urls.py&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.urls import path
from .views import FileView

urlpatterns = [
    path('process', FileView.as_view()),
    path('check-status/&amp;lt;int:pk&amp;gt;', FileView.as_view()), # &amp;lt;---- NEW
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, inside Postman, we can query our new &lt;code&gt;check-status&lt;/code&gt; endpoint. We replace &lt;code&gt;pk&lt;/code&gt; with our file object ID; in my case, that would be &lt;code&gt;8&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0xp2oypR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1660256469535/rOlDR0_Uz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0xp2oypR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1660256469535/rOlDR0_Uz.png" alt="postman-check-status-test-endpoint.png" width="880" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, a user can always confirm the status of their file processing request, which could be &lt;code&gt;PENDING&lt;/code&gt;, &lt;code&gt;PROCESSING&lt;/code&gt;, &lt;code&gt;FAILED&lt;/code&gt;, or &lt;code&gt;PROCESSED&lt;/code&gt; (In which case, the processed data is also shown).&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And there you have it! You now have a fair understanding of SQS and how to integrate it into an application. Thanks for reading and coding along in this two-part series on Understanding Amazon SQS with Python and Django. If you missed it, here's part 1: &lt;a href="https://dev.to/leviackerman/understanding-amazon-sqs-with-python-and-django-part-1-1hf7"&gt;Understanding Amazon SQS with Python and Django - Part 1&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you found this article useful or learned something new, consider leaving a thumbs up and following me to keep up-to-date with any recent postings!&lt;/p&gt;

&lt;p&gt;Till next time, happy coding!&lt;/p&gt;

&lt;p&gt;Levi&lt;/p&gt;

</description>
      <category>sqs</category>
      <category>python</category>
      <category>django</category>
      <category>aws</category>
    </item>
    <item>
      <title>Understanding Amazon SQS with Python and Django - Part 1</title>
      <dc:creator>Ademola Akinsola</dc:creator>
      <pubDate>Sun, 24 Jul 2022 15:18:00 +0000</pubDate>
      <link>https://dev.to/leviackerman/understanding-amazon-sqs-with-python-and-django-part-1-1hf7</link>
      <guid>https://dev.to/leviackerman/understanding-amazon-sqs-with-python-and-django-part-1-1hf7</guid>
      <description>&lt;h1&gt;
  
  
  Objectives
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Introduce Amazon SQS and Queues.&lt;/li&gt;
&lt;li&gt;Look into the different types of queues and their differences.&lt;/li&gt;
&lt;li&gt;Understand how Amazon SQS would be used in a decoupled application or service.&lt;/li&gt;
&lt;li&gt;Build a simple Django app integrating Amazon SQS.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Good knowledge of python.&lt;/li&gt;
&lt;li&gt;Knowledge of Django Rest Framework.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Amazon SQS?
&lt;/h2&gt;

&lt;p&gt;Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Simply put, It is a service provided by AWS that uses the queue data structure underneath, to store messages it receives until they are ready to be retrieved.&lt;/p&gt;

&lt;h3&gt;
  
  
  What exactly is a queue?
&lt;/h3&gt;

&lt;p&gt;A queue is a data structure that takes in and returns data in F.I.F.O. (First In, First Out) order. It's just like a real-world queue: the first person to enter the line is also the first one to leave it.&lt;/p&gt;
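&lt;p&gt;The FIFO behaviour is easy to demonstrate with Python's own &lt;code&gt;collections.deque&lt;/code&gt;:&lt;/p&gt;

```python
from collections import deque

queue = deque()
queue.append("first")   # first person joins the line
queue.append("second")
queue.append("third")

# popleft() removes from the front, so items leave in arrival order.
assert queue.popleft() == "first"
assert queue.popleft() == "second"
```

SQS applies this same principle to messages exchanged between services, with the queue living on AWS infrastructure instead of in your process's memory.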

&lt;h3&gt;
  
  
  SQS Queues
&lt;/h3&gt;

&lt;p&gt;SQS queues are queues we send messages to (JSON, XML, etc.) and later &lt;a href="https://www.educative.io/answers/what-is-http-long-polling" rel="noopener noreferrer"&gt;poll&lt;/a&gt; to retrieve those messages. The service sending the message is called a &lt;strong&gt;Producer&lt;/strong&gt; and the service polling the message is called a &lt;strong&gt;Consumer&lt;/strong&gt;. The size limit of a single message is &lt;strong&gt;256KB&lt;/strong&gt;.&lt;/p&gt;
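&lt;p&gt;Since the 256KB limit applies to the message body, a producer can guard against oversized payloads before sending. This is a small check of our own, not part of the SQS API:&lt;/p&gt;

```python
SQS_MAX_MESSAGE_BYTES = 256 * 1024  # 256KB limit on a single SQS message body

def exceeds_sqs_limit(body: str) -> bool:
    """SQS counts the UTF-8 encoded size of the message body."""
    return len(body.encode("utf-8")) > SQS_MAX_MESSAGE_BYTES
```

In practice, keeping the body to a small identifier (as we do later, sending only a file ID) makes this limit a non-issue.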

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658353989323%2FjAqNHqiJc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658353989323%2FjAqNHqiJc.png" alt="aws-sqs-what-is-a-queue.png"&gt;&lt;/a&gt;Image from &lt;a href="https://dzone.com" rel="noopener noreferrer"&gt;dzone.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon SQS provides us with two types of message queues:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Standard Queue&lt;/strong&gt; : These queues provide at-least-once delivery, which means every message is &lt;strong&gt;delivered at least once&lt;/strong&gt;; however, some messages may be delivered more than once. They also provide &lt;strong&gt;best-effort ordering&lt;/strong&gt;, which means it's possible messages are not delivered in the order they were received.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FIFO Queue&lt;/strong&gt; (First-In-First-Out): These queues are designed for &lt;strong&gt;exactly-once processing&lt;/strong&gt;, meaning every message is processed exactly once. A FIFO queue also guarantees &lt;strong&gt;ordered delivery&lt;/strong&gt;, i.e., the FIRST message IN is the FIRST message OUT (FIFO).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Looking at the above explanations, we might ask ourselves: why would we ever want to use a Standard queue? Well,&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Standard queues allow a &lt;strong&gt;nearly unlimited number of transactions per second&lt;/strong&gt;, while FIFO queues allow a maximum of &lt;strong&gt;300 transactions per second&lt;/strong&gt; without batching, and &lt;strong&gt;3,000 with batching&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The Standard queue is currently supported in all AWS regions, while the FIFO queue, at the time of writing, is not.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Both are suited to different use cases. A FIFO queue is well suited to applications that depend heavily on message order and have a low tolerance for duplicated messages. A Standard queue is suited to applications that can tolerate duplicate and out-of-order messages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of Amazon SQS
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Increased performance&lt;/strong&gt; - SQS allows for asynchronous communication between different parts of an application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Increased reliability&lt;/strong&gt; - If the consumer service fails or crashes, the messages remain in the queue, so it can pick them up again when it is back online.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; - A 'standard SQS' queue allows close to an unlimited number of message transactions per second. This makes it easier for your service to scale from thousands of requests per second to millions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Buffer requests&lt;/strong&gt; - SQS helps prevent a service from being overloaded by sudden bursts of requests. With SQS, the service chooses how many messages it wants to work on at any given time, and SQS never returns more than that.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Example use-case for Amazon SQS
&lt;/h1&gt;

&lt;p&gt;Imagine you have a service that currently:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Takes in URL paths to large files from a user.&lt;/li&gt;
&lt;li&gt;Does some time-consuming processing on the file, and saves the result into the database. &lt;/li&gt;
&lt;li&gt;And eventually returns the result to the user.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658489216541%2FdfNkWYVB0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658489216541%2FdfNkWYVB0.png" alt="coupled-application.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What are the possible problems here?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If service B gets overloaded with a high amount of files to process, it could slow the server down, eventually leading to the server crashing.&lt;/li&gt;
&lt;li&gt;If the entire server ever happens to go down, it would lead to failure for all users currently using the service to process files.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;How can Amazon SQS be used to solve this? Using SQS, we can take our file processing function out and place it on a new, dedicated server, hence "decoupling" service A from the file processing function (service B).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658591879308%2FHa4fF1cIM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658591879308%2FHa4fF1cIM.png" alt="decoupled-application-using-amazon-sqs.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The flow of the service using SQS could now be as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user sends a request to our server 1 with the file path for processing.&lt;/li&gt;
&lt;li&gt;Service A saves the name of the file and the file path to the database with a status field of "Pending" and gets the ID of the newly saved object.&lt;/li&gt;
&lt;li&gt;Service A sends a message to the queue containing the ID of the saved object before returning a response to the user indicating the file is now in the queue pending processing.&lt;/li&gt;
&lt;li&gt;Service B polls a set number of messages from the queue to process at its own pace (this prevents the server from getting overloaded and crashing), then extracts the IDs from those messages.&lt;/li&gt;
&lt;li&gt;Service B uses each ID to retrieve the saved object from the database, gets the file path, and starts processing the file.&lt;/li&gt;
&lt;li&gt;Once it succeeds or fails, it updates the stored object to an appropriate status (Succeeded/Failed).&lt;/li&gt;
&lt;/ol&gt;
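&lt;p&gt;Steps 2 and 3 on the producer side can be sketched as follows. The &lt;code&gt;MessageGroupId&lt;/code&gt; value and helper name are illustrative (not from the article), and the actual boto3 send is commented out since it needs live credentials:&lt;/p&gt;

```python
import json

def build_file_message(file_id):
    """Build the SQS message the producer sends: the body carries only the
    database ID, so the consumer looks everything else up itself."""
    return {
        "MessageBody": json.dumps({"file_id": file_id}),
        # FIFO queues require a MessageGroupId; this value is illustrative.
        "MessageGroupId": "file-processing",
    }

# With a live boto3 session, sending would look roughly like:
# queue = sqs.get_queue_by_name(QueueName="MyFileProcessingQueue.fifo")
# queue.send_message(**build_file_message(saved_file.id))
```

Keeping the body down to an ID keeps messages tiny and avoids duplicating file data between the services.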

&lt;p&gt;Server 1 also has an endpoint that users can utilize to check up on the status of their file processing request and its data if it has already been processed successfully.&lt;/p&gt;

&lt;h1&gt;
  
  
  Project Setup
&lt;/h1&gt;

&lt;p&gt;We will be building out the file processing service example. For this, we need to create an &lt;a href="https://aws.amazon.com/console/" rel="noopener noreferrer"&gt;AWS account&lt;/a&gt; and get our &lt;a href="https://docs.aws.amazon.com/powershell/latest/userguide/pstools-appendix-sign-up.html" rel="noopener noreferrer"&gt;AWS access and secret access keys&lt;/a&gt; from the console. Once you've gotten your keys, store them safely.&lt;/p&gt;

&lt;p&gt;Let's also create our first queue. Using the search bar, navigate to Amazon SQS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1657793723435%2FPULqeN5hh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1657793723435%2FPULqeN5hh.png" alt="aws-console-searchbar.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now on the SQS page, click on Create queue. We are taken to the SQS Queue creation page and we see some options presented to us.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658582233331%2FAN2VZmWdT.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658582233331%2FAN2VZmWdT.png" alt="aws-sqs-queue-create-page.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this tutorial, choose the FIFO queue type and enter a name for your queue. Note that FIFO queue names must end with &lt;code&gt;.fifo&lt;/code&gt;. Find the &lt;code&gt;Content-based deduplication&lt;/code&gt; option and turn it on (this prevents consumers from picking up a possible duplicate message from the queue). Leave every other option at its default and click &lt;code&gt;Create queue&lt;/code&gt; at the bottom. Copy the name of the queue (in my case, &lt;code&gt;MyFileProcessingQueue.fifo&lt;/code&gt;) and store it somewhere; we will be using it later. Also, take note of the region where your queue is created; you can view it at the top right of the console. For me, that is &lt;code&gt;us-east-1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658347620467%2FjMQ3Quns3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658347620467%2FjMQ3Quns3.png" alt="aws-region-name-dropdown.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now for the Django part: let's begin by creating a new directory and setting up a new project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir amazon-sqs-django &amp;amp;&amp;amp; cd amazon-sqs-django
$ python3.8 -m venv venv # --&amp;gt; Create virtual environment.
$ source venv/bin/activate # --&amp;gt; Activate virtual environment.

(venv)$ pip install django==4.0.5 djangorestframework==3.13.1
(venv)$ django-admin startproject core .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will also install &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/index.html" rel="noopener noreferrer"&gt;boto3&lt;/a&gt; to help us communicate more easily with our SQS Queue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(venv)$ pip install boto3==1.24.20

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, let's create a Django app called &lt;code&gt;file&lt;/code&gt;, which will hold the models and logic for the entire service A.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(venv)$ python manage.py startapp file

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Register the app in &lt;code&gt;core/settings.py&lt;/code&gt; inside &lt;code&gt;INSTALLED_APPS&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# *core/settings.py*

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    'file', # new
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Creating our File model
&lt;/h1&gt;

&lt;p&gt;Inside &lt;code&gt;file/models.py&lt;/code&gt;, add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# *file/models.py*

...
class File(models.Model):
    class FileStatus(models.IntegerChoices):
        PENDING = 0
        PROCESSING = 1
        PROCESSED = 2
        FAILED = 3

    lines_count = models.IntegerField(null=True)
    file_size = models.IntegerField(null=True)
    file_path = models.CharField(max_length=120)
    status = models.IntegerField(choices=FileStatus.choices, default=FileStatus.PENDING)
...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This model will store some basic information that we "process" from the files. The status indicates what stage of processing the file is in. We have four status types:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;PENDING&lt;/strong&gt; : This means the file has not yet been picked up for processing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PROCESSING&lt;/strong&gt; : The file has been picked up, and work is being done on it currently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PROCESSED&lt;/strong&gt; : The file was successfully processed; no issues were encountered.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FAILED&lt;/strong&gt; : Something went wrong somewhere; the file could not be processed correctly.&lt;/li&gt;
&lt;/ol&gt;
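&lt;p&gt;These statuses form a simple, forward-only lifecycle: &lt;code&gt;PENDING&lt;/code&gt; moves to &lt;code&gt;PROCESSING&lt;/code&gt;, which ends in either &lt;code&gt;PROCESSED&lt;/code&gt; or &lt;code&gt;FAILED&lt;/code&gt;. A tiny illustrative helper (not part of the app's code) makes the allowed transitions explicit:&lt;br&gt;
&lt;/p&gt;

```python
# Mirrors File.FileStatus: PENDING=0, PROCESSING=1, PROCESSED=2, FAILED=3
ALLOWED_TRANSITIONS = {
    0: {1},      # PENDING -> PROCESSING
    1: {2, 3},   # PROCESSING -> PROCESSED or FAILED
    2: set(),    # PROCESSED is terminal
    3: set(),    # FAILED is terminal
}

def can_transition(current: int, new: int) -> bool:
    """Return True if a File may legally move from `current` to `new`."""
    return new in ALLOWED_TRANSITIONS.get(current, set())
```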

&lt;h1&gt;
  
  
  Creating the view
&lt;/h1&gt;

&lt;p&gt;Inside &lt;code&gt;file/views.py&lt;/code&gt;, let's add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

from rest_framework.views import APIView
from .models import File

session = boto3.Session(
    aws_access_key_id='&amp;lt;AWS_ACCESS_KEY_ID&amp;gt;', # replace with your key
    aws_secret_access_key='&amp;lt;AWS_SECRET_ACCESS_KEY&amp;gt;', # replace with your key
)
sqs = session.resource('sqs', region_name='&amp;lt;AWS_REGION_NAME&amp;gt;') # replace with your region name

class FileView(APIView):
    """
    Process file and saves its data.
    :param file_path: path to file on a remote or local server
    :return: status
    """
    def post(self, request):
        file_path = request.data.get('file_path')
        file_obj = File.objects.create(file_path=file_path) # save file unprocessed.

        # Get our recently created queue.
        queue = sqs.get_queue_by_name(QueueName="MyFileProcessingQueue.fifo")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using &lt;code&gt;boto3&lt;/code&gt;, we create a session and an &lt;code&gt;sqs&lt;/code&gt; resource object for interacting with Amazon SQS. Then, inside the view's post method, we get the file path from the user and save it to the database. The saved File object has a default status of &lt;code&gt;PENDING&lt;/code&gt;, which we previously set in the model class. We also fetch our queue by its name, which in my case is &lt;code&gt;MyFileProcessingQueue.fifo&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To send a message to our queue, let's update the view:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json # new
import boto3

from rest_framework.views import APIView
from rest_framework.response import Response # new
from .models import File

.....

class FileView(APIView):

    def post(self, request):
        .....
        message_body = {
            'file_id': str(file_obj.id)
        }

        # Send a message to the queue, so we can process this particular file eventually.
        response = queue.send_message(
            MessageBody=json.dumps(message_body),
            MessageGroupId='messageGroupId'
        )

        # Let the user know the file has been sent to the queue and is PENDING processing.
        return Response({"message": "File has been scheduled for processing..."}, status=200)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here,&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We send an SQS message containing our saved File object's ID, so we can later use it to retrieve the object from the database and get the file path to be processed. The &lt;code&gt;MessageGroupId&lt;/code&gt; ensures that all messages with the same &lt;code&gt;MessageGroupId&lt;/code&gt; are processed in FIFO order. Usually, this would be set to something unique like the user ID or session ID, but for simplicity, we use the string 'messageGroupId'. You can learn more about the possible parameters in the &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sqs.html#SQS.Queue.send_message" rel="noopener noreferrer"&gt;send_message&lt;/a&gt; documentation.&lt;/li&gt;
&lt;li&gt;After sending the message to our SQS queue, we immediately return a response to the user letting them know their file is now in the queue pending processing, so they can go about their activities with our service without concern.&lt;/li&gt;
&lt;/ol&gt;
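&lt;p&gt;For instance, if you wanted per-user ordering rather than one global group, the message could be built with a helper like the one below. This is a hypothetical sketch (the per-user grouping and the &lt;code&gt;build_sqs_message&lt;/code&gt; name are assumptions, not what the tutorial's code does):&lt;br&gt;
&lt;/p&gt;

```python
import json

def build_sqs_message(file_id: int, user_id: int) -> dict:
    # Messages sharing a MessageGroupId are delivered in FIFO order.
    # Grouping per user keeps one user's files ordered without
    # serializing every user behind a single group.
    return {
        "MessageBody": json.dumps({"file_id": str(file_id)}),
        "MessageGroupId": f"user-{user_id}",
    }

# In the view, this would be used as:
# queue.send_message(**build_sqs_message(file_obj.id, request.user.id))
```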

&lt;p&gt;Your view should now look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# *file/views.py*
import json
import boto3

from rest_framework.views import APIView
from rest_framework.response import Response
from .models import File

session = boto3.Session(
    aws_access_key_id='&amp;lt;AWS_ACCESS_KEY_ID&amp;gt;', # replace with your key
    aws_secret_access_key='&amp;lt;AWS_SECRET_ACCESS_KEY&amp;gt;', # replace with your key
)
sqs = session.resource('sqs', region_name='&amp;lt;AWS_REGION_NAME&amp;gt;') # replace with your region name

class FileView(APIView):
    """
    Process file and saves its data.
    :param file_path: path to file.
    :return: status
    """
    def post(self, request):
        file_path = request.data.get('file_path')
        file_obj = File.objects.create(file_path=file_path) # save file unprocessed.

        # Get our recently created queue.
        queue = sqs.get_queue_by_name(QueueName="MyFileProcessingQueue.fifo")
        message_body = {
            'file_id': str(file_obj.id)
        }

        # Send a message to the queue, so we can process this particular file eventually.
        response = queue.send_message(
            MessageBody=json.dumps(message_body),
            MessageGroupId='messageGroupId'
        )

        # Let the user know the file has been sent to the queue and is PENDING processing.
        return Response({"message": "File has been scheduled for processing..."}, status=200)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Testing what we have so far
&lt;/h1&gt;

&lt;p&gt;First, let's connect the &lt;code&gt;file&lt;/code&gt; app URLs to our project URLs. Inside the &lt;code&gt;file&lt;/code&gt; folder, create a new file &lt;code&gt;urls.py&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# file/urls.py

from django.urls import path
from .views import FileView

urlpatterns = [
    path('process', FileView.as_view())
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now edit the &lt;code&gt;core/urls.py&lt;/code&gt; file,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.contrib import admin
from django.urls import path, include # new

urlpatterns = [
    path('admin/', admin.site.urls),
    path('files/', include('file.urls')), # new
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, make and run the migrations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(venv)$ python manage.py makemigrations
(venv)$ python manage.py migrate

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now start the development server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(venv)$ python manage.py runserver

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For easy testing, we will create a file named &lt;code&gt;test_file.txt&lt;/code&gt; inside the root folder and fill it with 10 or more lines just so it's not empty. This will be our file to be processed. Your folder structure should now look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amazon-sqs-and-django
 core
 db.sqlite3
 file
 manage.py
 test_file.txt
 venv

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
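&lt;p&gt;If you'd rather generate the test file than type it out, a few lines of Python will do (the line contents are arbitrary; anything non-empty works):&lt;br&gt;
&lt;/p&gt;

```python
# Create test_file.txt in the project root with ten numbered lines.
with open("test_file.txt", "w") as f:
    for i in range(1, 11):
        f.write(f"This is line {i}\n")
```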



&lt;p&gt;Now, launch Postman and test the &lt;code&gt;http://127.0.0.1:8000/files/process&lt;/code&gt; endpoint with the absolute file path to &lt;code&gt;test_file.txt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658624011303%2FEgyKvg9vF.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658624011303%2FEgyKvg9vF.png" alt="test-django-endpoint-with-postman.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let's head back over to the &lt;a href="https://console.aws.amazon.com/sqs/v2/home" rel="noopener noreferrer"&gt;queues&lt;/a&gt; page on the AWS console and click our recently created queue - MyFileProcessingQueue.fifo. Then, on the top right of the new page, click &lt;code&gt;Send and Receive messages&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658589614083%2F8thw8Xatw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658589614083%2F8thw8Xatw.png" alt="aws-sqs-queue-page.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, click &lt;code&gt;Poll messages&lt;/code&gt; and then click the first message that comes in; you should see the data we sent from the Django app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658594209572%2FHoblYqVfy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658594209572%2FHoblYqVfy.png" alt="aws-sqs-queue-message.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we know the first part of our service works. We are now able to send a message from our Django app into an SQS queue.&lt;/p&gt;
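&lt;p&gt;As a preview of the consuming side (built out in the next article), here is a hedged sketch of polling the queue with &lt;code&gt;boto3&lt;/code&gt; and extracting the file ID we just sent. The &lt;code&gt;parse_file_id&lt;/code&gt; helper name is illustrative, and the polling loop assumes configured AWS credentials:&lt;br&gt;
&lt;/p&gt;

```python
import json

def parse_file_id(message_body: str) -> str:
    # The producer sends '{"file_id": "<id>"}' as a JSON string.
    return json.loads(message_body)["file_id"]

if __name__ == "__main__":
    import boto3  # requires configured AWS credentials

    sqs = boto3.resource("sqs", region_name="us-east-1")  # your region here
    queue = sqs.get_queue_by_name(QueueName="MyFileProcessingQueue.fifo")
    for message in queue.receive_messages(MaxNumberOfMessages=1, WaitTimeSeconds=10):
        file_id = parse_file_id(message.body)
        print("Would process file", file_id)
        message.delete()  # remove the message so it isn't redelivered
```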

&lt;h1&gt;
  
  
  Conclusion and Next Steps
&lt;/h1&gt;

&lt;p&gt;In this first part of this AWS SQS, Python, and Django series, you learned the basics of queues in general and of Amazon SQS, its advantages, and some of its use cases. We learned how to create a queue and built a Django app that sends messages to that queue.&lt;/p&gt;

&lt;p&gt;In the next article, we will&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build out &lt;code&gt;service B&lt;/code&gt;, the file processing service that will consume messages from the queue, process the file, and update the database.&lt;/li&gt;
&lt;li&gt;Provide an endpoint for the user to check on the result of the file processing at any given point in time.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you found this article useful or learned something new, consider leaving a thumbs up!&lt;/p&gt;

&lt;p&gt;Till next time, happy coding!&lt;/p&gt;

&lt;p&gt;Levi&lt;/p&gt;

</description>
      <category>aws</category>
      <category>python</category>
      <category>sqs</category>
      <category>django</category>
    </item>
  </channel>
</rss>
