<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: felix715</title>
    <description>The latest articles on DEV Community by felix715 (@felix715).</description>
    <link>https://dev.to/felix715</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F703025%2F2ed420a5-4a30-4ea6-ab40-38f6953a197e.png</url>
      <title>DEV Community: felix715</title>
      <link>https://dev.to/felix715</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/felix715"/>
    <language>en</language>
    <item>
      <title>DATA CLEANING IN SPSS</title>
      <dc:creator>felix715</dc:creator>
      <pubDate>Tue, 28 Mar 2023 11:35:45 +0000</pubDate>
      <link>https://dev.to/felix715/data-cleaning-in-spss-28ge</link>
      <guid>https://dev.to/felix715/data-cleaning-in-spss-28ge</guid>
      <description>&lt;h2&gt;
  
  
  Data Cleaning in SPSS
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Data Cleaning is the process of preparing data for analysis by removing or modifying data that is incorrect, missing, irrelevant, duplicated, or improperly formatted.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why is it important to clean the data?
&lt;/h2&gt;

&lt;p&gt;Such data is usually not necessary or helpful when analyzing a dataset, because it can hinder the process or produce inaccurate results.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps of Data Cleaning
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Missing Value Analysis&lt;/li&gt;
&lt;li&gt;Out-of-Range Values&lt;/li&gt;
&lt;li&gt;Detecting and Removing Outliers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Or follow the brief walkthrough below:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;SPSS offers a variety of tools to clean and prepare data for analysis. Here are some steps you can follow to perform data cleaning in SPSS:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Import the data into SPSS:&lt;/strong&gt; The first step is to import the data into SPSS by selecting File &amp;gt; Open &amp;gt; Data. Ensure that the data is in a suitable format, such as CSV or Excel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check for missing values:&lt;/strong&gt; Use the Frequencies procedure to check for missing values in your dataset. Missing values can be indicated by a blank space or some other symbol in your data. You can replace missing values with the mean, median, or mode of the variable, or delete cases with missing data.&lt;/p&gt;
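In SPSS this is done through menus, but since this post is also tagged python, the mean-imputation idea can be sketched in plain Python (the column values here are made up for illustration):

```python
from statistics import mean

# A column with missing entries represented as None.
ages = [23, None, 31, 27, None, 45]

# Mean imputation: replace each missing value with the mean of the observed ones.
observed = [a for a in ages if a is not None]
col_mean = mean(observed)
cleaned = [a if a is not None else col_mean for a in ages]
print(cleaned)  # [23, 31.5, 31, 27, 31.5, 45]
```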

&lt;p&gt;&lt;strong&gt;Identify and handle outliers:&lt;/strong&gt; Use the Descriptive Statistics procedure to identify outliers in your data. Outliers are extreme values that can skew your results. You can remove outliers by deleting the cases or transforming the data.&lt;/p&gt;
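A common rule of thumb (one of several; the cutoff of two standard deviations here is an illustrative choice, not an SPSS default) is to flag points far from the mean:

```python
from statistics import mean, stdev

values = [10, 12, 11, 13, 12, 95]  # 95 looks like an outlier

m, s = mean(values), stdev(values)
# Keep only points within 2 standard deviations of the mean.
kept = [v for v in values if abs(v - m) <= 2 * s]
print(kept)  # [10, 12, 11, 13, 12]
```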

&lt;p&gt;&lt;strong&gt;Check for duplicate records:&lt;/strong&gt; Use the Data &amp;gt; Select Cases &amp;gt; Duplicate Cases procedure to identify any duplicate records in your dataset. You can remove duplicates by deleting the cases.&lt;/p&gt;
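The underlying logic of duplicate removal, keeping only the first occurrence of each record, can be sketched like this (the records are invented for illustration):

```python
# Each record is (id, name); the second (1, "Ann") is an exact duplicate.
records = [(1, "Ann"), (2, "Ben"), (1, "Ann"), (3, "Cara")]

seen, unique = set(), []
for rec in records:
    if rec not in seen:   # keep only the first occurrence of each record
        seen.add(rec)
        unique.append(rec)
print(unique)  # [(1, 'Ann'), (2, 'Ben'), (3, 'Cara')]
```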

&lt;p&gt;&lt;strong&gt;Recode variables:&lt;/strong&gt; Use the Transform &amp;gt; Recode into Different Variables procedure to recode variables as necessary. For example, you may need to convert text data to numeric data or group data into categories.&lt;/p&gt;
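Recoding text data into numeric codes is just a mapping; a minimal sketch (the categories and codes are illustrative):

```python
# Recode a text variable into numeric codes via a lookup table.
responses = ["low", "high", "medium", "low"]
codes = {"low": 1, "medium": 2, "high": 3}
recoded = [codes[r] for r in responses]
print(recoded)  # [1, 3, 2, 1]
```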

&lt;p&gt;&lt;strong&gt;Rename variables:&lt;/strong&gt; Use the Variable View window to rename variables to more meaningful names. This will make it easier to understand your data and create tables and charts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check for data entry errors:&lt;/strong&gt; Use the Data &amp;gt; Validate &amp;gt; Data Entry procedure to check for data entry errors, such as incorrect values or inconsistent responses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Save the cleaned dataset:&lt;/strong&gt; Once you have completed the data cleaning process, save the cleaned dataset by selecting File &amp;gt; Save As. Give the dataset a new name to differentiate it from the original dataset.&lt;/p&gt;

</description>
      <category>python</category>
      <category>spss</category>
      <category>data</category>
      <category>datascience</category>
    </item>
    <item>
      <title>NEURAL NETWORK</title>
      <dc:creator>felix715</dc:creator>
      <pubDate>Thu, 19 Jan 2023 10:33:19 +0000</pubDate>
      <link>https://dev.to/felix715/neural-network-3830</link>
      <guid>https://dev.to/felix715/neural-network-3830</guid>
      <description>&lt;h2&gt;
  
  
  Neural Network Introduction
&lt;/h2&gt;

&lt;p&gt;In this article, we will be talking about neural networks, the functional unit of deep learning: a neural network accepts input and produces an output. Deep learning uses Artificial Neural Networks (ANNs), which imitate the human brain’s behavior to solve complex data problems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Application
&lt;/h3&gt;

&lt;p&gt;These technologies solve problems in image recognition, speech recognition, pattern recognition, and natural language processing (NLP), to name a few.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Neural Network Overview
&lt;/h3&gt;

&lt;p&gt;Have you ever wondered how your brain recognizes images? No matter what or how the image looks, the brain can tell that this is an image of a cat and not a dog. The brain relates to the best possible pattern and concludes the result. The example below will help you understand neural networks:&lt;br&gt;
Consider a scenario where you have a set of labeled images and have to classify each one as a dog or a cat. To do that, you can create a neural network that recognizes images of cats and dogs. The network starts by processing the input. Each image is made of pixels. For example, an image of 20 x 20 pixels has 400 pixels in total; those 400 pixels would make up the first layer of our neural network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95njuhuq47y0k6hsri3o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95njuhuq47y0k6hsri3o.png" alt="Image description" width="800" height="676"&gt;&lt;/a&gt;&lt;br&gt;
A neural network is made of artificial neurons that receive and process input data. Data is passed through the input layer, the hidden layers, and the output layer. A neural network process starts when input data is fed to it; the data is then processed through its layers to produce the desired output. A neural network learns from structured data and exhibits the output. Learning within neural networks falls into three categories:&lt;br&gt;
Supervised Learning - with the help of labeled data, inputs and outputs are fed to the algorithms, which then predict the desired result after being trained on how to interpret the data.&lt;br&gt;
Unsupervised Learning - the ANN learns with no human intervention. There is no labeled data, and output is determined according to patterns identified within the input data.&lt;br&gt;
Reinforcement Learning - the network learns depending on the feedback you give it.&lt;br&gt;
The essential building block of a neural network is the perceptron, or neuron. It uses the supervised learning method to learn and classify data. We will learn more about the perceptron later in this article.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Neural Networks work
&lt;/h3&gt;

&lt;p&gt;Neural networks are complex systems built from artificial neurons.&lt;br&gt;
An artificial neuron, or perceptron, consists of:&lt;br&gt;
Input&lt;br&gt;
Weight&lt;br&gt;
Bias&lt;br&gt;
Activation Function&lt;br&gt;
Output&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pphvdumvlmqo1awws7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pphvdumvlmqo1awws7s.png" alt="Image description" width="600" height="300"&gt;&lt;/a&gt;&lt;br&gt;
The neurons receive many inputs and process a single output. Neural networks are composed of layers of neurons. These layers consist of the following:&lt;br&gt;
Input layer&lt;br&gt;
Multiple hidden layers&lt;br&gt;
Output layer&lt;br&gt;
The input layer receives data represented by a numeric value. Hidden layers perform the most computations required by the network. Finally, the output layer predicts the output.&lt;br&gt;
In a neural network, each layer is made of neurons, and neurons in one layer feed the next. Once the input layer receives data, it is redirected to the hidden layer. Each input is assigned a weight.&lt;br&gt;
A weight is a value in a neural network that scales input data within the network’s hidden layers: the input layer takes the input data and multiplies it by the weight value.&lt;br&gt;
This initiates a value for the first hidden layer. The hidden layers transform the input data and pass it on to the next layer. The output layer produces the desired output.&lt;br&gt;
The inputs and weights are multiplied, and their sum is sent to the neurons in the hidden layer. A bias is applied to each neuron. Each neuron adds the inputs it receives to get the sum, and this value then passes through the activation function.&lt;br&gt;
The activation function’s outcome decides whether a neuron is activated. An activated neuron transfers information to the following layers. In this way, the data propagates through the network until it reaches the output layer.&lt;br&gt;
Another name for this is forward propagation. Feed-forward propagation is the process of inputting data at an input node and getting the output through the output node. (We’ll discuss feed-forward propagation a bit more in the section below.)&lt;br&gt;
Feed-forward propagation takes place when the hidden layer accepts the input data, processes it with the activation function, and passes it to the output. The neuron in the output layer with the highest probability then projects the result.&lt;br&gt;
If the output is wrong, backpropagation takes place. While designing a neural network, weights are initialized for each input. Backpropagation re-adjusts each input’s weights to minimize the error, resulting in a more accurate output.&lt;/p&gt;
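The forward pass and one backpropagation update can be sketched for a single neuron in plain Python (the input values, weights, learning rate, and target below are made up for illustration; real networks have many neurons per layer):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Forward propagation for one neuron: inputs are multiplied by weights,
# summed together with a bias, and passed through the activation function.
x = [0.5, 0.8]      # inputs
w = [0.4, -0.2]     # weights
b = 0.1             # bias

z = sum(xi * wi for xi, wi in zip(x, w)) + b
y = sigmoid(z)      # the neuron's output, between 0 and 1

# Backpropagation (squared error against target t): nudge each weight
# against the gradient of the error to reduce it on the next pass.
t, lr = 1.0, 0.5
grad_z = (y - t) * y * (1 - y)            # dError/dz for sigmoid + squared error
w = [wi - lr * grad_z * xi for wi, xi in zip(w, x)]
b = b - lr * grad_z
```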

&lt;h3&gt;
  
  
  Types of Neural Networks
&lt;/h3&gt;

&lt;p&gt;Neural networks are classified by their architecture and the mathematical operations they use to determine the output. Below we will go over the different types of neural networks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Perceptron
&lt;/h3&gt;

&lt;p&gt;The Perceptron model (a single-layer neural network) was proposed by Frank Rosenblatt, modeled after how the human brain functions, and was later analyzed by Minsky and Papert (&lt;a href="https://www.researchgate.net/publication/3081582_Review_of_'Perceptrons_An_Introduction_to_Computational_Geometry'_Minsky_M_and_Papert_S_1969" rel="noopener noreferrer"&gt;https://www.researchgate.net/publication/3081582_Review_of_'Perceptrons_An_Introduction_to_Computational_Geometry'_Minsky_M_and_Papert_S_1969&lt;/a&gt;).&lt;br&gt;
It is one of the simplest models that can learn and solve complex data problems using neural networks. The perceptron is also called an artificial neuron.&lt;br&gt;
A perceptron network comprises two layers:&lt;br&gt;
Input Layer  (&lt;a href="https://www.techopedia.com/definition/33262/input-layer-neural-networks" rel="noopener noreferrer"&gt;https://www.techopedia.com/definition/33262/input-layer-neural-networks&lt;/a&gt;)&lt;br&gt;
Output Layer (&lt;a href="https://www.techopedia.com/definition/33263/output-layer-neural-networks" rel="noopener noreferrer"&gt;https://www.techopedia.com/definition/33263/output-layer-neural-networks&lt;/a&gt;)&lt;br&gt;
The input layer computes the weighted input for every node. The activation function is used to get the result as output.&lt;/p&gt;
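The classic perceptron learning rule can be shown in a few lines of Python. This sketch trains a perceptron to compute the logical AND of two inputs (the learning rate and number of passes are illustrative choices):

```python
# Train a perceptron on the AND function with the classic learning rule:
# w <- w + lr * (target - prediction) * x, using a step activation.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    # Step activation on the weighted input plus bias.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                      # a few passes over the data
    for x, target in data:
        err = target - predict(x)
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```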

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bj01koi3lpuy76hm122.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bj01koi3lpuy76hm122.png" alt="Image description" width="601" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Feed Forward Neural Network
&lt;/h3&gt;

&lt;p&gt;In a feed-forward network, data moves in a single direction: it enters via the input nodes and leaves through the output nodes, as a forward-propagating wave.&lt;br&gt;
Because data moves in one direction only, there is no backpropagation (the backpropagation algorithm calculates the gradient of the loss function with respect to the weights in the network). The sum of the products of the inputs and their weights is computed, and the result is transferred to the output. A couple of feed-forward neural network applications are:&lt;br&gt;
Speech Recognition&lt;br&gt;
Facial Recognition&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ebig37msx1bimw1musy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ebig37msx1bimw1musy.png" alt="Image description" width="472" height="284"&gt;&lt;/a&gt;&lt;br&gt;
Radial Basis Function Neural Network&lt;br&gt;
Radial Basis Function Neural Networks (RBF are comprised of three layers:&lt;br&gt;
Input Layer&lt;br&gt;
Hidden Layer&lt;br&gt;
Output Layer&lt;br&gt;
RBF networks classify data based on the distance of any centered point and interpolation. Interpolation resizes images. Classification is executed by estimating the input data where each neuron reserves the data. RBF networks look for similar data points and group them. RBF networks classify data based on the distance of any centered point and interpolation. Interpolation resizes images. Classification is executed by estimating the input data where each neuron reserves the data. RBF networks look for similar data points and group them. According to Dr. Saed Sayad, the sum and weights of hidden layer output sent to the output layer form a network of outputs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frn0qmhx29doxao55hwp0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frn0qmhx29doxao55hwp0.png" alt="Image description" width="500" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Recurrent Neural Network
&lt;/h3&gt;

&lt;p&gt;Neural networks such as feed-forward networks move data in one direction, and have the disadvantage of not remembering past inputs. This is where RNNs come into play. RNNs do not work like standard neural networks: a Recurrent Neural Network (RNN) is a network good at modeling sequential data, that is, data that follows a particular order. In an RNN, the output of the previous step goes back in as an input to the current step, making it a feedback neural network. Saving the output helps make later decisions.&lt;br&gt;
In RNNs, data runs through a loop, so that each node remembers data from the previous step. For example: say you are taking five classes this semester on this schedule: Monday = Cryptography, Tuesday = Audit of Information Systems, Wednesday = Advanced Database, Thursday = Java, and Friday = Business Intelligence. For the network to tell you which class you are studying on any given day, it has to be able to “look” at the class studied the day before.&lt;br&gt;
From the example above, you can tell the output must go back in as input to decide the next output. RNNs have a memory that helps the network recall what happened earlier in the sequence; while carrying out operations, neurons also act as memory cells.&lt;/p&gt;
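The recurrence at the heart of an RNN, where the new hidden state mixes the current input with the previous state, can be sketched with a single scalar state (the weight values are made up for illustration):

```python
import math

# A minimal recurrent step: the hidden state h carries a memory of
# earlier inputs forward into every new step.
w_in, w_rec = 0.5, 0.9     # illustrative input and recurrent weights
h = 0.0                    # initial hidden state
inputs = [1.0, 0.0, 0.0]   # one pulse of input, then silence

history = []
for x in inputs:
    h = math.tanh(w_in * x + w_rec * h)  # new state mixes input and old state
    history.append(h)

# Even after the input drops to zero, h stays non-zero: the network "remembers".
print(history)
```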

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bluhjtxlzmi1yl30129.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bluhjtxlzmi1yl30129.jpeg" alt="Image description" width="341" height="148"&gt;&lt;/a&gt;&lt;br&gt;
RNN are used to solve problems in stock predictions, text data, and audio data. In other words, it’s used to solve similar problems in text-to-speech conversion and language translation. Learn more about text generation with RNN &lt;/p&gt;

&lt;h3&gt;
  
  
  Convolution Neural Network
&lt;/h3&gt;

&lt;p&gt;Convolutional Neural Networks (CNNs) are commonly used for image recognition. CNNs contain a three-dimensional arrangement of neurons. The first stage is the convolutional layer; neurons in a convolutional layer only process information from a small part of the visual field (the image). Input features in convolution are abstracted in batches.&lt;br&gt;
The second stage is pooling, which reduces the dimensions of the features while preserving valuable data. CNNs launch into the third phase (a fully connected neural network) when the features reach the right level of granularity.&lt;br&gt;
At the final stage, the final probabilities are analyzed to decide which class the image belongs to. This type of network understands the image in parts and computes its operations multiple times to complete the processing of the image. Image processing involves conversion from RGB to grey-scale; after the image is processed, changes in pixel value help identify the edges, and the images get grouped into different classes. CNNs are mainly used in signal and image processing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsk48t3z6dw5xmt681u6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsk48t3z6dw5xmt681u6r.png" alt="Image description" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Modular Neural Network
&lt;/h3&gt;

&lt;p&gt;A Modular Neural Network (MNN) is composed of unassociated networks working individually to get the output. The various neural networks do not interact with each other. Each network has a unique set of inputs compared to other networks.&lt;br&gt;
MNN is advantageous because large and complex computational processes are done faster. Processes are broken down into independent components, thus increasing the computational speed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmkel8j6516qamo6vi95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmkel8j6516qamo6vi95.png" alt="Image description" width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Applications of Neural Networks
&lt;/h3&gt;

&lt;p&gt;Neural networks are effectively applied in several fields to resolve data issues; some examples are listed below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Facial Recognition
&lt;/h3&gt;

&lt;p&gt;Neural networks are playing a significant role in facial recognition. Some smartphones can identify the age of a person. This is based on facial features and visual pattern recognition.&lt;/p&gt;

&lt;h3&gt;
  
  
  Weather Forecasting
&lt;/h3&gt;

&lt;p&gt;Neural networks are trained to recognize patterns and identify distinct kinds of weather. With their help, forecasting can predict not only the current weather but also how weather patterns are likely to develop.&lt;/p&gt;

&lt;h3&gt;
  
  
  Music composition
&lt;/h3&gt;

&lt;p&gt;Neural networks are mastering patterns in sounds and tunes, and can train themselves adequately to create new music. They are also being used in music composition software.&lt;/p&gt;

&lt;h3&gt;
  
  
  Image processing and Character recognition
&lt;/h3&gt;

&lt;p&gt;Neural networks can recognize and learn patterns in an image. Image processing is a growing field.&lt;/p&gt;

&lt;h4&gt;
  
  
  Image recognition is used in:
&lt;/h4&gt;

&lt;p&gt;Facial recognition.&lt;br&gt;
Cancer cell detection.&lt;br&gt;
Satellite imagery processing for use in defense and agriculture.&lt;/p&gt;

&lt;p&gt;Character recognition is helping to detect fraud and national security assessments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advantages of Neural Networks
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Fault tolerance
&lt;/h4&gt;

&lt;p&gt;In a neural network, even if a few neurons are not working properly, that would not prevent the neural networks from generating outputs.&lt;/p&gt;

&lt;h4&gt;
  
  
  Real-time Operations
&lt;/h4&gt;

&lt;p&gt;Neural networks can learn in real time and easily adapt to changing environments.&lt;/p&gt;

&lt;h4&gt;
  
  
  Adaptive Learning
&lt;/h4&gt;

&lt;p&gt;Neural networks can learn how to work on different tasks, based on the data given, to produce the right output.&lt;/p&gt;

&lt;h4&gt;
  
  
  Parallel processing capacity
&lt;/h4&gt;

&lt;p&gt;Neural networks have the strength and ability to perform multiple jobs simultaneously.&lt;/p&gt;

&lt;h3&gt;
  
  
  Disadvantages of Neural Networks
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Unexplained behavior of the network
&lt;/h4&gt;

&lt;p&gt;Neural networks provide a solution to a problem, but due to their complexity they don’t provide the reasoning behind “why and how” they made the decisions they made. Therefore, trust in the network may be reduced.&lt;/p&gt;

&lt;h4&gt;
  
  
  Determination of appropriate network structure
&lt;/h4&gt;

&lt;p&gt;There is no specified rule (or rule of thumb) for designing a neural network. A proper network structure is achieved through trial and error; it is a process of iterative refinement.&lt;/p&gt;

&lt;h4&gt;
  
  
  Hardware dependence
&lt;/h4&gt;

&lt;p&gt;Neural networks are hardware-dependent: they require processors with adequate processing capacity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The neural network field is rapidly expanding. It is critical to learn and grasp the concepts in this sector in order to work with them. This article has discussed the many types of neural networks. By investigating this discipline, you may utilize neural networks to tackle data challenges in other domains.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>DJANGO WEB DEVELOPMENT IN PYTHON.</title>
      <dc:creator>felix715</dc:creator>
      <pubDate>Sat, 20 Nov 2021 08:27:53 +0000</pubDate>
      <link>https://dev.to/felix715/django-web-development-in-python-3120</link>
      <guid>https://dev.to/felix715/django-web-development-in-python-3120</guid>
      <description>&lt;p&gt;&lt;strong&gt;Learn about the basics of web development using Django to build blog applications that have the (CRUD) Create, Read, Update, Delete functionality.&lt;/strong&gt;&lt;br&gt;
Django is a widely used free, open-source, and high-level web development framework. It provides a lot of features to the developers "out of the box," so development can be &lt;br&gt;
rapid. However, websites built from it are secured,scalable, and maintainable at the same time.&lt;br&gt;
&lt;strong&gt;Aim of the Article is to build a blog application.&lt;/strong&gt;&lt;br&gt;
The aim of this article is to build a blog application where the blog content can be created and updated through an administration panel. Blog contents are displayed on the page and can be deleted if needed.&lt;br&gt;
*&lt;strong&gt;&lt;em&gt;Overall application provides&lt;/em&gt;&lt;/strong&gt;* &lt;br&gt;
       CRUD(Create,Read,Update,Delete) functionality.&lt;br&gt;
&lt;strong&gt;Required Setup&lt;/strong&gt;&lt;br&gt;
Required Setup&lt;br&gt;
1.)Git Bash: The user of all operating systems can use it. All the Django related commands and Unix commands are done through it. For downloading the Git bash(&lt;a href="https://git-scm.com/downloads"&gt;https://git-scm.com/downloads&lt;/a&gt;)&lt;br&gt;
2.)Text-Editor: Any Text-Editor like Sublime Text,kite,Visual Studio Code can be used. &lt;br&gt;
3.)Python 3: The latest version of Python can be downloaded in (&lt;a href="https://www.python.org/downloads/"&gt;https://www.python.org/downloads/&lt;/a&gt;) &lt;/p&gt;

&lt;h3&gt;
  
  
  Virtual Environment
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;Virtual Environment&lt;/strong&gt; holds the dependencies of a Python project. It works as a self-contained, isolated environment where all the Python packages, at the specific versions a project requires, are installed. Since newer versions of Python, Django, and other packages will keep rolling out, a virtual environment lets you keep working with the older versions that are specific to your project. In summary, you can start one independent project using &lt;strong&gt;Django 2.0&lt;/strong&gt; while another independent project using &lt;strong&gt;Django 3.0&lt;/strong&gt; runs on the same computer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps to create a Virtual Environment
&lt;/h3&gt;

&lt;p&gt;1.) Create a new directory named &lt;strong&gt;'project-blog'&lt;/strong&gt; on your Desktop using the &lt;strong&gt;'mkdir'&lt;/strong&gt; command.&lt;br&gt;
2.) Change into the &lt;strong&gt;'project-blog'&lt;/strong&gt; directory using the &lt;strong&gt;'cd'&lt;/strong&gt; command.&lt;br&gt;
3.) Create the virtual environment with &lt;strong&gt;'python -m venv env'&lt;/strong&gt;, where &lt;strong&gt;env&lt;/strong&gt; is our virtual environment, shown by the &lt;strong&gt;'ls'&lt;/strong&gt; command.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JuSxPh6w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4vy4bu2xtihdpjoax3cc.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JuSxPh6w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4vy4bu2xtihdpjoax3cc.PNG" alt="Image description" width="562" height="106"&gt;&lt;/a&gt;&lt;br&gt;
4.) Activate the virtual environment by using the 'source' command on the activation script in the 'Scripts' folder.&lt;br&gt;
5.) '(env)' is shown in parentheses if you've successfully activated the virtual environment.&lt;br&gt;
Installing the required package: use 'pip install django' to install Django in your virtual environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4CEy-shL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5gl1oescyfpmcqa4816p.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4CEy-shL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5gl1oescyfpmcqa4816p.PNG" alt="Image description" width="561" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a Django Project&lt;/strong&gt;&lt;br&gt;
The first step is creating your project by using the &lt;strong&gt;'django-admin startproject project_name'&lt;/strong&gt; command, where &lt;strong&gt;'project_name'&lt;/strong&gt; is &lt;strong&gt;'django_blog'&lt;/strong&gt; in your case. Also, it will generate a lot of files inside our newly created project, which you can research further in &lt;strong&gt;Django documentation&lt;/strong&gt; if needed.&lt;br&gt;
 &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8gBBy8Zf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ez74pwylw8zz9tladkm.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8gBBy8Zf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ez74pwylw8zz9tladkm.PNG" alt="Image description" width="555" height="55"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Change into the newly created project using the &lt;strong&gt;'cd'&lt;/strong&gt; command and view the created files using the &lt;strong&gt;'ls'&lt;/strong&gt; command.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xURQ2EXt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ml8i9lghub6lb1nunx3w.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xURQ2EXt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ml8i9lghub6lb1nunx3w.PNG" alt="Image description" width="478" height="144"&gt;&lt;/a&gt;&lt;br&gt;
You can run your project by using 'python manage.py runserver'.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VY0v8Imm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3xkjn4iqalxwpz7artsr.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VY0v8Imm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3xkjn4iqalxwpz7artsr.PNG" alt="Image description" width="555" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The project can be viewed in your favorite browser (Google Chrome, Mozilla Firefox, etc.). Open your browser and type 'localhost:8000' or '127.0.0.1:8000' in the URL bar, as shown below.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GIucWQZu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pqatykfz8slxqnud1amn.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GIucWQZu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pqatykfz8slxqnud1amn.PNG" alt="Image description" width="779" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Starting the new Project&lt;/strong&gt;&lt;br&gt;
In Django, adding an app to a project is always a two-step process, which is shown below.&lt;/p&gt;

&lt;p&gt;The first step is to create an app using the 'python manage.py startapp app_name' command, where app_name is 'blog' in your case. In Django, a single project can contain many apps, where each app provides a single, specific piece of functionality to the project.&lt;br&gt;
 &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rfDL0C9V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t403dhihysbj2phxjevy.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rfDL0C9V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t403dhihysbj2phxjevy.PNG" alt="Image description" width="605" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second step is to make the project aware of the newly created app by adding it to the &lt;strong&gt;INSTALLED_APPS&lt;/strong&gt; list in &lt;strong&gt;'django_blog/settings.py'&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5T-MNezH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ke9k1lzoncxyf9i0em6z.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5T-MNezH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ke9k1lzoncxyf9i0em6z.PNG" alt="Image description" width="880" height="543"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Changing our Models&lt;/strong&gt;&lt;br&gt;
Django uses &lt;strong&gt;'SQLite'&lt;/strong&gt; as the default database; it is lightweight and suited to small projects, which is fine for this one. Django's &lt;strong&gt;'Object-Relational Mapper (ORM)'&lt;/strong&gt; makes it really easy to work with the database: you do not write the actual database code, the database tables are instead created from &lt;strong&gt;'class'&lt;/strong&gt; definitions in &lt;strong&gt;'models.py'&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Inside &lt;strong&gt;'blog/models.py'&lt;/strong&gt;, you need to create a new model named &lt;strong&gt;'Post'&lt;/strong&gt;. This is a class that will become a database table and that inherits from &lt;strong&gt;'models.Model'&lt;/strong&gt;. As in a standard blog, a 'Post' contains a title, which will be a &lt;strong&gt;CharField&lt;/strong&gt;: a text-based column that accepts a mandatory &lt;strong&gt;'max_length'&lt;/strong&gt; argument, which is 50 in your case. There is also another field named &lt;strong&gt;'content'&lt;/strong&gt;, a &lt;strong&gt;TextField&lt;/strong&gt;, which holds the detail text of the &lt;strong&gt;'Post'&lt;/strong&gt;. The double-underscore &lt;em&gt;'__str__'&lt;/em&gt; method is defined so that a &lt;strong&gt;'Post'&lt;/strong&gt; displays its actual &lt;strong&gt;'title'&lt;/strong&gt; instead of a generic object representation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Making the Migrations&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;'python manage.py makemigrations'&lt;/strong&gt; is the first step: it reads &lt;strong&gt;'models.py'&lt;/strong&gt; after its creation and creates a new folder called &lt;strong&gt;'migrations'&lt;/strong&gt; containing a file named &lt;strong&gt;'0001_initial.py'&lt;/strong&gt;, whose migrations are portable across databases.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GEd3V4YK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-%250Auploads.s3.amazonaws.com/uploads/articles/vd0s52mvknxfi8k63e5r.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GEd3V4YK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-%250Auploads.s3.amazonaws.com/uploads/articles/vd0s52mvknxfi8k63e5r.PNG" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Migrating to the database&lt;/strong&gt;&lt;br&gt;
In this second step, 'python manage.py migrate' reads the newly created 'migrations' folder and creates the database tables; it also evolves the database whenever the models change.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NT7AL8gJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x38br2xcdmc1d6ja9lxh.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NT7AL8gJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x38br2xcdmc1d6ja9lxh.PNG" alt="Image description" width="417" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>python</category>
      <category>github</category>
      <category>django</category>
    </item>
    <item>
      <title>GETTING STARTED WITH FASTAPI AND DOCKER.</title>
      <dc:creator>felix715</dc:creator>
      <pubDate>Wed, 10 Nov 2021 15:41:10 +0000</pubDate>
      <link>https://dev.to/felix715/getting-started-with-fastapi-and-docker-ggd</link>
      <guid>https://dev.to/felix715/getting-started-with-fastapi-and-docker-ggd</guid>
      <description>&lt;p&gt;FastAPI is a modern, fast (high-performance), web framework for building APIs with Python 3.6+ based on standard Python type hints.&lt;/p&gt;

&lt;h3&gt;
  
  
  The key features are:
&lt;/h3&gt;

&lt;p&gt;Fast: Very high performance, on par with NodeJS and Go (thanks to Starlette and Pydantic). One of the fastest Python frameworks available.&lt;br&gt;
Fast to code: Increase the speed to develop features by about 200% to 300%. &lt;br&gt;
Fewer bugs: Reduce about 40% of human (developer) induced errors. &lt;br&gt;
Intuitive: Great editor support. Completion everywhere. &lt;br&gt;
Less time debugging.&lt;br&gt;
Easy: Designed to be easy to use and learn. Less time reading docs.&lt;br&gt;
Short: Minimize code duplication. Multiple features from each parameter declaration. Fewer bugs.&lt;br&gt;
Robust: Get production-ready code. With automatic interactive documentation.&lt;br&gt;
Standards-based: Based on (and fully compatible with) the open standards for APIs:&lt;/p&gt;

&lt;h3&gt;
  
  
  Requirements.
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Python 3.6+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  FastAPI stands on the shoulders of giants:
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Starlette for the web parts.
  Pydantic for the data parts.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Installation of FastAPI on Windows.
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  pip install fastapi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Installation on Mac.
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  $pip3 install fastapi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Example of a simple application on file main.py
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; from typing import Optional
 from fastapi import FastAPI
 app = FastAPI()
 @app.get("/")
 def read_root():
 return {"Hello": "World"}
 @app.get("/items/{item_id}")
 def read_item(item_id: int, q: Optional[str] = None):
 return {"item_id": item_id, "q": q}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Run the server with:
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; uvicorn main:app --reload

 INFO:     Uvicorn running on http://127.0.0.1:8000 (Press 
 CTRL+C to quit)
 INFO:     Started reloader process [28720]
 INFO:     Started server process [28722]
 INFO:     Waiting for application startup.
 INFO:     Application startup complete.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Open your browser at:
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; http://127.0.0.1:8000/items/5?q=somequery.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  You will see the JSON response as:
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; {"item_id": 5, "q": "somequery"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  The interactive API docs
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Now go to &lt;a href="http://127.0.0.1:8000/docs" rel="noopener noreferrer"&gt;http://127.0.0.1:8000/docs&lt;/a&gt;.
&lt;/h4&gt;
&lt;h4&gt;
  
  
  You will see the automatic interactive API documentation.
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh497t84080g1rk1fi3ti.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh497t84080g1rk1fi3ti.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Alternative API docs.
&lt;/h3&gt;
&lt;h4&gt;
  
  
  And now, go to &lt;a href="http://127.0.0.1:8000/redoc" rel="noopener noreferrer"&gt;http://127.0.0.1:8000/redoc&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmr95mp3jdygzqgmrdf1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmr95mp3jdygzqgmrdf1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  OpenAPI
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   FastAPI generates a "schema" with all your API using the 
   OpenAPI standard for defining APIs.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h5&gt;
  
  
  "Schema"
&lt;/h5&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   A "schema" is a definition or description of something. Not 
   the code that implements it, but just an abstract 
   description.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h5&gt;
  
  
  API "schema"
&lt;/h5&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  In this case, OpenAPI is a specification that dictates how 
  to define a schema of your API.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This schema definition includes your API paths, the possible parameters they take, etc.&lt;/p&gt;

&lt;h5&gt;
  
  
  Data "schema"
&lt;/h5&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; The term "schema" might also refer to the shape of some data, 
 like a JSON content.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In that case, it would mean the JSON attributes, and data types they have, etc.&lt;/p&gt;

&lt;h5&gt;
  
  
  OpenAPI and JSON Schema
&lt;/h5&gt;

&lt;p&gt;OpenAPI defines an API schema for your API. And that schema includes definitions (or "schemas") of the data sent and received by your API using JSON Schema, the standard for JSON data schemas.&lt;/p&gt;
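&lt;p&gt;To get a feel for what JSON Schema-style validation does, here is a toy, hand-rolled sketch. It is purely illustrative: a real validator (such as the jsonschema package) implements the actual specification and handles far more cases.&lt;/p&gt;

```python
import json

# A hypothetical schema for the {"item_id": ..., "q": ...} response
# shown earlier in this article.
schema = {
    "type": "object",
    "properties": {
        "item_id": {"type": "integer"},
        "q": {"type": "string"},
    },
    "required": ["item_id"],
}

# Map JSON Schema type names to Python types (toy subset).
TYPES = {"object": dict, "integer": int, "string": str}


def conforms(data, schema):
    """Toy check of `data` against a tiny subset of JSON Schema."""
    if not isinstance(data, TYPES[schema["type"]]):
        return False
    for key in schema.get("required", []):
        if key not in data:
            return False
    for key, sub in schema.get("properties", {}).items():
        if key in data and not isinstance(data[key], TYPES[sub["type"]]):
            return False
    return True


payload = json.loads('{"item_id": 5, "q": "somequery"}')
print(conforms(payload, schema))              # True
print(conforms({"q": "missing id"}, schema))  # False
```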

&lt;p&gt;Check the openapi.json&lt;br&gt;
If you are curious about how the raw OpenAPI schema looks, FastAPI automatically generates a JSON (schema) with the descriptions of all your API.&lt;/p&gt;

&lt;p&gt;You can see it directly at: &lt;a href="http://127.0.0.1:8000/openapi.json" rel="noopener noreferrer"&gt;http://127.0.0.1:8000/openapi.json&lt;/a&gt;.&lt;/p&gt;

&lt;h6&gt;
  
  
  It will show a JSON starting with something like:
&lt;/h6&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  {
"openapi": "3.0.2",
"info": {
    "title": "FastAPI",
    "version": "0.1.0"
},
"paths": {
    "/items/": {
        "get": {
            "responses": {
                "200": {
                    "description": "Successful Response",
                    "content": {
                        "application/json": {



    ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
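&lt;p&gt;Because the schema is plain JSON, you can inspect it programmatically. A small sketch; the dict below is a trimmed, hypothetical version of the generated openapi.json:&lt;/p&gt;

```python
# Trimmed, hypothetical OpenAPI schema resembling the JSON above.
openapi_schema = {
    "openapi": "3.0.2",
    "info": {"title": "FastAPI", "version": "0.1.0"},
    "paths": {
        "/items/": {
            "get": {"responses": {"200": {"description": "Successful Response"}}}
        }
    },
}

# List every (path, operation) pair the API exposes.
operations = [
    (path, method.upper())
    for path, methods in openapi_schema["paths"].items()
    for method in methods
]
print(operations)  # [('/items/', 'GET')]
```

&lt;p&gt;This is exactly the kind of traversal that documentation UIs and client-code generators perform on the schema.&lt;/p&gt;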
&lt;h4&gt;
  
  
  What is OpenAPI for
&lt;/h4&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  The OpenAPI schema is what powers the two interactive 
  documentation systems included.

  And there are dozens of alternatives, all based on OpenAPI. 
  You could easily add any of those alternatives to your 
  application built with FastAPI.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You could also use it to generate code automatically, for clients that communicate with your API. For example, frontend, mobile or IoT applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Recap, step by step
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: import FastAPI
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fastapi import FastAPI
app = FastAPI()
@app.get("/")
async def root():
  return {"message": "Hello World"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;FastAPI is a Python class that provides all the functionality for your API.&lt;/p&gt;

&lt;h2&gt;
  
  
  NOTE
&lt;/h2&gt;

&lt;h6&gt;
  
  
  FastAPI is a class that inherits directly from Starlette.
&lt;/h6&gt;

&lt;h6&gt;
  
  
  You can use all the Starlette functionality with FastAPI too.
&lt;/h6&gt;

&lt;h4&gt;
  
  
  Follow the GitHub link below for a more detailed example, with all the explanations needed for both beginners and professionals
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://github.com/felix715/FastAPIs/blob/main/app.py" rel="noopener noreferrer"&gt;https://github.com/felix715/FastAPIs/blob/main/app.py&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Create a path operation
&lt;/h4&gt;

&lt;h3&gt;
  
  
  Note
&lt;/h3&gt;

&lt;p&gt;There is a stage we have skipped, so be sure to visit the GitHub link below to get more concepts on the FastAPI basics.&lt;br&gt;
   &lt;a href="https://github.com/felix715/FastAPIs/blob/main/app.py" rel="noopener noreferrer"&gt;https://github.com/felix715/FastAPIs/blob/main/app.py&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Path
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;"Path"&lt;/em&gt; here refers to the last part of the URL starting from the first /.&lt;br&gt;
So, in a URL like:&lt;br&gt;
       &lt;a href="https://example.com/items/foo" rel="noopener noreferrer"&gt;https://example.com/items/foo&lt;/a&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  ...the path would be:
&lt;/h6&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   /items/foo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A "path" is also commonly called an "endpoint" or a "route".&lt;/p&gt;

&lt;h3&gt;
  
  
  Operation
&lt;/h3&gt;

&lt;p&gt;"Operation" here refers to one of the HTTP "methods".&lt;/p&gt;

&lt;p&gt;One of:&lt;/p&gt;

&lt;p&gt;a.) POST - to create data.&lt;br&gt;
b.) GET - to read data.&lt;br&gt;
c.) PUT - to update data.&lt;br&gt;
d.) DELETE - to delete data.&lt;/p&gt;

&lt;h6&gt;
  
  
  ...and the more exotic ones:
&lt;/h6&gt;

&lt;p&gt;1. OPTIONS&lt;br&gt;
2. HEAD&lt;br&gt;
3. PATCH&lt;br&gt;
4. TRACE&lt;/p&gt;

&lt;p&gt;In the HTTP protocol, you can communicate with each path using one (or more) of these "methods".&lt;/p&gt;

&lt;h4&gt;
  
  
  N.B.
&lt;/h4&gt;

&lt;p&gt;In OpenAPI, each of the HTTP methods is called an "operation".&lt;/p&gt;

&lt;h3&gt;
  
  
  You can also use the other operations:
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; @app.post()
 @app.put()
 @app.delete()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  And the more exotic ones:
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; @app.options()
 @app.head()
 @app.patch()
 @app.trace()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h5&gt;
  
  
  Defining a path operation function:
&lt;/h5&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  path: is /.
  operation: is get.
  function: is the function below the "decorator" (below 
  @app.get("/")).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h5&gt;
  
  
  Returning the content.
&lt;/h5&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  You can return a dict, list, singular values as str, int, 
  etc.
  You can also return Pydantic models (you'll see more about 
  that later).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
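&lt;p&gt;Whatever you return, FastAPI ultimately converts it to a JSON response body. A simplified sketch of that conversion using the stdlib json module (FastAPI's actual encoder also handles Pydantic models, datetimes, and more):&lt;/p&gt;

```python
import json

# dicts, lists and scalar values are all valid return values;
# each one serializes cleanly to a JSON response body.
print(json.dumps({"item_id": 5, "q": "somequery"}))  # {"item_id": 5, "q": "somequery"}
print(json.dumps([1, 2, 3]))                         # [1, 2, 3]
print(json.dumps("hello"))                           # "hello"
print(json.dumps(42))                                # 42
```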
&lt;h2&gt;
  
  
  SUMMARY
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Import FastAPI.
 Create an app instance.
 Write a path operation decorator (like @app.get("/")).
 Write a path operation function (like def root(): ... above).
 Run the development server (like uvicorn main:app --reload)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h1&gt;
  
  
  DOCKER
&lt;/h1&gt;

&lt;p&gt;Docker is an open source platform for building, deploying, and managing containerized applications. &lt;br&gt;
It enables developers to package applications into containers: standardized executable components that combine application source code with the operating system (OS) libraries and dependencies required to run that code in any environment.&lt;br&gt;
Containers simplify delivery of distributed applications, and have become increasingly popular as organizations shift to cloud-native development and hybrid multicloud environments.&lt;/p&gt;

&lt;p&gt;Developers can create containers without Docker, but the platform makes it easier, simpler, and safer to build, deploy and manage containers. Docker is essentially a toolkit that enables developers to build, deploy, run, update, and stop containers using simple commands and work-saving automation through a single API.&lt;/p&gt;
&lt;h3&gt;
  
  
  Major Technologies &amp;amp; Tools used:
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Python 3.6+
FastAPI
Docker
Postman
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  More tools:
&lt;/h3&gt;
&lt;h5&gt;
  
  
  To be used.
&lt;/h5&gt;

&lt;p&gt;Git and GitHub — Source code management (Version Control System)&lt;/p&gt;

&lt;p&gt;Selenium — Automation testing&lt;/p&gt;

&lt;p&gt;Docker — Software Containerization Platform&lt;/p&gt;

&lt;p&gt;Kubernetes — Container Orchestration tool&lt;/p&gt;

&lt;p&gt;Ansible — Configuration Management and Deployment&lt;/p&gt;

&lt;p&gt;Terraform - An open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files.&lt;/p&gt;
&lt;h1&gt;
  
  
  Getting started with Docker.
&lt;/h1&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Step 1: Setup. ...
 Step 2: Create a Dockerfile. ...
 Step 3: Define services in a Compose file. ...
 Step 4: Build and run your app with Compose. ...
 Step 5: Edit the Compose file to add a bind mount. ...
 Step 6: Re-build and run the app with Compose. ...
 Step 7: Update the application.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  FastAPI in Containers - Docker
&lt;/h3&gt;
&lt;h5&gt;
  
  
  For security....
&lt;/h5&gt;

&lt;p&gt;When deploying FastAPI applications a common approach is to build a Linux container image. It's normally done using Docker. You can then deploy that container image in one of a few possible ways.&lt;/p&gt;

&lt;p&gt;Using Linux containers has several advantages including security, replicability, simplicity, and others.&lt;/p&gt;
&lt;h2&gt;
  
  
  Dockerfile Preview.
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  FROM python:3.9
  WORKDIR /code
  COPY ./requirements.txt /code/requirements.txt
  RUN pip install --no-cache-dir --upgrade -r 
  /code/requirements.txt
  COPY ./app /code/app
  CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "-- 
  port","80"]
  #If running behind a proxy like Nginx or Traefik add -- 
   proxy-headers
 #CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "-- 
 port","80", "--proxy-headers"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  What is a Container
&lt;/h3&gt;

&lt;p&gt;Containers (mainly Linux containers) are a very lightweight way to package applications including all their dependencies and necessary files while keeping them isolated from other containers (other applications or components) in the same system.&lt;/p&gt;

&lt;p&gt;Linux containers run using the same Linux kernel of the host (machine, virtual machine, cloud server, etc). This just means that they are very lightweight (compared to full virtual machines emulating an entire operating system).&lt;/p&gt;

&lt;p&gt;This way, containers consume little resources, an amount comparable to running the processes directly (a virtual machine would consume much more).&lt;/p&gt;

&lt;p&gt;Containers also have their own isolated running processes (commonly just one process), file system, and network, simplifying deployment, security, development, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is a Container Image
&lt;/h3&gt;

&lt;p&gt;A container is run from a container image.&lt;/p&gt;

&lt;p&gt;A container image is a static version of all the files, environment variables, and the default command/program that should be present in a container. Static here means that the container image is not running, it's not being executed, it's only the packaged files and metadata.&lt;/p&gt;

&lt;p&gt;In contrast to a "container image" that is the stored static contents, a "container" normally refers to the running instance, the thing that is being executed.&lt;/p&gt;

&lt;p&gt;When the container is started and running (started from a container image) it could create or change files, environment variables, etc. Those changes will exist only in that container, but would not persist in the underlying container image (would not be saved to disk).&lt;/p&gt;

&lt;p&gt;A container image is comparable to the program file and contents, e.g. python and some file main.py.&lt;/p&gt;

&lt;p&gt;And the container itself (in contrast to the container image) is the actual running instance of the image, comparable to a process. In fact, a container is running only when it has a process running (and normally it's only a single process). The container stops when there's no process running in it.&lt;/p&gt;
&lt;h3&gt;
  
  
  Container Images
&lt;/h3&gt;

&lt;p&gt;Docker has been one of the main tools to create and manage container images and containers.&lt;/p&gt;

&lt;p&gt;And there's a public Docker Hub with pre-made official container images for many tools, environments, databases, and applications.&lt;/p&gt;

&lt;p&gt;For example, there's an official Python Image.&lt;/p&gt;

&lt;p&gt;And there are many other images for different things like databases, for example for:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     PostgreSQL
     MySQL
     MongoDB
     Redis, etc.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;By using a pre-made container image it's very easy to combine and use different tools. For example, to try out a new database. In most cases, you can use the official images, and just configure them with environment variables.&lt;/p&gt;

&lt;p&gt;That way, in many cases you can learn about containers and Docker and re-use that knowledge with many different tools and components.&lt;/p&gt;

&lt;p&gt;So, you would run multiple containers with different things, like a database, a Python application, a web server with a React frontend application, and connect them together via their internal network.&lt;/p&gt;

&lt;p&gt;All the container management systems (like Docker or Kubernetes) have these networking features integrated into them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Containers and Processes
&lt;/h3&gt;

&lt;p&gt;A container image normally includes in its metadata the default program or command that should be run when the container is started and the parameters to be passed to that program. Very similar to what would be if it was in the command line.&lt;/p&gt;

&lt;p&gt;When a container is started, it will run that command/program (although you can override it and make it run a different command/program).&lt;/p&gt;

&lt;p&gt;A container is running as long as the main process (command or program) is running.&lt;/p&gt;

&lt;p&gt;A container normally has a single process, but it's also possible to start subprocesses from the main process, and that way you will have multiple processes in the same container.&lt;/p&gt;

&lt;p&gt;But it's not possible to have a running container without at least one running process. If the main process stops, the container stops.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build a Docker Image for FastAPI
&lt;/h3&gt;

&lt;p&gt;Okay, let's build something now! &lt;/p&gt;

&lt;p&gt;I'll show you how to build a Docker image for FastAPI from scratch, based on the official Python image.&lt;/p&gt;

&lt;p&gt;This is what you would want to do in most cases, for example:&lt;/p&gt;

&lt;p&gt;Using Kubernetes or similar tools&lt;br&gt;
When running on a Raspberry Pi&lt;br&gt;
Using a cloud service that would run a container image for you, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  Package Requirements
&lt;/h3&gt;

&lt;p&gt;You would normally have the package requirements for your application in some file.&lt;/p&gt;

&lt;p&gt;It would depend mainly on the tool you use to install those requirements.&lt;/p&gt;

&lt;p&gt;The most common way to do it is to have a file requirements.txt with the package names and their versions, one per line.&lt;/p&gt;

&lt;p&gt;You would of course use the same ideas you read in About FastAPI versions to set the ranges of versions.&lt;/p&gt;

&lt;h3&gt;
  
  
  For example, your requirements.txt could look like:
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     fastapi&amp;gt;=0.68.0,&amp;lt;0.69.0
     pydantic&amp;gt;=1.8.0,&amp;lt;2.0.0
     uvicorn&amp;gt;=0.15.0,&amp;lt;0.16.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h5&gt;
  
  
  And you would normally install those package dependencies with pip, for example:
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfvylk6s23vwdbcaf6ya.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfvylk6s23vwdbcaf6ya.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create the FastAPI Code
&lt;/h3&gt;

&lt;p&gt;Create an app directory and enter it.&lt;br&gt;
Create an empty file &lt;strong&gt;'__init__.py'&lt;/strong&gt;.&lt;br&gt;
Create a main.py file with:&lt;/p&gt;
&lt;h6&gt;
  
  
  Example
&lt;/h6&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     from typing import Optional
     from fastapi import FastAPI
     app = FastAPI()
     @app.get("/")
     def read_root():
        return {"Hello": "World"}
    @app.get("/items/{item_id}")
    def read_item(item_id: int, q: Optional[str] = None):
         return {"item_id": item_id, "q": q}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Dockerfile
&lt;/h4&gt;

&lt;p&gt;Now in the same project directory create a file Dockerfile with:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      FROM python:3.9
      WORKDIR /code
      COPY ./requirements.txt /code/requirements.txt
      RUN pip install --no-cache-dir --upgrade -r 
       /code/requirements.txt
      COPY ./app /code/app
      CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "-- 
        port", "80"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  You should now have a directory structure like:
&lt;/h4&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;       .
       ├── app
       │   ├── __init__.py
       │   └── main.py
       ├── Dockerfile
       └── requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Behind a TLS Termination Proxy
&lt;/h4&gt;

&lt;p&gt;If you are running your container behind a TLS Termination Proxy (load balancer) like Nginx or Traefik, add the option --proxy-headers, this will tell Uvicorn to trust the headers sent by that proxy telling it that the application is running behind HTTPS, etc.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   CMD ["uvicorn", "app.main:app", "--proxy-headers", "--host", "0.0.0.0", "--port", "80"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Docker Cache
&lt;/h4&gt;

&lt;p&gt;There's an important trick in this Dockerfile: we first copy the file with the dependencies alone, not the rest of the code. Let me tell you why that is.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  COPY ./requirements.txt /code/requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Docker and other tools build these container images incrementally, adding one layer on top of the other, starting from the top of the Dockerfile and adding any files created by each of the instructions of the Dockerfile.&lt;/p&gt;

&lt;p&gt;Docker and similar tools also use an internal cache when building the image, if a file hasn't changed since the last time building the container image, then it will re-use the same layer created the last time, instead of copying the file again and creating a new layer from scratch.&lt;/p&gt;

&lt;p&gt;Just avoiding the copy of files doesn't necessarily improve things too much, but because it used the cache for that step, it can use the cache for the next step. For example, it could use the cache for the instruction that installs dependencies with:&lt;br&gt;
         RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt&lt;/p&gt;

&lt;p&gt;The file with the package requirements won't change frequently. So, by copying only that file, Docker will be able to use the cache for that step.&lt;/p&gt;

&lt;p&gt;And then, Docker will be able to use the cache for the next step that downloads and install those dependencies. And here's where we save a lot of time. ...and avoid boredom waiting.&lt;/p&gt;

&lt;p&gt;Downloading and installing the package dependencies could take minutes, but using the cache would take seconds at most.&lt;/p&gt;

&lt;p&gt;And as you would be building the container image again and again during development to check that your code changes are working, there's a lot of accumulated time this would save.&lt;/p&gt;

&lt;p&gt;Then, near the end of the Dockerfile, we copy all the code. As this is what changes most frequently, we put it near the end, because almost always, anything after this step will not be able to use the cache.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     COPY ./app /code/app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Build the Docker Image
&lt;/h3&gt;

&lt;p&gt;Now that all the files are in place, let's build the container image.&lt;/p&gt;

&lt;p&gt;Go to the project directory (where your Dockerfile is, containing your app directory).&lt;br&gt;
Build your FastAPI image:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8gfvct001ti5qwu20qo.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8gfvct001ti5qwu20qo.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the . at the end: it's equivalent to ./, and it tells Docker the directory to use to build the container image.&lt;br&gt;
In this case, it's the current directory (.).&lt;/p&gt;
&lt;h4&gt;
  
  
  Start the Docker Container
&lt;/h4&gt;

&lt;p&gt;Run a container based on your image:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhjx9pb8zce6vqbnvbdh.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhjx9pb8zce6vqbnvbdh.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h5&gt;
  
  
  Check it
&lt;/h5&gt;

&lt;p&gt;You should be able to check it in your Docker container's URL, for example: &lt;a href="http://192.168.99.100/items/5?q=somequery" rel="noopener noreferrer"&gt;http://192.168.99.100/items/5?q=somequery&lt;/a&gt; or &lt;a href="http://127.0.0.1/items/5?q=somequery" rel="noopener noreferrer"&gt;http://127.0.0.1/items/5?q=somequery&lt;/a&gt; (or equivalent, using your Docker host).&lt;/p&gt;

&lt;p&gt;You will see something like:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  {"item_id": 5, "q": "somequery"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
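&lt;p&gt;You could also check it from a terminal, for example with curl (adjust the host to your Docker setup):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  curl "http://127.0.0.1/items/5?q=somequery"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;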
&lt;h3&gt;
  
  
  Interactive API docs
&lt;/h3&gt;

&lt;p&gt;Now you can go to &lt;a href="http://192.168.99.100/docs" rel="noopener noreferrer"&gt;http://192.168.99.100/docs&lt;/a&gt; or &lt;a href="http://127.0.0.1/docs" rel="noopener noreferrer"&gt;http://127.0.0.1/docs&lt;/a&gt; (or equivalent, using your Docker host).&lt;/p&gt;

&lt;p&gt;You will see the automatic interactive API documentation (provided by Swagger UI):&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lpx8i7i8wpjrmanfrbu.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lpx8i7i8wpjrmanfrbu.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Alternative API docs
&lt;/h4&gt;

&lt;p&gt;And you can also go to &lt;a href="http://192.168.99.100/redoc" rel="noopener noreferrer"&gt;http://192.168.99.100/redoc&lt;/a&gt; or &lt;a href="http://127.0.0.1/redoc" rel="noopener noreferrer"&gt;http://127.0.0.1/redoc&lt;/a&gt; (or equivalent, using your Docker host).&lt;/p&gt;

&lt;p&gt;You will see the alternative automatic documentation (provided by ReDoc): &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwxf7ptdofroxh0t96mz.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwxf7ptdofroxh0t96mz.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Build a Docker Image with a Single-File FastAPI
&lt;/h3&gt;

&lt;p&gt;If your FastAPI app is a single file, for example, main.py without an ./app directory, your file structure could look like this: &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      .
       ├── Dockerfile
       ├── main.py
       └── requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Then you would just have to change the corresponding paths to copy the file inside the Dockerfile:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        FROM python:3.9  
        WORKDIR /code
        COPY ./requirements.txt /code/requirements.txt
        RUN pip install --no-cache-dir --upgrade -r 
            /code/requirements.txt
        COPY ./main.py /code/
        CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "-- 
        port", "80"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then adjust the Uvicorn command to use the new module main instead of app.main to import the FastAPI object app.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment Concepts
&lt;/h3&gt;

&lt;p&gt;Let's talk again about some of the same Deployment Concepts in terms of containers.&lt;/p&gt;

&lt;p&gt;Containers are mainly a tool to simplify the process of building and deploying an application, but they don't enforce a particular approach to handle these deployment concepts, and there are several possible strategies.&lt;/p&gt;

&lt;p&gt;The good news is that with each different strategy there's a way to cover all of the deployment concepts. &lt;/p&gt;

&lt;p&gt;Let's review these deployment concepts in terms of containers:&lt;/p&gt;

&lt;p&gt;HTTPS&lt;br&gt;
Running on startup&lt;br&gt;
Restarts&lt;br&gt;
Replication (the number of processes running)&lt;br&gt;
Memory&lt;br&gt;
Previous steps before starting&lt;/p&gt;

&lt;h4&gt;
  
  
  HTTPS
&lt;/h4&gt;

&lt;p&gt;If we focus just on the container image for a FastAPI application (and later the running container), HTTPS normally would be handled externally by another tool.&lt;/p&gt;

&lt;p&gt;It could be another container, for example with Traefik, handling HTTPS and automatic acquisition of certificates.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     Traefik has integrations with Docker, Kubernetes, and 
      others, so it's very easy to set up and configure HTTPS 
     for your containers with it.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Alternatively, HTTPS could be handled by a cloud provider as one of their services (while still running the application in a container).&lt;/p&gt;
&lt;h4&gt;
  
  
  Running on Startup and Restarts
&lt;/h4&gt;

&lt;p&gt;There is normally another tool in charge of starting and running your container.&lt;/p&gt;

&lt;p&gt;It could be Docker directly, Docker Compose, Kubernetes, a cloud service, etc.&lt;/p&gt;

&lt;p&gt;In most (or all) cases, there's a simple option to enable running the container on startup and restarting it on failures. For example, in Docker, it's the command line option --restart.&lt;/p&gt;
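&lt;p&gt;With the Docker CLI that could look like this (the image tag myimage is an example value, and unless-stopped is one of several restart policies Docker supports):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  docker run -d --restart unless-stopped -p 80:80 myimage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;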

&lt;p&gt;Without using containers, making applications run on startup and with restarts can be cumbersome and difficult. But when working with containers, in most cases that functionality is included by default. &lt;/p&gt;
&lt;h4&gt;
  
  
  Replication - Number of Processes
&lt;/h4&gt;

&lt;p&gt;If you have a cluster of machines with Kubernetes, Docker Swarm Mode, Nomad, or another similar complex system to manage distributed containers on multiple machines, then you will probably want to handle replication at the cluster level instead of using a process manager (like Gunicorn with workers) in each container.&lt;/p&gt;

&lt;p&gt;One of those distributed container management systems like Kubernetes normally has some integrated way of handling replication of containers while still supporting load balancing for the incoming requests. All at the cluster level.&lt;/p&gt;

&lt;p&gt;In those cases, you would probably want to build a Docker image from scratch as explained above, installing your dependencies, and running a single Uvicorn process instead of running something like Gunicorn with Uvicorn workers.&lt;/p&gt;
&lt;h4&gt;
  
  
  Load Balancer
&lt;/h4&gt;

&lt;p&gt;When using containers, you would normally have some component listening on the main port. It could possibly be another container that is also a TLS Termination Proxy to handle HTTPS or some similar tool.&lt;/p&gt;

&lt;p&gt;As this component would take the load of requests and distribute that among the workers in a (hopefully) balanced way, it is also commonly called a Load Balancer.&lt;/p&gt;

&lt;p&gt;The same TLS Termination Proxy component used for HTTPS would probably also be a Load Balancer.&lt;/p&gt;

&lt;p&gt;And when working with containers, the same system you use to start and manage them would already have internal tools to transmit the network communication (e.g. HTTP requests) from that load balancer (that could also be a TLS Termination Proxy) to the container(s) with your app.&lt;/p&gt;
&lt;h4&gt;
  
  
  One Load Balancer - Multiple Worker Containers
&lt;/h4&gt;

&lt;p&gt;When working with Kubernetes or similar distributed container management systems, using their internal networking mechanisms would allow the single load balancer that is listening on the main port to transmit communication (requests) to possibly multiple containers running your app.&lt;/p&gt;

&lt;p&gt;Each of these containers running your app would normally have just one process (e.g. a Uvicorn process running your FastAPI application). They would all be identical containers, running the same thing, but each with its own process, memory, etc. That way you would take advantage of parallelization in different cores of the CPU, or even in different machines.&lt;/p&gt;

&lt;p&gt;And the distributed container system with the load balancer would distribute the requests to each one of the containers with your app in turns. So, each request could be handled by one of the multiple replicated containers running your app.&lt;/p&gt;

&lt;p&gt;And normally this load balancer would be able to handle requests that go to other apps in your cluster (e.g. to a different domain, or under a different URL path prefix), and would transmit that communication to the right containers for that other application running in your cluster.&lt;/p&gt;
&lt;h3&gt;
  
  
  One Process per Container
&lt;/h3&gt;

&lt;p&gt;In this type of scenario, you probably would want to have a single (Uvicorn) process per container, as you would already be handling replication at the cluster level.&lt;/p&gt;

&lt;p&gt;So, in this case, you would not want to have a process manager like Gunicorn with Uvicorn workers, or Uvicorn using its own Uvicorn workers. You would want to have just a single Uvicorn process per container (but probably multiple containers).&lt;/p&gt;

&lt;p&gt;Having another process manager inside the container (as would be with Gunicorn or Uvicorn managing Uvicorn workers) would only add unnecessary complexity that you are most probably already taking care of with your cluster system.&lt;/p&gt;
&lt;h4&gt;
  
  
  Containers with Multiple Processes and Special Cases
&lt;/h4&gt;

&lt;p&gt;Of course, there are special cases where you could want to have a container with a Gunicorn process manager starting several Uvicorn worker processes inside.&lt;/p&gt;

&lt;p&gt;In those cases, you can use the official Docker image that includes Gunicorn as a process manager running multiple Uvicorn worker processes, and some default settings to adjust the number of workers based on the current CPU cores automatically. I'll tell you more about it below in Official Docker Image with Gunicorn - Uvicorn.&lt;/p&gt;

&lt;p&gt;Here are some examples of when that could make sense:&lt;/p&gt;
&lt;h5&gt;
  
  
  A Simple App
&lt;/h5&gt;

&lt;p&gt;You could want a process manager in the container if your application is simple enough that you don't need (at least not yet) to fine-tune the number of processes too much, and you can just use an automated default (with the official Docker image), and you are running it on a single server, not a cluster.&lt;/p&gt;
&lt;h5&gt;
  
  
  Docker Compose
&lt;/h5&gt;

&lt;p&gt;You could be deploying to a single server (not a cluster) with Docker Compose, so you wouldn't have an easy way to manage replication of containers (with Docker Compose) while preserving the shared network and load balancing.&lt;/p&gt;

&lt;p&gt;Then you could want to have a single container with a process manager starting several worker processes inside.&lt;/p&gt;
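&lt;p&gt;A minimal sketch of what that could look like in a docker-compose.yml (the service name and values are just examples):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  services:
    app:
      build: .
      ports:
        - "80:80"
      restart: unless-stopped
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;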
&lt;h5&gt;
  
  
  Prometheus and Other Reasons
&lt;/h5&gt;

&lt;p&gt;You could also have other reasons that would make it easier to have a single container with multiple processes instead of having multiple containers with a single process in each of them.&lt;/p&gt;

&lt;p&gt;For example (depending on your setup) you could have some tool like a Prometheus exporter in the same container that should have access to each of the requests that come in.&lt;/p&gt;

&lt;p&gt;In this case, if you had multiple containers, by default, when Prometheus came to read the metrics, it would get the ones for a single container each time (for the container that handled that particular request), instead of getting the accumulated metrics for all the replicated containers.&lt;/p&gt;

&lt;p&gt;Then, in that case, it could be simpler to have one container with multiple processes, and a local tool (e.g. a Prometheus exporter) on the same container collecting Prometheus metrics for all the internal processes and exposing those metrics on that single container.&lt;/p&gt;

&lt;p&gt;The main point is, none of these are rules written in stone that you have to blindly follow. You can use these ideas to evaluate your own use case and decide what is the best approach for your system, checking out how to manage the concepts of:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     Security - HTTPS
     Running on startup
     Restarts
     Replication (the number of processes running)
     Memory
     Previous steps before starting.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Memory
&lt;/h3&gt;

&lt;p&gt;If you run a single process per container you will have a more or less well-defined, stable, and limited amount of memory consumed by each of those containers (more than one if they are replicated).&lt;/p&gt;

&lt;p&gt;And then you can set those same memory limits and requirements in your configurations for your container management system (for example in Kubernetes). That way it will be able to replicate the containers in the available machines taking into account the amount of memory needed by them, and the amount available in the machines in the cluster.&lt;/p&gt;

&lt;p&gt;If your application is simple, this will probably not be a problem, and you might not need to specify hard memory limits. But if you are using a lot of memory (for example with machine learning models), you should check how much memory you are consuming and adjust the number of containers that run in each machine (and maybe add more machines to your cluster).&lt;/p&gt;

&lt;p&gt;If you run multiple processes per container (for example with the official Docker image) you will have to make sure that the number of processes started doesn't consume more memory than what is available.&lt;/p&gt;
&lt;h3&gt;
  
  
  Previous Steps Before Starting and Containers
&lt;/h3&gt;

&lt;p&gt;If you are using containers (e.g. Docker, Kubernetes), then there are two main approaches you can use.&lt;/p&gt;
&lt;h4&gt;
  
  
  Multiple Containers
&lt;/h4&gt;

&lt;p&gt;If you have multiple containers, probably each one running a single process (for example, in a Kubernetes cluster), then you would probably want to have a separate container doing the work of the previous steps in a single container, running a single process, before running the replicated worker containers.&lt;/p&gt;

&lt;p&gt;If you are using Kubernetes, this would probably be an Init Container.&lt;/p&gt;

&lt;p&gt;If in your use case there's no problem in running those previous steps multiple times in parallel (for example if you are not running database migrations, but just checking if the database is ready yet), then you could also just put them in each container right before starting the main process.&lt;/p&gt;
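&lt;p&gt;For that last case, a sketch of such a startup script could look like this (the script and the check_db_ready.py helper are hypothetical examples, not something from the official docs):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  #!/bin/sh
  # hypothetical startup script: run a previous step, then start the app
  python /app/check_db_ready.py
  exec uvicorn app.main:app --host 0.0.0.0 --port 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;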
&lt;h3&gt;
  
  
  Single Container
&lt;/h3&gt;

&lt;p&gt;If you have a simple setup, with a single container that then starts multiple worker processes (or also just one process), then you could run those previous steps in the same container, right before starting the process with the app. The official Docker image supports this internally.&lt;/p&gt;
&lt;h3&gt;
  
  
  Official Docker Image with Gunicorn - Uvicorn
&lt;/h3&gt;

&lt;p&gt;There is an official Docker image that includes Gunicorn running with Uvicorn workers, as detailed in a previous chapter: Server Workers - Gunicorn with Uvicorn.&lt;/p&gt;

&lt;p&gt;This image would be useful mainly in the situations described above in: Containers with Multiple Processes and Special Cases.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        tiangolo/uvicorn-gunicorn-fastapi.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This image has an auto-tuning mechanism included to set the number of worker processes based on the CPU cores available.&lt;/p&gt;

&lt;p&gt;It has sensible defaults, but you can still change and update all the configurations with environment variables or configuration files.&lt;/p&gt;

&lt;p&gt;It also supports running previous steps before starting with a script.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     To see all the configurations and options, go to the Docker image page: tiangolo/uvicorn-gunicorn-fastapi.&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Number of Processes on the Official Docker Image
&lt;/h3&gt;

&lt;p&gt;The number of processes on this image is computed automatically from the CPU cores available.&lt;/p&gt;

&lt;p&gt;This means that it will try to squeeze as much performance from the CPU as possible.&lt;/p&gt;

&lt;p&gt;You can also adjust it with the configurations using environment variables, etc.&lt;/p&gt;

&lt;p&gt;But it also means that, as the number of processes depends on the CPUs available where the container is running, the amount of memory consumed will also depend on that.&lt;/p&gt;

&lt;p&gt;So, if your application consumes a lot of memory (for example with machine learning models), and your server has a lot of CPU cores but little memory, then your container could end up trying to use more memory than what is available, and degrading performance a lot (or even crashing). &lt;/p&gt;
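&lt;p&gt;As a rough sketch, that auto-tuning logic is along these lines (the environment variable names WORKERS_PER_CORE and WEB_CONCURRENCY follow the image's documentation, but the exact defaults may differ between versions):&lt;/p&gt;

```python
import multiprocessing
import os


def default_workers() -> int:
    """Sketch of the official image's worker auto-tuning (approximate)."""
    # An explicit WEB_CONCURRENCY value overrides the computed default.
    explicit = os.getenv("WEB_CONCURRENCY")
    if explicit:
        return int(explicit)
    cores = multiprocessing.cpu_count()
    workers_per_core = float(os.getenv("WORKERS_PER_CORE", "1"))
    # Keep at least 2 workers, so one can keep serving while another is busy.
    return max(int(workers_per_core * cores), 2)


print(default_workers())
```

&lt;p&gt;This is why, on a machine with many CPU cores but little memory, the defaults could start more processes than the available memory can support.&lt;/p&gt;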

&lt;h4&gt;
  
  
  Create a Dockerfile
&lt;/h4&gt;

&lt;p&gt;Here's how you would create a Dockerfile based on this image:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM tiangolo/uvicorn-gunicorn-fastapi:python3.9
COPY ./requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
COPY ./app /app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Bigger Applications
&lt;/h4&gt;

&lt;p&gt;If you followed the section about creating Bigger Applications with Multiple Files, your Dockerfile might instead look like:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM tiangolo/uvicorn-gunicorn-fastapi:python3.9
COPY ./requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
COPY ./app /app/app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  When to Use
&lt;/h3&gt;

&lt;p&gt;You should probably not use this official base image (or any other similar one) if you are using Kubernetes (or others) and you are already setting replication at the cluster level, with multiple containers. In those cases, you are better off building an image from scratch as described above: Build a Docker Image for FastAPI.&lt;/p&gt;

&lt;p&gt;This image would be useful mainly in the special cases described above in Containers with Multiple Processes and Special Cases. For example, if your application is simple enough that setting a default number of processes based on the CPU works well, you don't want to bother with manually configuring the replication at the cluster level, and you are not running more than one container with your app. Or if you are deploying with Docker Compose, running on a single server, etc.&lt;/p&gt;

&lt;h4&gt;
  
  
  Deploy the Container Image
&lt;/h4&gt;

&lt;p&gt;After having a Container (Docker) Image there are several ways to deploy it.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
With Docker Compose in a single server&lt;br&gt;
With a Kubernetes cluster&lt;br&gt;
With a Docker Swarm Mode cluster&lt;br&gt;
With another tool like Nomad&lt;br&gt;
With a cloud service that takes your container image and deploys it&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Image with Poetry
&lt;/h3&gt;

&lt;p&gt;If you use Poetry to manage your project's dependencies, you could use Docker multi-stage building:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    FROM python:3.9 as requirements-stage&lt;br&gt;
    WORKDIR /tmp&lt;br&gt;
    RUN pip install poetry&lt;br&gt;
    COPY ./pyproject.toml ./poetry.lock* /tmp/&lt;br&gt;
    RUN poetry export -f requirements.txt --output &lt;br&gt;
      requirements.txt --without-hashes&lt;br&gt;
    FROM python:3.9&lt;br&gt;
    WORKDIR /code&lt;br&gt;
    COPY --from=requirements-stage /tmp/requirements.txt &lt;br&gt;
      /code/requirements.txt&lt;br&gt;
    RUN pip install --no-cache-dir --upgrade -r &lt;br&gt;
       /code/requirements.txt&lt;br&gt;
    COPY ./app /code/app&lt;br&gt;
    CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "-- &lt;br&gt;
       port", "80"]&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;A Docker stage is a part of a Dockerfile that works as a temporary container image, only used to generate some files to be used later.&lt;/p&gt;

&lt;p&gt;The first stage will only be used to install Poetry and to generate the requirements.txt with your project dependencies from Poetry's pyproject.toml file.&lt;/p&gt;

&lt;p&gt;This requirements.txt file will be used with pip later in the next stage.&lt;/p&gt;

&lt;p&gt;In the final container image only the final stage is preserved. The previous stage(s) will be discarded.&lt;/p&gt;

&lt;p&gt;When using Poetry, it would make sense to use Docker multi-stage builds because you don't really need to have Poetry and its dependencies installed in the final container image, you only need to have the generated requirements.txt file to install your project dependencies.&lt;/p&gt;

&lt;p&gt;Then in the next (and final) stage you would build the image more or less in the same way as described before.&lt;/p&gt;

&lt;h3&gt;
  
  
  Behind a TLS Termination Proxy - Poetry
&lt;/h3&gt;

&lt;p&gt;Again, if you are running your container behind a TLS Termination Proxy (load balancer) like Nginx or Traefik, add the option --proxy-headers to the command:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;       CMD ["uvicorn", "app.main:app", "--proxy-headers", "--host", "0.0.0.0", "--port", "80"]&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h5&gt;
  
  
  SUMMARY
&lt;/h5&gt;

&lt;p&gt;Using container systems (e.g. with Docker and Kubernetes) it becomes fairly straightforward to handle all the deployment concepts:&lt;/p&gt;

&lt;p&gt;HTTPS&lt;br&gt;
Running on startup&lt;br&gt;
Restarts&lt;br&gt;
Replication (the number of processes running)&lt;br&gt;
Memory&lt;br&gt;
Previous steps before starting&lt;/p&gt;

&lt;p&gt;In most cases, you probably won't want to use any base image, and instead build a container image from scratch based on the official Python Docker image.&lt;/p&gt;

&lt;p&gt;By taking care of the order of instructions in the Dockerfile and the Docker cache, you can minimize build times, maximize your productivity (and avoid boredom). &lt;/p&gt;

&lt;p&gt;In certain special cases, you might want to use the official Docker image for FastAPI. &lt;/p&gt;

</description>
      <category>beginners</category>
      <category>docker</category>
      <category>fastapibeginners</category>
      <category>pythonbeginners</category>
    </item>
  </channel>
</rss>
