<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Arctype Team</title>
    <description>The latest articles on DEV Community by Arctype Team (@rettx).</description>
    <link>https://dev.to/rettx</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F573089%2Fc66c6aa6-b080-474e-ab8b-c49175222c5a.PNG</url>
      <title>DEV Community: Arctype Team</title>
      <link>https://dev.to/rettx</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rettx"/>
    <language>en</language>
    <item>
      <title>What is the secure_file_priv Variable in MySQL?</title>
      <dc:creator>Arctype Team</dc:creator>
      <pubDate>Tue, 23 Aug 2022 21:22:55 +0000</pubDate>
      <link>https://dev.to/arctype/what-is-the-securefilepriv-variable-in-mysql-5eh9</link>
      <guid>https://dev.to/arctype/what-is-the-securefilepriv-variable-in-mysql-5eh9</guid>
      <description>&lt;p&gt;If you have ever dived deeper into the settings letting you unleash the full power of MySQL, you might already come across a couple of settings letting you perform a variety of different tasks – one of such settings is the &lt;code&gt;secure_file_priv&lt;/code&gt; variable.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the secure_file_priv Variable and How Does it Work?
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;secure_file_priv&lt;/code&gt; variable is closely associated with bulk importing of data into your MySQL database instances. &lt;a href="https://dev.to__GHOST_URL__/mysql-load-data-infile/"&gt;Remember how we said that &lt;code&gt;LOAD DATA INFILE&lt;/code&gt; is significantly faster than &lt;code&gt;INSERT&lt;/code&gt; statements due to the fact that it comes with many so-called “bells and whistles” unique to itself&lt;/a&gt;? Part of that magic is controlled by &lt;code&gt;secure_file_priv&lt;/code&gt;. The variable usually resides in your my.cnf file (commonly found in &lt;code&gt;/etc/mysql/&lt;/code&gt; or &lt;code&gt;/etc/&lt;/code&gt; on Linux) and looks something like the following (this example refers to the variable being used in a Windows environment):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;secure_file_priv="d:/wamp64/tmp"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This variable, simply put, restricts the directories that can be used to load data into your MySQL database instance. In other words, if you run a &lt;code&gt;LOAD DATA INFILE&lt;/code&gt; query and the file you load does not reside in this directory, MySQL will come back with an error like so:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ERROR 1290 (HY000): The MySQL server is running with the --secure-file-priv option so it cannot execute this statement&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To overcome this error, you either need to remove the &lt;code&gt;secure_file_priv&lt;/code&gt; setting from your my.cnf file (and restart the server, since this variable cannot be changed at runtime), or load your data from the directory specified in the value of the variable. Once you do so, your data should be loaded in without any issues!&lt;/p&gt;
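
&lt;p&gt;To tie the pieces together, here is a minimal sketch of how the setting and the statement interact on a typical Linux setup (the table name &lt;code&gt;demo_table&lt;/code&gt; and the CSV file are hypothetical; &lt;code&gt;/var/lib/mysql-files/&lt;/code&gt; is the default directory on many Linux installations):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;secure_file_priv="/var/lib/mysql-files/"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;LOAD DATA INFILE '/var/lib/mysql-files/demo.csv' INTO TABLE demo_table FIELDS TERMINATED BY ',';&lt;/code&gt;&lt;/p&gt;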

&lt;h2&gt;
  
  
  Other Concerns
&lt;/h2&gt;

&lt;p&gt;Aside from loading data into your MySQL database instances using &lt;code&gt;LOAD DATA INFILE&lt;/code&gt;, the value of this variable is also relevant when exporting data out of MySQL using &lt;code&gt;SELECT … INTO OUTFILE&lt;/code&gt;. The variable can also be used to disable import and export operations via &lt;code&gt;LOAD DATA INFILE&lt;/code&gt; and &lt;code&gt;SELECT ... INTO OUTFILE&lt;/code&gt; altogether: simply set it to &lt;code&gt;NULL&lt;/code&gt; in such a case. You can also run a query like &lt;code&gt;SHOW VARIABLES LIKE 'secure_file_priv'&lt;/code&gt; or &lt;code&gt;SELECT @@secure_file_priv&lt;/code&gt; in order to observe the directory that this variable is set to. Here is what Arctype would come up with:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vITK7lcf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2022/08/image-8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vITK7lcf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2022/08/image-8.png" alt="" width="880" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the directory is indeed set to &lt;code&gt;"d:/wamp64/tmp"&lt;/code&gt;, as in the example we gave you above.&lt;/p&gt;
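
&lt;p&gt;The export side works the same way: a query like the one below would write a (hypothetical) table's contents into the permitted directory, while any other target path would trigger the same 1290 error discussed earlier:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;SELECT * FROM demo_table INTO OUTFILE 'd:/wamp64/tmp/export.csv' FIELDS TERMINATED BY ',';&lt;/code&gt;&lt;/p&gt;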

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;secure_file_priv&lt;/code&gt; variable denotes the directory from which data files can be loaded into a given database instance using &lt;code&gt;LOAD DATA INFILE&lt;/code&gt;, and to which data can be written when exporting with &lt;code&gt;SELECT ... INTO OUTFILE&lt;/code&gt;. MySQL allows you to observe the value of this parameter at all times by running either &lt;code&gt;SELECT @@secure_file_priv&lt;/code&gt; or a query like &lt;code&gt;SHOW VARIABLES LIKE 'secure_file_priv'&lt;/code&gt;. The value of this parameter is worth keeping an eye on, even if you think you will never need to load bigger data sets into a given database instance or export such data sets from it.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>tutorial</category>
      <category>mysql</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How to Use SQL to Analyze And Visualize Data?</title>
      <dc:creator>Arctype Team</dc:creator>
      <pubDate>Thu, 18 Aug 2022 21:39:25 +0000</pubDate>
      <link>https://dev.to/arctype/how-to-use-sql-to-analyze-and-visualize-data-ced</link>
      <guid>https://dev.to/arctype/how-to-use-sql-to-analyze-and-visualize-data-ced</guid>
      <description>&lt;p&gt;The goal of every business is to perform efficiently, maximize profit and make strategically-guided decisions. In the current business world, the competition is high and every business needs every edge and advantage they can get to improve their odds of success.&lt;/p&gt;

&lt;p&gt;Companies that want to stay in business and thrive in the current business world must employ data analysis. Data analysis plays a role in making decisions more scientific and helping businesses operate more effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Data Analysis?
&lt;/h2&gt;

&lt;p&gt;Data analysis can be described as the process of cleaning, transforming, and processing raw data to extract actionable and relevant information that helps businesses make informed decisions. The process of data analysis provides useful insights and statistics that can be presented in tables, charts, and graphs.&lt;/p&gt;

&lt;p&gt;In this article, you will learn how to get useful information from a collection of data by running SQL queries. Also, you will learn how to present that information using graphs and charts. I will be using a dataset of different stores of a supermarket company, PostgreSQL, and the Arctype SQL client.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is PostgreSQL and Arctype?
&lt;/h2&gt;

&lt;p&gt;PostgreSQL, also known as Postgres, is a free and open-source relational database management system. Arctype is a fast and easy-to-use database management tool for writing SQL queries, building dashboards, and sharing data with your team.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting Up PostgreSQL
&lt;/h3&gt;

&lt;p&gt;To use Postgres in this tutorial, navigate to the &lt;a href="https://www.postgresql.org/download/"&gt;Postgres download page&lt;/a&gt; and download it for your operating system. It is available for Linux, macOS, and Windows.&lt;/p&gt;

&lt;p&gt;After installing Postgres on your machine, run the command below to confirm the installation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should be able to see the version of Postgres installed on your system. Now run the command shown below to open the Postgres prompt.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo -u postgres psql postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once Postgres has opened on the command line, run the command below to set a password for postgres.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\password postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To quit the Postgres prompt, run the command shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\q
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let us now create a database called &lt;em&gt;“supermarket”&lt;/em&gt; in PostgreSQL that will be used to store data from the supermarket stores CSV file. We run the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE supermarket;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run the command shown below to see if the database you created is on the list of databases in Postgres.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\l
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see from the screenshot below, the &lt;em&gt;supermarket&lt;/em&gt; database was created in PostgreSQL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BX9taBV---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh3.googleusercontent.com/at_lleFo7-crECRgrlOD6_I95JgM1T8EOk1lERJGgCvwpHQgbIPEJMVs5JL5rInCd1A7tsmJ7cMwymnu4PbLwpdBt9ecQLQZZUnFJAXWfEdTVeNKErZq91HQr1Lj-p8fpXnYrhrijTlHf1j_" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BX9taBV---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh3.googleusercontent.com/at_lleFo7-crECRgrlOD6_I95JgM1T8EOk1lERJGgCvwpHQgbIPEJMVs5JL5rInCd1A7tsmJ7cMwymnu4PbLwpdBt9ecQLQZZUnFJAXWfEdTVeNKErZq91HQr1Lj-p8fpXnYrhrijTlHf1j_" alt="" width="880" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Arctype SQL Client
&lt;/h2&gt;

&lt;p&gt;To use Arctype, navigate to &lt;a href="https://arctype.com"&gt;the Arctype homepage&lt;/a&gt; and download the Arctype app on your machine. The application is available on &lt;a href="https://arctype.com/api/download/windows"&gt;Windows&lt;/a&gt;, &lt;a href="https://arctype.com/api/download/mac"&gt;MacOS&lt;/a&gt;, and &lt;a href="https://arctype.com/api/download/linux"&gt;Linux&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S7crDvFK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh4.googleusercontent.com/g4Wshi5UHGpYNrW6zWhucPuz84XmEY7vduyUVlpqSXtcDw4VpZbeTK1tS8Q0_x8t5DOLs_OZ9DDCuAHErtYhE43z2jpjzvGolMZy5fpxdG2O4ZHeJd-rJktUD7mmn-9sWJ1us_asfGWruxg2" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S7crDvFK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh4.googleusercontent.com/g4Wshi5UHGpYNrW6zWhucPuz84XmEY7vduyUVlpqSXtcDw4VpZbeTK1tS8Q0_x8t5DOLs_OZ9DDCuAHErtYhE43z2jpjzvGolMZy5fpxdG2O4ZHeJd-rJktUD7mmn-9sWJ1us_asfGWruxg2" alt="" width="880" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the app has been installed, open it and you will be prompted to create an account as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KP8LOJxv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh3.googleusercontent.com/NuO_Nr0SNYcMk8ym5qgqSwPzGR-sL_QcT96-Xdor6CyZUhNerPXrAko_yu7lF7SJmnlun5HKtIx9Mxx1Qxa2KcTbtimqO-kxjKvPtc2ZgUr2c3tCVKCnql0h-fZYxnqCmyeyo6LfeSV7NfZH" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KP8LOJxv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh3.googleusercontent.com/NuO_Nr0SNYcMk8ym5qgqSwPzGR-sL_QcT96-Xdor6CyZUhNerPXrAko_yu7lF7SJmnlun5HKtIx9Mxx1Qxa2KcTbtimqO-kxjKvPtc2ZgUr2c3tCVKCnql0h-fZYxnqCmyeyo6LfeSV7NfZH" alt="" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating the Arctype SQL Client With Postgres
&lt;/h2&gt;

&lt;p&gt;Once you have created an account, you will be prompted to select your database. In this case, select Postgres.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Yh5kSscj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh4.googleusercontent.com/kMJyE5muS8ztJWfSQGbZPRF3FPFzdb-X2WOsHMHixCTwZi4zL8IycNqKrg4Cf8xGreh5IgGeIEgi2kPOm3x-hHReZnJAVxHrKUt5xkET9z_CH3w9z24TbPNdYE9eYZWC9hmg9a6oUg8XCYpR" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Yh5kSscj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh4.googleusercontent.com/kMJyE5muS8ztJWfSQGbZPRF3FPFzdb-X2WOsHMHixCTwZi4zL8IycNqKrg4Cf8xGreh5IgGeIEgi2kPOm3x-hHReZnJAVxHrKUt5xkET9z_CH3w9z24TbPNdYE9eYZWC9hmg9a6oUg8XCYpR" alt="" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to add Postgres credentials to connect Arctype to your Postgres database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0fVdGlBC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh3.googleusercontent.com/q9DMdEzNb_SOz_CUmJIijcSzVr-Sgi-DUX5RGH85owGc83_B6ALVWXG0MmqYlWWBVqaBG0PbKiHAo4uNupaXrNFJoeT_7AQzaRkPlQjb1IqIby9axr3YRDT3bHiknJKY2MtgVlfDhyyzhWjJ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0fVdGlBC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh3.googleusercontent.com/q9DMdEzNb_SOz_CUmJIijcSzVr-Sgi-DUX5RGH85owGc83_B6ALVWXG0MmqYlWWBVqaBG0PbKiHAo4uNupaXrNFJoeT_7AQzaRkPlQjb1IqIby9axr3YRDT3bHiknJKY2MtgVlfDhyyzhWjJ" alt="" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fill in the fields according to the example shown below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: Supermarkets Dataset&lt;/li&gt;
&lt;li&gt;Host: 127.0.0.1&lt;/li&gt;
&lt;li&gt;Port: 5432&lt;/li&gt;
&lt;li&gt;User: postgres&lt;/li&gt;
&lt;li&gt;Password: &lt;em&gt;Postgres password&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Database: supermarket&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Use the credentials you created when creating a database in the Postgres prompt.&lt;/p&gt;

&lt;p&gt;Once you have entered the Postgres credentials, click the “Test Connection” button at the bottom to make sure credentials are correct. If the credentials are correct, click the “save” button to the right of the “Test Connection” button to complete Postgres and Arctype SQL client integration as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c70yLNZQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/JoYLMIXksXqrihbbkzHA_lRgDis9FQk-tkKPBbHucQUVdxzfGnvHcHtCXMKNIA-lGBPAdxOdZobbCGUya4PSOMZvIomYRJUI1fv3io61F3Dw05uYGGltZZt0Lj1M5NVy0d_4ExPwpD53XnBg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c70yLNZQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/JoYLMIXksXqrihbbkzHA_lRgDis9FQk-tkKPBbHucQUVdxzfGnvHcHtCXMKNIA-lGBPAdxOdZobbCGUya4PSOMZvIomYRJUI1fv3io61F3Dw05uYGGltZZt0Lj1M5NVy0d_4ExPwpD53XnBg" alt="" width="880" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Importing CSV Files Into Postgres Using Arctype
&lt;/h2&gt;

&lt;p&gt;We will be importing the supermarkets stores dataset CSV file that you can download by navigating to this &lt;a href="https://www.kaggle.com/datasets/surajjha101/stores-area-and-sales-data"&gt;Kaggle web page&lt;/a&gt; as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fXvURZuG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/GM1rvwzHNhHwtYYe741Y3Ef3pWdpbHnoJwEkRIVUd9fn1TTJ6cOdyKr5d4D2pvoXdtX_CNV9dw8x1MB011wd_E55cUGqW2oil5Bksrmh4xCTlaA-lwng4FW-Jb4c5p8yzFkVQC0z8_T3OUu4" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fXvURZuG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/GM1rvwzHNhHwtYYe741Y3Ef3pWdpbHnoJwEkRIVUd9fn1TTJ6cOdyKr5d4D2pvoXdtX_CNV9dw8x1MB011wd_E55cUGqW2oil5Bksrmh4xCTlaA-lwng4FW-Jb4c5p8yzFkVQC0z8_T3OUu4" alt="" width="880" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To import the CSV file into Postgres using Arctype, navigate to the “Tables” view. Click on the three dots next to the table search field. Then select “Import CSV to Table” as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cl0nXfzU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh4.googleusercontent.com/hiFVTSiZU0ra-IpJ7_Y3YVqiGlyZH4muub3F0ZdU-WPUx42_x0U62wmPdyYPY8j1g-2I1uevRcA85EGiBbFvwMNJz87rTh9cIbIhQgzYbPbQD2f954lhPbQ_EfA6E31EUqbEXOdBnPaKnJXA" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cl0nXfzU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh4.googleusercontent.com/hiFVTSiZU0ra-IpJ7_Y3YVqiGlyZH4muub3F0ZdU-WPUx42_x0U62wmPdyYPY8j1g-2I1uevRcA85EGiBbFvwMNJz87rTh9cIbIhQgzYbPbQD2f954lhPbQ_EfA6E31EUqbEXOdBnPaKnJXA" alt="" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next use the file selection window that appears to select the CSV file you downloaded. Select your file and click the “Open” button as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J2EqYA98--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/gSDlAzXWS27I2o-bLehtkAw8cjP2MLAWedFGrLDWqzFW-eHqQTBt0hr-o2CAhDjLIIv1MbjgesxzcwzLD9tMVSXuPi5laHUblVadzPrrxlfEyPMPURupGk5mLaCcgcrPOlCABpV16GmRbCW-" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J2EqYA98--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/gSDlAzXWS27I2o-bLehtkAw8cjP2MLAWedFGrLDWqzFW-eHqQTBt0hr-o2CAhDjLIIv1MbjgesxzcwzLD9tMVSXuPi5laHUblVadzPrrxlfEyPMPURupGk5mLaCcgcrPOlCABpV16GmRbCW-" alt="" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have selected and opened the CSV file, Arctype will display the first ten rows of the CSV file data for preview. If everything looks correct, click the “Accept” button as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8M-s0A3F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/PTn8S7hUx9H3P83vc2K4MKDlfi1VQrhL46v1aEvev3wX_tHCSWpATm85omqtYG5RtMIPpXoEpizjW6yLOmJqMl3RZWuXvGxzNx4xLDm2j-VSqV-ikgv616RLsLGaBQBBDixitD1kYyJJHUi0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8M-s0A3F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/PTn8S7hUx9H3P83vc2K4MKDlfi1VQrhL46v1aEvev3wX_tHCSWpATm85omqtYG5RtMIPpXoEpizjW6yLOmJqMl3RZWuXvGxzNx4xLDm2j-VSqV-ikgv616RLsLGaBQBBDixitD1kYyJJHUi0" alt="" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After clicking the “Accept” button, you will be redirected to a configuration window as shown below. The window allows you to determine where you want the data to be stored and to select the schema.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fIRLxiOA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/Qk7AXy4QQz4b3KcJMDYuQbkGUWy_vd9eGBoIBcbWNGCV_Wh2bZgQ0CF_1i7MaBfvmAhJFBKKM7DW1kgcqEjq_zg-pLaJ_lLU_twQhAofOTvQP40iEOt8dZtaJJX96Ww2Q9lreY-5vYSVEtMK" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fIRLxiOA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/Qk7AXy4QQz4b3KcJMDYuQbkGUWy_vd9eGBoIBcbWNGCV_Wh2bZgQ0CF_1i7MaBfvmAhJFBKKM7DW1kgcqEjq_zg-pLaJ_lLU_twQhAofOTvQP40iEOt8dZtaJJX96Ww2Q9lreY-5vYSVEtMK" alt="" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If everything looks good, click the “Import CSV” button to start importing your CSV file. Once the table data has been successfully imported, everything should look as shown below.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1qLKVue0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh3.googleusercontent.com/W3h9X2CpGcUe7-YRhUkECYygK_z5Ha29SFI1HaJGQvZNjazfEgY0ivKkVJ9RfseP5M13Pl5XEjhPDb994FPINEs7UZE0Alir5wR7jA6mxqctGxzc7dIM9dTdClOLoiu3VwThpFKOtXGBcfrx" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1qLKVue0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh3.googleusercontent.com/W3h9X2CpGcUe7-YRhUkECYygK_z5Ha29SFI1HaJGQvZNjazfEgY0ivKkVJ9RfseP5M13Pl5XEjhPDb994FPINEs7UZE0Alir5wR7jA6mxqctGxzc7dIM9dTdClOLoiu3VwThpFKOtXGBcfrx" alt="" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Running SQL Queries With Arctype
&lt;/h2&gt;

&lt;p&gt;SQL queries allow you to display or modify data from databases. In this section, I will show you some queries you can run to create tables, charts or graphs to visualize the supermarkets data.&lt;/p&gt;

&lt;p&gt;Let us assume you are the owner of the supermarkets. You would want to know interesting stats such as the top 10 performing stores by sales. To get such insights, you can run a SQL query that creates a table of top ten performing stores. Then you can visualize the data by creating a graph or a chart.&lt;/p&gt;

&lt;p&gt;To run a SQL query using Arctype, click the “Create query” button on the top right of the Arctype window. A screen will then appear where you can run a query as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lUUyZezz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/flInsQqFcx5nNdEcl7zLRigpj2Q6tSR1KrSdaXa52B-y317ReTRgoSM9Uwx5ya1O3z1n-eiZNPEGNfNZq7c0ASYI3sA6Qxv5iub3-s6sKNSrEogAgraQIZpgxD_AaPP-cKyF6ahsh71uwiaz" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lUUyZezz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/flInsQqFcx5nNdEcl7zLRigpj2Q6tSR1KrSdaXa52B-y317ReTRgoSM9Uwx5ya1O3z1n-eiZNPEGNfNZq7c0ASYI3sA6Qxv5iub3-s6sKNSrEogAgraQIZpgxD_AaPP-cKyF6ahsh71uwiaz" alt="" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give your query a descriptive name like “Top 10 Stores By Sales.” Then write the following SQL query to determine the top 10 stores by sales from the stores data.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;SELECT store_area, store_sales AS Sales FROM stores GROUP BY store_id, store_sales ORDER BY store_sales DESC LIMIT 10;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The query selects the &lt;em&gt;store_area&lt;/em&gt; and &lt;em&gt;store_sales&lt;/em&gt; columns from the &lt;em&gt;stores&lt;/em&gt; table, renaming &lt;em&gt;store_sales&lt;/em&gt; to Sales. The rows are then grouped by the &lt;em&gt;store_id&lt;/em&gt; and &lt;em&gt;store_sales&lt;/em&gt; (Sales) columns. After that, the data is ordered by &lt;em&gt;store_sales&lt;/em&gt;, arranging sales from highest to lowest. Finally, the query limits the results to the top 10 selling store areas.&lt;/p&gt;

&lt;p&gt;If everything goes well, the query should create a table with two columns named &lt;em&gt;store_area&lt;/em&gt; and &lt;em&gt;sales&lt;/em&gt; as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BGKl7N8q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/I82UKTCf1HIt7SPPG3sW7dI-zyBlPIA7AA_IQX7MaWUX547uBVUedpoutvOf0mYBUQcxNeB2dMJIJ3sVAyGBhEy99YANqKdn4iXJNgQHYmP_qu0430qcg50Y0w0V0K5ieFZLYULBStXcY8p4" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BGKl7N8q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/I82UKTCf1HIt7SPPG3sW7dI-zyBlPIA7AA_IQX7MaWUX547uBVUedpoutvOf0mYBUQcxNeB2dMJIJ3sVAyGBhEy99YANqKdn4iXJNgQHYmP_qu0430qcg50Y0w0V0K5ieFZLYULBStXcY8p4" alt="" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Charts Using Arctype
&lt;/h2&gt;

&lt;p&gt;Let us now create a chart from the table created by the SQL query. To create a chart using Arctype, select the “Chart” option at the bottom right of the SQL editor as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JYVctF1b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/Sfa2nf3s5XxX_3rg1NYhkWlC4n5wLI6cwij1eiYgqXEFXZAmD7825DhuWdw4DmezDoGtzvx9oSy-isnw676CSjfKGMs2OgI6aFgLyGnj4y5R-gFWqS4x451gLn5bia7qZN_ND2cyEU8DEm7y" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JYVctF1b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/Sfa2nf3s5XxX_3rg1NYhkWlC4n5wLI6cwij1eiYgqXEFXZAmD7825DhuWdw4DmezDoGtzvx9oSy-isnw676CSjfKGMs2OgI6aFgLyGnj4y5R-gFWqS4x451gLn5bia7qZN_ND2cyEU8DEm7y" alt="" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the right side of the screen, you can select the type of chart you want. In this case, we select “Bar Chart.” Then below the “Select Chart Type” option, you have the “Specify Columns For The Chart” option. In this section, drag the &lt;em&gt;store_area&lt;/em&gt; column to the X-axis column and the &lt;em&gt;sales&lt;/em&gt; column to the Y-axis column. &lt;a href="https://arctype.com"&gt;Arctype&lt;/a&gt; should help you come up with a chart similar to the one shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gfPjYqzT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/UQ3LJMH6yKA0VvOX22RdQYoGFOMjYu5NOn6YWk9l1a7J_fcNoJV2jlnGIKjH4GZ4V4Hfp0M03HlTHCFB-YaesBOtgTdk7KWnYZx56g_UdRVIQKjOk1oeHpJm881rb_E_Gthx-nqO3ExQiuo-" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gfPjYqzT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/UQ3LJMH6yKA0VvOX22RdQYoGFOMjYu5NOn6YWk9l1a7J_fcNoJV2jlnGIKjH4GZ4V4Hfp0M03HlTHCFB-YaesBOtgTdk7KWnYZx56g_UdRVIQKjOk1oeHpJm881rb_E_Gthx-nqO3ExQiuo-" alt="" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, you have learned how to create a database in PostgreSQL, import a CSV file into the DBMS, and run SQL queries to analyze data. Now, I suggest you try to create different SQL queries to come up with some more exciting insights and use &lt;a href="https://arctype.com"&gt;Arctype&lt;/a&gt; to create charts.&lt;/p&gt;
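
&lt;p&gt;As a starting point, here is one more query sketch you could try against the same &lt;em&gt;stores&lt;/em&gt; table – it lists the ten weakest stores by sales (the column names are the same ones used earlier in this tutorial), and you could chart the result in exactly the same way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT store_area, store_sales AS sales
FROM stores
ORDER BY store_sales ASC
LIMIT 10;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;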

</description>
      <category>programming</category>
      <category>tutorial</category>
      <category>datascience</category>
      <category>database</category>
    </item>
    <item>
      <title>CHAR vs. VARCHAR: What’s the Difference?</title>
      <dc:creator>Arctype Team</dc:creator>
      <pubDate>Tue, 16 Aug 2022 22:33:50 +0000</pubDate>
      <link>https://dev.to/arctype/char-vs-varchar-whats-the-difference-3h2d</link>
      <guid>https://dev.to/arctype/char-vs-varchar-whats-the-difference-3h2d</guid>
      <description>&lt;p&gt;If you’ve found yourself in the MySQL space even for a little while, you already know that MySQL and other flavors of it offer a couple of data types for developers to choose from. A wide majority of the most popular data types offered by MySQL are related to text – in this space, MySQL offers &lt;code&gt;VARCHAR&lt;/code&gt;, &lt;code&gt;CHAR&lt;/code&gt;, &lt;code&gt;TINYTEXT&lt;/code&gt;, &lt;code&gt;TEXT&lt;/code&gt;, and other options. In this blog, we are going to go through the similarities and differences between two of some of the most popular options: the aforementioned &lt;code&gt;VARCHAR&lt;/code&gt; and &lt;code&gt;CHAR&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are CHAR and VARCHAR?
&lt;/h2&gt;

&lt;p&gt;Before diving deeper into the differences between the two data types, we should first clarify what they are, right? Well, everything in this space is pretty simple because the names of the data types themselves suggest what they are able to do – &lt;code&gt;CHAR&lt;/code&gt; stands for “character values” and &lt;code&gt;VARCHAR&lt;/code&gt; stands for “variable character values.”&lt;/p&gt;

&lt;p&gt;The key distinction is that &lt;code&gt;CHAR&lt;/code&gt; is a fixed-length data type while &lt;code&gt;VARCHAR&lt;/code&gt; is a variable-length one: we create both with a declared maximum length, which can range from 0 to 255 characters if &lt;code&gt;CHAR&lt;/code&gt; is being used or from 0 to 65,535 characters if &lt;code&gt;VARCHAR&lt;/code&gt; is in use. The bigger our data gets, the more space our characters will occupy (this probably won’t be a very large problem at first; however, if your data grows, you might &lt;a href="https://dev.to__GHOST_URL__/mysql-schema/"&gt;want to keep an eye on both the structure of your database&lt;/a&gt; and &lt;a href="https://dev.to__GHOST_URL__/mysql-storage-engine-big-data/"&gt;learn how to work with big data sets&lt;/a&gt;), though this is common sense. What’s not so widely discussed, though, is the set of core differences between the two data types – after all, even their names are similar, so what’s the deal? Bear with us while we guide you through their world...&lt;/p&gt;

&lt;h2&gt;
  
  
  CHAR vs. VARCHAR
&lt;/h2&gt;

&lt;p&gt;At first glance, both &lt;code&gt;CHAR&lt;/code&gt; and &lt;code&gt;VARCHAR&lt;/code&gt; might seem very similar – after all, they are defined in exactly the same fashion, aren’t they? &lt;code&gt;demo CHAR(5)&lt;/code&gt; would create a column named &lt;code&gt;demo&lt;/code&gt; with a &lt;code&gt;CHAR&lt;/code&gt; data type and a length of &lt;code&gt;5&lt;/code&gt;, and &lt;code&gt;demo VARCHAR(5)&lt;/code&gt; would create a column named &lt;code&gt;demo&lt;/code&gt; with a &lt;code&gt;VARCHAR&lt;/code&gt; data type of the same length.&lt;/p&gt;
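
&lt;p&gt;To make that concrete, here is a minimal sketch of a table holding both definitions side by side (the table and column names are hypothetical):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CREATE TABLE demo_types (fixed_col CHAR(5), variable_col VARCHAR(5));&lt;/code&gt;&lt;/p&gt;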

&lt;p&gt;However, when we dive deeper into these, we would notice that MySQL hides their differences inside of its architecture:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;CHAR&lt;/th&gt;
&lt;th&gt;VARCHAR&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Fixed length&lt;/td&gt;
&lt;td&gt;Variable length&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Always takes the same amount of space regardless of what's stored&lt;/td&gt;
&lt;td&gt;Amount of space taken varies: either 1 or 2 bytes + data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Can hold a maximum of 255 characters&lt;/td&gt;
&lt;td&gt;Can hold up to 65,535 bytes (shared with the maximum row size)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Uses static memory allocation&lt;/td&gt;
&lt;td&gt;Uses dynamic memory allocation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As you can see, some of the differences are apparent, some are not – for example, the name alone tells us that &lt;code&gt;VARCHAR&lt;/code&gt; is a variable-length storage type, but did you know that &lt;code&gt;CHAR&lt;/code&gt; can only hold 255 characters? One of the reasons for that limit is that &lt;code&gt;CHAR&lt;/code&gt; always takes up exactly the same amount of storage space – how could MySQL guarantee that a column with a &lt;code&gt;CHAR&lt;/code&gt; data type will not take up 5MB of storage if it were allowed to hold 50,000 characters? With 255, that’s easy to do – the length of a &lt;code&gt;CHAR&lt;/code&gt; column is defined when we create the table and it doesn’t change no matter what happens. This has its own downsides though – we can’t store very large text values. For you to imagine what &lt;code&gt;CHAR&lt;/code&gt; can handle, head over to a Lorem Ipsum generator and observe the paragraphs: &lt;code&gt;CHAR&lt;/code&gt; can handle approximately one-third of one paragraph – in other words, four to five sentences.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Eg8bD5He--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-53.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Eg8bD5He--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-53.png" alt="" width="880" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have highlighted in blue the text that &lt;code&gt;CHAR&lt;/code&gt; could handle without issues – see the problem? Such a data type might be a fit if we are storing very small amounts of text, but not if we are working with larger-scale systems, or even if we need to handle a couple of login forms (remember, we need to hash our passwords so they are not human-readable – do we really want to risk not being able to store them?)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In a nutshell, &lt;code&gt;CHAR&lt;/code&gt; is a fit if we are strapped on storage space and do not plan on storing much data inside of our column.&lt;/strong&gt;&lt;/p&gt;
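&lt;p&gt;The fixed-length behavior can be sketched in plain JavaScript (an illustration of the concept, not of MySQL’s actual storage engine):&lt;/p&gt;

```javascript
// Sketch of CHAR(n) fixed-length storage: every value is right-padded
// with spaces to exactly n characters, so "abc" and "abcdefghij" both
// occupy the same amount of space in a CHAR(10) column.
function charStore(value, n) {
  return value.padEnd(n, " ");
}

console.log(charStore("abc", 10).length);        // 10
console.log(charStore("abcdefghij", 10).length); // 10
```

&lt;p&gt;Every stored value occupies the full declared width – which is exactly why capping the width at 255 keeps the type predictable.&lt;/p&gt;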

&lt;p&gt;With &lt;code&gt;VARCHAR&lt;/code&gt;, however, it’s a different story altogether – as the name suggests, it’s a variable-length data type, and a row can hold up to 65,535 bytes of data. The number of characters this data type can handle is not limited to 255, although that’s the length most developers reach for – a &lt;code&gt;VARCHAR&lt;/code&gt; column can just as easily hold 6,000 or 10,000 characters, depending on the collation and character set in use (for details on that topic, &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/column-count-limit.html"&gt;refer to the docs&lt;/a&gt;). The bottom line is this: we can create tables with &lt;code&gt;VARCHAR&lt;/code&gt; columns as long as the combined row size doesn’t exceed 65,535 bytes – simple as that. For example, if we have seven columns, six of them 10,000 characters in size and the seventh 6,000, we are in trouble – the combined row size would be 66,000, which exceeds 65,535. You get the point.&lt;/p&gt;

&lt;p&gt;Also, keep in mind that &lt;code&gt;VARCHAR&lt;/code&gt; stores a length prefix of 1 or 2 bytes on top of the data itself – remember that when designing systems with bigger sets of data, but other than that, do not worry too much: with modern hard drives and solid state drives, this overhead is not a problem at all. Use them and you should be fine.&lt;/p&gt;
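&lt;p&gt;That length prefix can be sketched as follows (a plain-JavaScript approximation of the storage math, not InnoDB’s exact on-disk format):&lt;/p&gt;

```javascript
// Sketch: a VARCHAR value is stored as a length prefix plus the data itself.
// Columns that can hold at most 255 bytes use a 1-byte prefix; longer
// columns need a 2-byte prefix to encode the length.
function varcharStorageBytes(value, maxColumnBytes) {
  const dataBytes = Buffer.byteLength(value, "utf8");
  // Math.min trick: equals maxColumnBytes exactly when it is at most 255
  const prefixBytes = Math.min(maxColumnBytes, 255) === maxColumnBytes ? 1 : 2;
  return prefixBytes + dataBytes;
}

console.log(varcharStorageBytes("hello", 100));  // 6 = 1-byte prefix + 5 data bytes
console.log(varcharStorageBytes("hello", 1000)); // 7 = 2-byte prefix + 5 data bytes
```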

&lt;p&gt;The last difference is how &lt;code&gt;CHAR&lt;/code&gt; and &lt;code&gt;VARCHAR&lt;/code&gt; allocate memory for their data – &lt;code&gt;CHAR&lt;/code&gt; allocates memory statically, meaning that once the space is allocated its size cannot change, while &lt;code&gt;VARCHAR&lt;/code&gt; does things the other way around and allocates memory dynamically at runtime.&lt;/p&gt;

&lt;p&gt;The bottom line is this, though – &lt;strong&gt;&lt;code&gt;VARCHAR&lt;/code&gt; is mostly used in medium-sized to big projects, while &lt;code&gt;CHAR&lt;/code&gt; is generally only used when we are strapped for storage space or our project is very small.&lt;/strong&gt; Sacrificing a couple thousand bytes here or there shouldn’t worry a database administrator – disk space is cheap and easy to acquire. That said, if we are working with bigger sets of data, we should size our data types accordingly. Other than that, there shouldn’t be any major issues!&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Both &lt;code&gt;CHAR&lt;/code&gt; and &lt;code&gt;VARCHAR&lt;/code&gt; are data types that are used very frequently – however, to use them properly, we must understand that &lt;code&gt;CHAR&lt;/code&gt; is a better fit for fixed-length data that does not exceed 255 characters, while &lt;code&gt;VARCHAR&lt;/code&gt; is a fit for projects of a bigger scale. The amount of space these data types occupy varies as well, but that shouldn’t cause many issues unless we are working with bigger data sets. If you find yourself working with those, head over to &lt;a href="https://dev.to__GHOST_URL__/mysql-storage-engine-big-data/"&gt;our big data tutorial&lt;/a&gt;, and we will walk you through everything you need to know.&lt;/p&gt;

&lt;p&gt;If you liked what you’ve read so far, we have good news – since &lt;a href="https://arctype.com"&gt;Arctype is a SQL client&lt;/a&gt;, it can very easily help you take care of the data types you have no matter what they are – simply glance over at the left side of the tool and expand the table you are dealing with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rjcSg-Zn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-54.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rjcSg-Zn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-54.png" alt="" width="624" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See how easy everything becomes when &lt;a href="https://arctype.com/"&gt;Arctype&lt;/a&gt; is in use? Give it a try today, run a couple of queries (hint: the “/” sign would return a bunch of statements that you can use to build your queries if you’re not sure where to begin), and we will see you in the next one!&lt;/p&gt;

</description>
      <category>guide</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>productivity</category>
    </item>
    <item>
      <title>MySQL’s UTF-8 Isn’t Real</title>
      <dc:creator>Arctype Team</dc:creator>
      <pubDate>Thu, 11 Aug 2022 21:46:00 +0000</pubDate>
      <link>https://dev.to/arctype/mysqls-utf-8-isnt-real-4352</link>
      <guid>https://dev.to/arctype/mysqls-utf-8-isnt-real-4352</guid>
<description>&lt;p&gt;If you are familiar with MySQL, you probably already know that it comes with multiple character encodings. One of the main encodings in the MySQL world, and on the web in general, is UTF-8 – it is used in almost every web and mobile application and is widely considered the “default” option as far as MySQL is concerned. UTF-8 also supports multiple character sets and has a couple of other features unique to itself: in this blog, we are going to go through them, with a focus on one peculiarity in particular – the fact that MySQL’s “UTF-8” is not the “real” UTF-8. Confused? Read on!&lt;/p&gt;

&lt;h2&gt;
  
  
  What is UTF-8?
&lt;/h2&gt;

&lt;p&gt;To start with, UTF-8 is one of the most common character encodings. In UTF-8, each character that we have is represented by a range of one to four bytes. As such, we have a couple of character sets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;utf8, which was historically considered the “de-facto” standard for MySQL. Essentially, utf8 is an “alias” for utf8mb3.&lt;/li&gt;
&lt;li&gt;utf8mb3 which uses one to three bytes per character.&lt;/li&gt;
&lt;li&gt;utf8mb4 which uses one to four bytes per character.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;utf8 was the default character set back when MySQL was starting out, and everything was great. However, talk to MySQL database administrators these days and you will quickly realize that this is no longer the case. Simply put, utf8 is not the default character set anymore – utf8mb4 is.&lt;/p&gt;

&lt;h2&gt;
  
  
  utf8 vs. utf8mb4
&lt;/h2&gt;

&lt;p&gt;The core reason for the separation of utf8 and utf8mb4 is that MySQL’s utf8 is different from proper UTF-8 encoding: it doesn’t offer full Unicode support, which can lead to data loss or even security issues. That failure to fully support Unicode is the real kicker – proper UTF-8 needs up to four bytes per character, while the “utf8” encoding offered by MySQL only supports three. See the issue on that front? In other words, if we want to store smilies represented like so:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farctype.com%2Fblog%2Fcontent%2Fimages%2F2021%2F10%2Fimage-44.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farctype.com%2Fblog%2Fcontent%2Fimages%2F2021%2F10%2Fimage-44.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We cannot do it – it’s not that MySQL will store it as “???” or similar; it won’t store it at all and will respond with an error message like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Incorrect string value: ‘\x77\xD0’ for column ‘demo_column’ at row 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this error message, MySQL is saying “well, I don’t recognize the characters that this smiley is made out of. Sorry, nothing I can do here.” At this point, you might be wondering what has been done to overcome this problem – is MySQL even aware of its existence? It would be a lie to say that MySQL is not aware of the issue – rather, the MySQL developers never got around to fixing utf8 itself. Instead, they released a workaround more than a decade ago with MySQL 5.5.3.&lt;/p&gt;

&lt;p&gt;That workaround is called “utf8mb4”. utf8mb4 is pretty much the same as its older counterpart, utf8 – the encoding simply uses one to four bytes per character, which means it is able to support a much wider variety of symbols and characters.&lt;/p&gt;
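&lt;p&gt;You can see the three-versus-four-byte problem with Node.js’s built-in Buffer (this illustrates UTF-8 itself, not MySQL):&lt;/p&gt;

```javascript
// UTF-8 byte lengths per character: ASCII takes 1 byte, many accented
// characters take 2, most CJK characters take 3, and emoji take 4 --
// the fourth byte is exactly what MySQL's 3-byte "utf8" cannot store.
console.log(Buffer.byteLength("a", "utf8"));  // 1
console.log(Buffer.byteLength("é", "utf8"));  // 2
console.log(Buffer.byteLength("中", "utf8")); // 3
console.log(Buffer.byteLength("😀", "utf8")); // 4
```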

&lt;p&gt;Use MySQL 8.0, work with data a little, and you will quickly notice that utf8mb4 is indeed the default character set in MySQL – moreover, it is expected that in the near future utf8 will simply become a reference to utf8mb4.&lt;/p&gt;

&lt;h2&gt;
  
  
  Flavors of utf8mb4
&lt;/h2&gt;

&lt;p&gt;As utf8 is outpaced by utf8mb4 on almost all fronts, it’s natural that a couple of collation variations exist. Essentially, these collations act as a “set” of sorting and comparison rules designed to better fit specific data sets. utf8mb4 has a couple as well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;utf8mb4_general_ci&lt;/code&gt; is geared towards more “general” use of MySQL. This collation is widely regarded to take “shortcuts” to improve speed, which may result in incorrect sorting in some cases.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;utf8mb4_unicode_ci&lt;/code&gt; is geared towards “advanced” users – it is a collation based on Unicode sorting rules, so we can rest assured that our data will be sorted and compared properly if this collation is in use.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this case, do note the “&lt;code&gt;_ci&lt;/code&gt;” suffix at the end of the collation name: it stands for “case insensitive,” which applies to sorting and comparison.&lt;/p&gt;

&lt;p&gt;These two “flavors” of utf8mb4 are used more and more – with newer versions of MySQL being released, &lt;code&gt;utf8mb4_unicode_ci&lt;/code&gt; has become the collation of choice for most people working with MySQL today. One thing is certain – not everyone using MySQL this way knows the functionality and upsides of utf8mb4 as opposed to its counterpart utf8, but they will certainly see a difference when they import data containing unusual characters! Did we convince you to head over to the realm of utf8mb4 yet?&lt;/p&gt;

&lt;h2&gt;
  
  
  Properly Working with utf8mb4-based Data
&lt;/h2&gt;

&lt;p&gt;Here’s how some developers go about creating databases and tables based on utf8mb4:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farctype.com%2Fblog%2Fcontent%2Fimages%2F2022%2F08%2Fimage-6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farctype.com%2Fblog%2Fcontent%2Fimages%2F2022%2F08%2Fimage-6.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Except that this query gives us an error (shown below the query), which is a frequent point of confusion for beginners and advanced developers alike – MySQL is essentially saying that when we use a collation based on utf8mb4, we should also use a compatible character set, and in this case latin1 isn’t valid. So whatever you do, keep these points in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;utf8 is not real UTF-8 in MySQL and its flavors (MariaDB and Percona Server): utf8 only supports 3 bytes per character, while utf8mb4 supports 4, which is what UTF-8 requires in the first place. If utf8 is in use, some characters may not be stored or displayed properly.&lt;/li&gt;
&lt;li&gt;When we elect to use utf8mb4 collations instead of utf8 in MySQL, we should also make sure that we use the matching character set (utf8mb4). Note the success message underneath the query:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farctype.com%2Fblog%2Fcontent%2Fimages%2F2022%2F08%2Fimage-7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farctype.com%2Fblog%2Fcontent%2Fimages%2F2022%2F08%2Fimage-7.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we are good to go – we can store all kinds of characters inside of our database and not have MySQL error out with an “Incorrect string value” error! Woohoo!&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;“UTF-8” in MySQL is broken – it is not able to support four bytes per character as UTF-8 is supposed to. “utf8mb4” solves this problem and is generally pretty easy to work with – simply choose a specific collation: the general collation if you’re using MySQL for a personal project or a small website, or the Unicode collation if you’re using it for something more technical or want to push MySQL to its limits.&lt;/p&gt;

&lt;p&gt;Before pushing MySQL to its limits, though, be aware that aside from &lt;a href="https://dev.mysql.com/doc/" rel="noopener noreferrer"&gt;the documentation&lt;/a&gt;, there are also a couple of blogs, like the one provided by &lt;a href="https://dev.to__GHOST_URL__/"&gt;Arctype&lt;/a&gt;, that provide information on how to work with the relational database management system and all of its flavors. We cover how to work with &lt;a href="https://dev.to__GHOST_URL__/mysql-storage-engine-big-data/"&gt;MySQL and big data&lt;/a&gt;, &lt;a href="https://dev.to__GHOST_URL__/mysql-schema/"&gt;how to optimize your database schemas&lt;/a&gt;, and so much more! If you are into databases, keep an eye on the &lt;a href="https://arctype.com/blog" rel="noopener noreferrer"&gt;Arctype blog&lt;/a&gt; and we will see you in the next one!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>tutorial</category>
      <category>productivity</category>
      <category>mysql</category>
    </item>
    <item>
      <title>Building REST Endpoints with Knex and PostgreSQL</title>
      <dc:creator>Arctype Team</dc:creator>
      <pubDate>Tue, 09 Aug 2022 21:32:00 +0000</pubDate>
      <link>https://dev.to/arctype/building-rest-endpoints-with-knex-and-postgresql-18h2</link>
      <guid>https://dev.to/arctype/building-rest-endpoints-with-knex-and-postgresql-18h2</guid>
      <description>&lt;h2&gt;
  
  
  What are Knex and PostgreSQL?
&lt;/h2&gt;

&lt;p&gt;Knex is a versatile, portable, and enjoyable SQL query builder for PostgreSQL, CockroachDB, MSSQL, MySQL, MariaDB, SQLite3, Better-SQLite3, Oracle, and Amazon Redshift. PostgreSQL, in turn, is an open-source object-relational database management system with a high degree of flexibility. It is capable of handling a wide range of use cases, from single machines to data warehouses and web services with many concurrent users. It uses and extends SQL (hence the name), and it is broadly extensible to a variety of use cases beyond transactional data.&lt;/p&gt;

&lt;p&gt;PostgreSQL stores information in tables (called relations) that contain tuples representing entities (like documents and people) and relationships (such as authorship). A relation includes fixed-type attributes that represent entity properties (such as a title), as well as a primary key. Attribute types can be atomic (for example, integer, floating point, or boolean) or structured (such as an array, nested JSON, or a procedure).&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Knex?
&lt;/h2&gt;

&lt;p&gt;Most developers use Knex as their query builder for the following reasons.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It allows them to write queries as if they were writing JavaScript code, while it handles the translation to SQL.&lt;/li&gt;
&lt;li&gt;It has support for database management systems like PostgreSQL, MySQL, SQLite, and Oracle.&lt;/li&gt;
&lt;li&gt;It supports both traditional node-style callbacks and a promise interface for cleaner async flow control, as well as a stream interface.&lt;/li&gt;
&lt;li&gt;It offers full-featured query and schema builders, transaction support (with savepoints), connection pooling, and standardized responses across query clients and dialects.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;As this is a hands-on demonstration tutorial, ensure your system meets the following requirements to get started:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have Node.js version 14 or later installed&lt;/li&gt;
&lt;li&gt;You have installed and set up &lt;a href="https://arctype.com/"&gt;Arctype&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You have set up a PostgreSQL database&lt;/li&gt;
&lt;li&gt;You have installed the Knex CLI (the command to do so is &lt;code&gt;npm i -g knex&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the above requirements met, let’s create a new database using Arctype. To get started launch &lt;a href="https://arctype.com"&gt;the Arctype client&lt;/a&gt;, then choose the database you would like to work with:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0s7CXmjc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2022/08/image-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0s7CXmjc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2022/08/image-3.png" alt="" width="880" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then provide the credentials of the database in question. It's all really simple to do, no hassle here!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TifxRl1I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2022/08/image-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TifxRl1I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2022/08/image-4.png" alt="" width="880" height="799"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you already have a database set up, you can always create a new one by adding a new data source:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zjsmyDVy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2022/08/image-5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zjsmyDVy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2022/08/image-5.png" alt="" width="512" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you're done, you should see tables underneath your database on the left hand side in Arctype.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the Node.js Server
&lt;/h2&gt;

&lt;p&gt;Now, create a new folder for your project and initialize a new project with the commands below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir knex-demo &amp;amp;&amp;amp; cd knex-demo
npm init -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then install the required packages by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install pg express knex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the above command, you have installed the PostgreSQL Node.js module, &lt;code&gt;express&lt;/code&gt;, and the &lt;code&gt;knex&lt;/code&gt; module.&lt;/p&gt;

&lt;p&gt;Now create the folder structure shown below in the knex-demo folder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lsVMu7qr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h9h8e4tac20ixa4k74xc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lsVMu7qr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h9h8e4tac20ixa4k74xc.png" alt="Image description" width="492" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then in the &lt;code&gt;app.js&lt;/code&gt; file, create a Node.js Express server with the code snippet below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require("express");   

const app = express();

app.use(express.json());

app.listen(3000, () =&amp;gt; {
  console.log("Server is running on port 3000");
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, modify the &lt;code&gt;package.json&lt;/code&gt; file to add the script command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; "scripts": {
    "start": "node src/app.js"
  },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setup Knex
&lt;/h2&gt;

&lt;p&gt;Once you’ve created the database, let’s set up Knex and connect to the database. To get started, run the command below on your terminal to initialize knex.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;knex init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command will create a &lt;code&gt;knexfile.js&lt;/code&gt; file in your project root directory with the code snippets to connect your database to different environments (development, staging, production). By default, it uses an SQLite database in the development environment; you’ll need to modify the code to use your Postgres database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Update with your config settings.

/**
 * @type { Object.&amp;lt;string, import("knex").Knex.Config&amp;gt; }
 */
module.exports = {
  development: {
    client: "postgresql",
    connection: {
      database: "blogs",
      user: "postgres",
      password: "1234",
    },
  },

  staging: {
    client: "postgresql",
    connection: {
      database: "&amp;lt;Your Staging DB&amp;gt;",
      user: "username",
      password: "password",
    },
    pool: {
      min: 2,
      max: 10,
    },
    migrations: {
      tableName: "knex_migrations",
    },
  },

  production: {
    client: "postgresql",
    connection: {
      database: "&amp;lt;Your Production DB&amp;gt;",
      user: "username",
      password: "password",
    },
    pool: {
      min: 2,
      max: 10,
    },
    migrations: {
      tableName: "knex_migrations",
    },
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your &lt;code&gt;knexfile.js&lt;/code&gt; file should look like the code snippet above. You can modify the code to meet any other project requirements you might have.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Migration File
&lt;/h2&gt;

&lt;p&gt;Now create your migration file for the users table, which defines what the table will look like, by running the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;knex migrate:make users
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command will create a &lt;code&gt;migrations/timestamp_users&lt;/code&gt; file in the root directory of your project. Now let’s define the schema in the &lt;code&gt;up&lt;/code&gt; and &lt;code&gt;down&lt;/code&gt; functions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/**
 * @param { import("knex").Knex } knex
 * @returns { Promise&amp;lt;void&amp;gt; }
 */
exports.up = function (knex) {
  return knex.schema.createTable("users", (table) =&amp;gt; {
    table.increments("id").primary();
    table.string("name").notNullable();
    table.string("email").notNullable();
    table.string("password").notNullable();
    table.string("avatar").defaultTo("https://i.imgur.com/Xq2bZCY.png");
    table.string("bio").defaultTo("I am a new user");
    table.timestamp("created_at").defaultTo(knex.fn.now());
    table.timestamp("updated_at").defaultTo(knex.fn.now());
  });
};

/**
 * @param { import("knex").Knex } knex
 * @returns { Promise&amp;lt;void&amp;gt; }
 */
exports.down = function (knex) {
  return knex.schema.dropTable("users");
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code in the &lt;code&gt;migrations/timestamp_users.js&lt;/code&gt; file should look like the above code snippet, in which we define the users schema. The first column is an auto-incrementing &lt;code&gt;id&lt;/code&gt; primary key; after that come the fields we need for the users table.&lt;/p&gt;

&lt;p&gt;Then, in the &lt;code&gt;down&lt;/code&gt; function, we drop the &lt;strong&gt;users&lt;/strong&gt; table – this is what runs when the migration is rolled back.&lt;/p&gt;

&lt;p&gt;To create this table in your database, you need to run the migration command again – this time with &lt;code&gt;latest&lt;/code&gt; so that only the new changes in the file are applied.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;knex migrate:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create the &lt;strong&gt;blogs&lt;/strong&gt; schema by running the migration command below on your terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;knex migrate:make blogs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then add the code below in the &lt;strong&gt;up&lt;/strong&gt; function to define the blog’s schema.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/**
 * @param { import("knex").Knex } knex
 * @returns { Promise&amp;lt;void&amp;gt; }
 */
exports.up = function (knex) {
  return knex.schema.createTable("blog", (table) =&amp;gt; {
    table.increments("id").primary();
    table.string("title").notNullable();
    table.string("content").notNullable();
    table.string("image").notNullable();
    table.timestamp("created_at").defaultTo(knex.fn.now());
    table.timestamp("updated_at").defaultTo(knex.fn.now());
  });
};

/**
 * @param { import("knex").Knex } knex
 * @returns { Promise&amp;lt;void&amp;gt; }
 */
exports.down = function (knex) {
    return knex.schema.dropTable("blog");
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code snippet, we created the blog schema and defined the fields we need in the &lt;strong&gt;blog&lt;/strong&gt; table. The &lt;code&gt;down&lt;/code&gt; function again drops the table when the migration is rolled back.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Table Relationships
&lt;/h2&gt;

&lt;p&gt;Now let’s create a relationship between the users schema and the blog schema. This way, we can associate blogs with the users that create them. To do that, we need to update the code in the &lt;code&gt;timestamp_blogs.js&lt;/code&gt; file with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/**
 * @param { import("knex").Knex } knex
 * @returns { Promise&amp;lt;void&amp;gt; }
 */
exports.up = function (knex) {
  return knex.schema.createTable("blog", (table) =&amp;gt; {
    table.increments("id").primary();
    table.string("title").notNullable();
    table.string("content").notNullable();
    table.string("image").notNullable();
    table
      .integer("author")
      .unsigned()
      .references("id")
      .inTable("users")
      .onDelete("CASCADE");
    table.timestamp("created_at").defaultTo(knex.fn.now());
    table.timestamp("updated_at").defaultTo(knex.fn.now());
  });
};

/**
 * @param { import("knex").Knex } knex
 * @returns { Promise&amp;lt;void&amp;gt; }
 */
exports.down = function (knex) {
    return knex.schema.dropTable("blog");
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code snippet, we modified the &lt;code&gt;author&lt;/code&gt; field to reference the id of each user in the users schema. The &lt;code&gt;onDelete("CASCADE")&lt;/code&gt; ensures that a user’s blogs are deleted once the user’s account is deleted.&lt;/p&gt;

&lt;p&gt;Next, create a &lt;code&gt;db.js&lt;/code&gt; file in the config folder and add the code snippet below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const knex = require("knex");
const config = require("../../knexfile");
module.exports = knex(config.development);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code snippet, we import the &lt;code&gt;config&lt;/code&gt; from &lt;code&gt;knexfile&lt;/code&gt; and initialize the knex object. Since we are running the application in the development environment, we pass in the &lt;code&gt;development&lt;/code&gt; config object.&lt;/p&gt;
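&lt;p&gt;A hypothetical variation on &lt;code&gt;config/db.js&lt;/code&gt; could pick the environment at runtime from &lt;code&gt;NODE_ENV&lt;/code&gt; instead of hard-coding it; the inlined config object below stands in for the real knexfile purely for illustration:&lt;/p&gt;

```javascript
// Hypothetical sketch: choose the knexfile section from NODE_ENV,
// falling back to "development" when the variable is unset.
// The config object is inlined here; in the project it would come
// from require("../../knexfile").
const config = {
  development: { client: "postgresql", connection: { database: "blogs" } },
  production: { client: "postgresql", connection: { database: "blogs_prod" } },
};

function pickConfig(env) {
  return config[env] || config.development;
}

console.log(pickConfig("production").connection.database); // "blogs_prod"
console.log(pickConfig(undefined).connection.database);    // "blogs"
```

&lt;p&gt;The selected object would then be passed to &lt;code&gt;knex(...)&lt;/code&gt; exactly as in the snippet above.&lt;/p&gt;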

&lt;h2&gt;
  
  
  Creating the Service
&lt;/h2&gt;

&lt;p&gt;With our database tables created, let’s create a service that performs CRUD operations in the database tables. Create a &lt;code&gt;userService.js&lt;/code&gt; file in the &lt;code&gt;service&lt;/code&gt; folder and add the code snippet specified below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const db = require('../config/db');

module.exports = {
  getAll: async () =&amp;gt; {
    const users = await db("users");
    return users;
  },
  getById: async (id) =&amp;gt; {
    const user = await db("users").where("id", id);
    return user;
  },
  create: async (user) =&amp;gt; {
    const users = await db("users").insert(user);
    return users;
  },
  update: async (id, user) =&amp;gt; {
    const users = await db("users").where("id", id).update({
      name: user.name,
      email: user.email,
      password: user.password,
      avatar: user.avatar,
      bio: user.bio,
    });
    return users;
  },
  delete: async (id) =&amp;gt; {
    const users = await db("users").where("id", id).del();
    return users;
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code snippet, we imported the knex config. Then we created the &lt;code&gt;userService&lt;/code&gt; object with methods for each of the CRUD operations.&lt;/p&gt;

&lt;p&gt;Next, create a &lt;code&gt;blogService.js&lt;/code&gt; file in the &lt;strong&gt;service&lt;/strong&gt; folder and add the code snippet below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const db = require("../config/db");

module.exports = blogService = {
  getAll: async () =&amp;gt; {
    const blogs = await db("blog")
      .join("users", "users.id", "blog.author")
      .select(
        "blog.*",
        "users.name",
        "users.avatar",
        "users.bio",
        "users.email"
      );
    return blogs;
  },
  getById: async (id) =&amp;gt; {
    console.log(id);
    const blog = await db("blog").where({ id });
    return blog;
  },
  create: async (blog) =&amp;gt; {
    const blogs = await db("blog").insert(blog);
    return blogs;
  },
  update: async (id, blog) =&amp;gt; {
    const blogs = await db("blog").where("id", id).update({
      title: blog.title,
      content: blog.content,
      image: blog.image,
    });
    return blogs;
  },
  delete: async (id) =&amp;gt; {
    const blogs = await db("blog").where("id", id).del();
    return blogs;
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code snippet, we created the CRUD operations for &lt;code&gt;blogService&lt;/code&gt;. In the &lt;code&gt;getAll&lt;/code&gt; method we join the &lt;code&gt;users&lt;/code&gt; table with the &lt;code&gt;blog&lt;/code&gt; table and use the &lt;code&gt;select&lt;/code&gt; method to pick the fields we want to show to users. If we now call the service, we get each blog along with the user that posted it.&lt;/p&gt;
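&lt;p&gt;To make the join concrete, here is a plain-JavaScript sketch of the row shape &lt;code&gt;getAll&lt;/code&gt; returns: each blog row merged with its author’s selected user fields (the sample data below is made up for demonstration):&lt;/p&gt;

```javascript
// Plain-JavaScript illustration of what getAll()'s join produces.
// The sample data is made up for demonstration purposes.
const users = [
  { id: 1, name: "Jane", avatar: "a.png", bio: "dev", email: "jane@example.com" },
];
const blogs = [
  { id: 1, title: "My First Blog", content: "Blah", image: "url", author: 1 },
];

// Merge each blog with the selected fields of its author.
const joined = blogs.map((b) => {
  const author = users.find((u) => u.id === b.author);
  return {
    ...b,
    name: author.name,
    avatar: author.avatar,
    bio: author.bio,
    email: author.email,
  };
});

console.log(joined[0].title, "-", joined[0].name); // My First Blog - Jane
```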

&lt;h2&gt;
  
  
  Creating the Controller
&lt;/h2&gt;

&lt;p&gt;Now let’s create the controllers to consume the services we just created, starting with the user controller. Create a &lt;code&gt;userController.js&lt;/code&gt; file in the &lt;strong&gt;controller&lt;/strong&gt; folder and add the code snippet below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const userService = require("../service/userService");
module.exports = userController = {
  getAll: async (req, res, next) =&amp;gt; {
    try {
      const users = await userService.getAll();
      res.json(users);
    } catch (error) {
      next(error);
    }
  },
  getById: async (req, res, next) =&amp;gt; {
    try {
      const user = await userService.getById(req.params.id);
      res.json(user);
    } catch (error) {
      next(error);
    }
  },
  create: async (req, res, next) =&amp;gt; {
    try {
      const user = await userService.create(req.body);
      res.json(user);
    } catch (error) {
      next(error);
    }
  },
  update: async (req, res, next) =&amp;gt; {
    try {
      const user = await userService.update(req.params.id, req.body);
      res.json(user);
    } catch (error) {
      next(error);
    }
  },
  delete: async (req, res, next) =&amp;gt; {
    try {
      const user = await userService.delete(req.params.id);
      res.json(user);
    } catch (error) {
      next(error);
    }
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now create a &lt;code&gt;blogController.js&lt;/code&gt; file in the &lt;strong&gt;controller&lt;/strong&gt; folder to consume the &lt;code&gt;blogService&lt;/code&gt; with the code snippet below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const userService = require("../service/userService");
module.exports = userController = {
  getAll: async (req, res, next) =&amp;gt; {
    try {
      const users = await userService.getAll();
      res.json(users);
    } catch (error) {
      next(error);
    }
  },
  getById: async (req, res, next) =&amp;gt; {
    try {
      const user = await userService.getById(req.params.id);
      res.json(user);
    } catch (error) {
      next(error);
    }
  },
  create: async (req, res, next) =&amp;gt; {
    try {
      const user = await userService.create(req.body);
      res.json(user);
    } catch (error) {
      next(error);
    }
  },
  update: async (req, res, next) =&amp;gt; {
    try {
      const user = await userService.update(req.params.id, req.body);
      res.json(user);
    } catch (error) {
      next(error);
    }
  },
  delete: async (req, res, next) =&amp;gt; {
    try {
      const user = await userService.delete(req.params.id);
      res.json(user);
    } catch (error) {
      next(error);
    }
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating the API Routes
&lt;/h2&gt;

&lt;p&gt;Next, let’s create the API routes for the &lt;strong&gt;controllers&lt;/strong&gt;. To get started, create a &lt;code&gt;user.js&lt;/code&gt; file in the &lt;code&gt;routes&lt;/code&gt; folder and add the code snippet below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require("express");
const router = express.Router();
const userController = require("../controller/userController");

/* GET users listing. */
router.route("/").get(userController.getAll).post(userController.create);
router
  .route("/:id")
  .get(userController.getById)
  .put(userController.update)
  .delete(userController.delete);

module.exports = router;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code snippet, we have imported the &lt;code&gt;userController&lt;/code&gt; and created an express router. Using the express router we define the route handlers for the controllers.&lt;/p&gt;
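&lt;p&gt;The chaining works because each HTTP-verb method on the route object returns that same object. The following stripped-down mock (illustrative only, not Express’s actual implementation) shows the pattern:&lt;/p&gt;

```javascript
// Minimal mock of the chaining pattern behind router.route().
// Illustrative only: this is NOT Express's real implementation.
function route(path) {
  const handlers = {};
  const chain = {
    path,
    handlers,
    // Each verb method registers a handler, then returns the chain
    // itself, which is what makes .get(...).post(...) possible.
    get(fn) { handlers.GET = fn; return chain; },
    post(fn) { handlers.POST = fn; return chain; },
    put(fn) { handlers.PUT = fn; return chain; },
    delete(fn) { handlers.DELETE = fn; return chain; },
  };
  return chain;
}

// Registers two handlers on one path, like the users route above.
const r = route("/").get(() => "all users").post(() => "created");
```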

&lt;p&gt;Now create another file called &lt;code&gt;blog.js&lt;/code&gt; in the routes folder to define the route handlers for the blog controller with the code snippet below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require("express");
const router = express.Router();
const blogController = require("../controller/blogController");

/* GET home page. */
router.route("/").get(blogController.getAll).post(blogController.create);
router
  .route("/:id")
  .get(blogController.getById)
  .put(blogController.update)
  .delete(blogController.delete);

module.exports = router;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, import the routes in the app.js file and create a middleware for both routes with the code snippet below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
const userRouter = require("./routes/users");
const blogRouter = require("./routes/blog"); 

...
app.use('/users', userRouter);
app.use('/blog', blogRouter);

...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing the API
&lt;/h2&gt;

&lt;p&gt;Now let’s test the project to ensure everything works as expected. First, start your server with the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then launch Postman or any API testing tool of your choosing.&lt;/p&gt;

&lt;h3&gt;
  
  
  The User Endpoint
&lt;/h3&gt;

&lt;p&gt;Send a &lt;strong&gt;POST&lt;/strong&gt; request to the endpoint &lt;code&gt;localhost:3000/users&lt;/code&gt; with the payload below to create a user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name":"name",
  "email":"name@gmail.com",
  "password":"1234",
  "bio":"I am a software dev."
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1gGcrVnX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/HENQX83.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1gGcrVnX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/HENQX83.png" alt="" width="880" height="615"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, send a &lt;strong&gt;GET&lt;/strong&gt; request to the same endpoint to get all the registered users. Go ahead and test out the other user endpoints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QYGHAMCm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/UDgOx40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QYGHAMCm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/UDgOx40.png" alt="" width="880" height="585"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Blog Endpoint
&lt;/h3&gt;

&lt;p&gt;Now send a &lt;strong&gt;POST&lt;/strong&gt; request to the endpoint &lt;code&gt;localhost:3000/blog&lt;/code&gt; with the payload below to create a blog for the user with &lt;strong&gt;id&lt;/strong&gt; of 1 denoted by the &lt;strong&gt;author&lt;/strong&gt; field.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "title":"My First Blog",
    "content":"Blah Blah Blah",
    "image":"Image URL",
    "author":"1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LNhOZTOX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/VtV93ew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LNhOZTOX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/VtV93ew.png" alt="" width="880" height="613"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then send a &lt;strong&gt;GET&lt;/strong&gt; request to the same endpoint to get all the blogs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1Sm8mgKX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/ILjFLZ4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1Sm8mgKX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/ILjFLZ4.png" alt="" width="880" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Viewing User Data with Arctype
&lt;/h2&gt;

&lt;p&gt;We have now successfully created our blog application. Now, let’s look at the users’ data with Arctype. To begin, launch Arctype, click the &lt;code&gt;Postgres&lt;/code&gt; tab, and enter your &lt;code&gt;Postgres&lt;/code&gt; credentials, as shown in the screenshot below (the steps are the same as for the MySQL connection shown earlier):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vTxAgeBV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/VBgbdw0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vTxAgeBV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/VBgbdw0.png" alt="" width="880" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should see the &lt;strong&gt;users&lt;/strong&gt; and &lt;strong&gt;blog&lt;/strong&gt; tables, along with the knex migration tables that keep a record of the migrations run in the application. Now click on the &lt;strong&gt;blog&lt;/strong&gt; table to show the users’ blogs, as shown in the screenshot below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OSpsWl4s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/gW8awfm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OSpsWl4s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/gW8awfm.png" alt="" width="880" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By building a demo project, we’ve learned how to build REST endpoints with Knex and PostgreSQL. We started by introducing PostgreSQL and Knex and why you should use them, then we created a blog project for the demonstration. Now that you’ve gotten the knowledge you seek, how would you use a query builder in your next project? Consider learning more about &lt;a href="http://knexjs.org/"&gt;Knex&lt;/a&gt; from their official website and take things even further!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>tutorial</category>
      <category>postgres</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Top 5 Google Cloud Database Services - Part 1 (SQL)</title>
      <dc:creator>Arctype Team</dc:creator>
      <pubDate>Thu, 04 Aug 2022 22:17:55 +0000</pubDate>
      <link>https://dev.to/arctype/top-5-google-cloud-database-services-part-1-sql-1568</link>
      <guid>https://dev.to/arctype/top-5-google-cloud-database-services-part-1-sql-1568</guid>
      <description>&lt;p&gt;Google Cloud Platform offers various cloud database tools and services based on what a company needs to accomplish with them. Thus, the different tools have different purposes. &lt;/p&gt;

&lt;p&gt;In this article, we’ll be looking at the top 5 Google Cloud database services/tools that support SQL in one form or another. I’ll take you through each tool’s features and benefits, provide some code snippets, and, above all, briefly explain when you can use it. Here is what we will cover:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Database Tool&lt;/th&gt;
&lt;th&gt;Support&lt;/th&gt;
&lt;th&gt;Use Cases&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cloud SQL&lt;/td&gt;
&lt;td&gt;MySQL, PostgreSQL, SQL Server&lt;/td&gt;
&lt;td&gt;CRM ERP e-Commerce SaaS Apps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud Spanner&lt;/td&gt;
&lt;td&gt;Supports two SQL dialects: Google Standard SQL and PostgreSQL.&lt;/td&gt;
&lt;td&gt;Used for critical high-load transactions, for SQL and DML support, also used together with JDBC.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AlloyDB&lt;/td&gt;
&lt;td&gt;PostgreSQL&lt;/td&gt;
&lt;td&gt;Enterprise workloads that require high transaction throughput, large data sizes, or multiple read replicas.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bare Metal Solution for Oracle&lt;/td&gt;
&lt;td&gt;Shift Workloads of Oracle to Google Cloud&lt;/td&gt;
&lt;td&gt;Creating new applications and microservices using Google services that connect to an Oracle database. More use cases can be found &lt;a href="https://cloud.google.com/bare-metal"&gt;here.&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;BigQuery&lt;/td&gt;
&lt;td&gt;Serverless, Multi-Cloud Data Warehouse&lt;/td&gt;
&lt;td&gt;Multi-Cloud Analytics Real-time Processing Built-in ML&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Cloud SQL
&lt;/h2&gt;

&lt;p&gt;Cloud SQL is a tool that enables you to manage all your relational databases: it supports MySQL and PostgreSQL, and also offers support for SQL Server. The tool eliminates the need for manual management and has robust features that make your process hassle-free. &lt;/p&gt;

&lt;h3&gt;
  
  
  Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Cloud SQL automates all major DBMS-oriented processes such as cloud backups, replication, and encryption patches. It also ensures that your database scales automatically so there’s no downtime and you can perform all your tasks without any form of interruption.&lt;/li&gt;
&lt;li&gt;You can easily connect your database systems with App Engine, Google Kubernetes Engine, and the Compute Engine based on your requirements and the platform also allows you to work with on-premise systems and data.&lt;/li&gt;
&lt;li&gt;The platform is compliant with major standards such as ISO 27001, SSAE 16, and PCI DSS. This way it will always be in sync with the industry. &lt;/li&gt;
&lt;li&gt;Besides cloud-backups, one can also automate the failover process and perform cloud migration with minimal downtime. Since it is serverless, various manual operations such as managing, monitoring, and provisioning aren’t required either. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Benefits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Highly secure.&lt;/li&gt;
&lt;li&gt;Easily scalable.&lt;/li&gt;
&lt;li&gt;Hassle-free setup. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When to Use
&lt;/h3&gt;

&lt;p&gt;Cloud SQL is used when the storage requirement is less than 10TB. It performs end-to-end relational database management for all your systems as long as it is within this limit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sample Code
&lt;/h3&gt;

&lt;p&gt;A nested query in Cloud SQL will look something like the code given below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT firstname, lastname, total_quantity
    FROM (SELECT buyerid, sum(qtysold) total_quantity
            FROM sales
            GROUP BY buyerid
            ORDER BY total_quantity desc limit 10) Q, users
    WHERE Q.buyerid = userid
    ORDER BY Q.total_quantity desc;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Cloud Spanner
&lt;/h2&gt;

&lt;p&gt;Cloud Spanner provides all the functions of Cloud SQL and goes even further by offering 99.999% availability. Furthermore, it provides stronger consistency and noticeably better performance. &lt;/p&gt;
&lt;h3&gt;
  
  
  Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Spanner delivers the availability mentioned above across multiple regions, with far less downtime than a four-nines (99.99%) SLA allows.&lt;/li&gt;
&lt;li&gt;Performance is elevated because Spanner shards the data based on load and size. This makes scaling more efficient and requires less human intervention.&lt;/li&gt;
&lt;li&gt;Data is consistent across multiple regions. This means no matter where the users are from and how many users are simultaneously working on the database, the changes are up-to-date and consistent.&lt;/li&gt;
&lt;li&gt;Once you choose a granular instance, you don’t need to re-architect it or worry about scaling. &lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Benefits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Automated scaling without any limits.&lt;/li&gt;
&lt;li&gt;Available across the world, at any time.&lt;/li&gt;
&lt;li&gt;Simple experience and better performance.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  When to Use
&lt;/h3&gt;

&lt;p&gt;As mentioned earlier, Cloud Spanner provides better performance and higher availability than Cloud SQL. Furthermore, it has no storage limit like Cloud SQL. So if your infrastructure requires all this and more, you should use Cloud Spanner. &lt;/p&gt;
&lt;h3&gt;
  
  
  Sample Code
&lt;/h3&gt;

&lt;p&gt;Here, instead of retrieving data from an existing table, you can emulate a temporary table using the &lt;code&gt;WITH&lt;/code&gt; clause. &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WITH Roster AS
 (SELECT 'Adams' as LastName, 50 as SchoolID UNION ALL
  SELECT 'Buchanan', 52 UNION ALL
  SELECT 'Coolidge', 52 UNION ALL
  SELECT 'Davis', 51 UNION ALL
  SELECT 'Eisenhower', 77)
SELECT * FROM Roster;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above query would produce this result set:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Last Name&lt;/th&gt;
&lt;th&gt;SchoolID&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Adams&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Buchanan&lt;/td&gt;
&lt;td&gt;52&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Coolidge&lt;/td&gt;
&lt;td&gt;52&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Davis&lt;/td&gt;
&lt;td&gt;51&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Eisenhower&lt;/td&gt;
&lt;td&gt;77&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  AlloyDB for PostgreSQL
&lt;/h2&gt;

&lt;p&gt;AlloyDB is one of the latest tools launched by Google, and it sets itself apart from Cloud SQL and Cloud Spanner: while those two offer PostgreSQL compatibility, AlloyDB is a standard PostgreSQL database at its core. &lt;/p&gt;

&lt;h3&gt;
  
  
  Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AlloyDB is 4x faster for transactional workloads and 10x faster for queries compared to the standard PostgreSQL. In fact, it is twice as fast as Amazon’s Aurora PostgreSQL for transactional workloads. &lt;/li&gt;
&lt;li&gt;Like Cloud Spanner, it has 99.999% availability across all regions.&lt;/li&gt;
&lt;li&gt;The tool also allows moving legacy databases to their cloud with ease. This way it tries to make its infrastructure more open-source and eliminates issues such as licensing and other boundaries. &lt;/li&gt;
&lt;li&gt;Furthermore, with Cloud SQL and Cloud Spanner, one has to integrate BigQuery for better analytics. With AlloyDB users won’t have to do the same as the functionalities are in-built and readily available. &lt;/li&gt;
&lt;li&gt;It also has built-in Vertex AI, which enables users to leverage AI and machine learning models for analytics. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Benefits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The tool showcases elevated performance and efficiency, which is better than Cloud Spanner, Cloud SQL, and Amazon’s Aurora PostgreSQL.&lt;/li&gt;
&lt;li&gt;A fit for AI and ML-powered systems.&lt;/li&gt;
&lt;li&gt;Pricing is transparent and predictable without charges for licensing or opaque I/O. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When to Use
&lt;/h3&gt;

&lt;p&gt;If your organization and infrastructure needs a more open-sourced way of functioning with maximum performance, efficiency, and features, then AlloyDB is your go-to Google tool. &lt;/p&gt;

&lt;h3&gt;
  
  
  Sample Code
&lt;/h3&gt;

&lt;p&gt;Since AlloyDB is a PostgreSQL-compatible tool, the sample code is of the same query language. The query below will help you to insert values into a table. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;postgres=# insert into dummy_table values('XYZ','location-A',25);
        INSERT 0 1
        postgres=# insert into dummy_table values('ABC','location-B',35);
        INSERT 0 1
        postgres=# insert into dummy_table values('DEF','location-C',40);
        INSERT 0 1
        postgres=# insert into dummy_table values('PQR','location-D',54);
        INSERT 0 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Bare Metal Solution for Oracle
&lt;/h2&gt;

&lt;p&gt;A Bare Metal Solution is essential for organizations that run specialized workloads and highly sophisticated services but find it difficult to move them to the usual cloud. Google provides certified equipment for these workloads and places it in the data centers that run its cloud services. &lt;/p&gt;

&lt;p&gt;This way, organizations can move to the cloud and use these high-intensity services alongside the usual cloud services. &lt;/p&gt;
&lt;h3&gt;
  
  
  Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Organizations can reach cloud services with latency as low as 2ms.&lt;/li&gt;
&lt;li&gt;Supports all major Oracle capabilities, such as database clustering and replication. &lt;/li&gt;
&lt;li&gt;The tool acts as a bridge between on-premises legacy systems and the cloud.&lt;/li&gt;
&lt;li&gt;It also enables integration with services such as Ansible Based Toolkit and Kubernetes Operator for Oracle. &lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Benefits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Organizations can move their legacy-based infrastructure to the cloud without much hassle.&lt;/li&gt;
&lt;li&gt;There’s minimal latency while using various services.&lt;/li&gt;
&lt;li&gt;Allows access to all Oracle capabilities such as RAC and RMAN. &lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  When to Use
&lt;/h3&gt;

&lt;p&gt;This solution is mainly applicable to organizations with infrastructures that are rooted in Oracle. It enables them to modernize their whole setup by moving it to the cloud. It would also help companies to avoid being locked in and enable them to use functionalities from different vendors. &lt;/p&gt;
&lt;h3&gt;
  
  
  Sample Code
&lt;/h3&gt;

&lt;p&gt;Since the databases are connected through GCP database tools, you can use whichever query language your database tool supports. Here I’ll share sample code for those running MySQL. &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE shop (
    article INT UNSIGNED  DEFAULT '0000' NOT NULL,
    dealer  CHAR(20)      DEFAULT ''     NOT NULL,
    price   DECIMAL(16,2) DEFAULT '0.00' NOT NULL,
    PRIMARY KEY(article, dealer));
INSERT INTO shop VALUES
    (1,'A',3.45),(1,'B',3.99),(2,'A',10.99),(3,'B',1.45),
    (3,'C',1.69),(3,'D',1.25),(4,'D',19.95);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  BigQuery
&lt;/h2&gt;

&lt;p&gt;BigQuery was launched to handle analytics dealing with millions of rows. As mentioned in the previous sections, it is mainly used with Cloud SQL and Cloud Spanner for that purpose, since they have little-to-no analytical functionality at such a scale. &lt;/p&gt;
&lt;h3&gt;
  
  
  Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The tool enables natural language processing with the help of integrations such as Data QnA and Analyza. This allows people to work with data from chatbots, spreadsheets, and other custom-built UIs.&lt;/li&gt;
&lt;li&gt;Like AlloyDB, even BigQuery integrates Vertex AI. It also has TensorFlow integrated and with these two, organizations can create custom AI/ML models of extensive complexity with just SQL.&lt;/li&gt;
&lt;li&gt;BigQuery is one of the fundamental elements of Business Intelligence solutions, that provides operations such as transformation, analysis, visualization, and reporting.&lt;/li&gt;
&lt;li&gt;The tool provides real-time data capturing, analytics, and replication, which helps organizations make quicker decisions, thus increasing their performance and efficiency. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Besides all these, there are other robust features that help companies work with petabytes of structured and unstructured data, allowing geospatial analytics, fine-grained data governance, and security on a row-based and column-based level. &lt;/p&gt;
&lt;h3&gt;
  
  
  Benefits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;All data analytics operations reside in one place.&lt;/li&gt;
&lt;li&gt;Ability to handle intensive data capturing and analysis for large-scale databases.&lt;/li&gt;
&lt;li&gt;Real-time operations across different verticals.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  When to Use
&lt;/h3&gt;

&lt;p&gt;The tool can be used by large organizations that require real-time operations on petabytes of data. Its connectivity with other Google Cloud database solutions such as Cloud SQL and Cloud Spanner enables all kinds of operations, from the most basic to the most sophisticated. &lt;/p&gt;

&lt;p&gt;Sample code can be found below:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WITH locations AS
  (SELECT STRUCT("Seattle" AS city, "Washington" AS state) AS location
  UNION ALL
  SELECT STRUCT("Phoenix" AS city, "Arizona" AS state) AS location)
SELECT l.location.*
FROM locations l;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this case, we’ve used array structures in a nested query. While this is merely a sample, with BigQuery you can perform much more complex operations, and the queries grow in complexity accordingly. &lt;/p&gt;
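&lt;p&gt;To see what &lt;code&gt;SELECT l.location.*&lt;/code&gt; does, here is a plain-JavaScript analogy: each row’s &lt;code&gt;location&lt;/code&gt; struct is expanded into top-level columns:&lt;/p&gt;

```javascript
// Plain-JavaScript sketch of the struct expansion SELECT l.location.*
// performs in the BigQuery sample above: each row's nested struct
// becomes a set of top-level columns.
const locations = [
  { location: { city: "Seattle", state: "Washington" } },
  { location: { city: "Phoenix", state: "Arizona" } },
];

const rows = locations.map((l) => ({ ...l.location }));

console.log(rows); // [{ city: "Seattle", ... }, { city: "Phoenix", ... }]
```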

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;These are some of the most popular Google Cloud database tools that organizations use for their infrastructure. Each code sample demonstrates one simple task; you can explore further by looking at the official Google documentation and GitHub. There are more SQL tools offered by Google, which we’ll be covering in the second part of this article.  &lt;/p&gt;

</description>
      <category>programming</category>
      <category>sql</category>
      <category>database</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Handling File Uploads with NestJS and MySQL</title>
      <dc:creator>Arctype Team</dc:creator>
      <pubDate>Tue, 02 Aug 2022 22:17:32 +0000</pubDate>
      <link>https://dev.to/arctype/handling-file-uploads-with-nestjs-and-mysql-456g</link>
      <guid>https://dev.to/arctype/handling-file-uploads-with-nestjs-and-mysql-456g</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Many developers despise dealing with file uploads. This can be attributed to a lack of knowledge about the best approach to take, or to difficulties configuring a Nest.js application to handle them. Some developers prefer to save files directly to a MySQL database, while others save only the file name and keep the image on disk storage: it all depends on their preferences and goals. This tutorial will teach you how to build file upload functionality using Nest.js and MySQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin following this tutorial, ensure your system meets the following requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your system is running &lt;a href="https://nodejs.org/en/download/" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt; with version 14 or later.&lt;/li&gt;
&lt;li&gt;Your system has a &lt;a href="https://www.mysql.com/downloads/" rel="noopener noreferrer"&gt;MySQL database&lt;/a&gt; installed.&lt;/li&gt;
&lt;li&gt;You have installed &lt;a href="https://www.postman.com/" rel="noopener noreferrer"&gt;Postman&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting Up NestJS
&lt;/h2&gt;

&lt;p&gt;Once the above-mentioned requirements are met, proceed to install the Nestjs CLI and create a new project by running the following commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm i -g @nestjs/cli
$ nest new file-upload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These commands will install the Nestjs CLI and create a new Nestjs project with the folder structure below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1659478229482%2F2JLaklpjN.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1659478229482%2F2JLaklpjN.png%2520align%3D"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the Nestjs project has been created, move on to the next step - install the required dependencies for your application by running the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install --save @nestjs/typeorm typeorm mysql2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above command, you’ve installed the &lt;a href="https://typeorm.io/" rel="noopener noreferrer"&gt;TypeORM&lt;/a&gt; and &lt;a href="https://github.com/brianmario/mysql2" rel="noopener noreferrer"&gt;mysql2&lt;/a&gt; modules: they will enable you to connect your application to a MySQL database and perform operations on it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup the MySQL Database
&lt;/h2&gt;

&lt;p&gt;With the above dependencies installed, proceed to set up and connect to your MySQL database. To get started, update the &lt;code&gt;app.module.ts&lt;/code&gt; file with the code snippet below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
import { TypeOrmModule } from '@nestjs/typeorm';
import { Image } from './image.entity';

@Module({
  imports: [TypeOrmModule.forRoot({
    type: 'mysql',
    host: 'localhost',
    port: 3306,
    username: 'root',
    password: '1234',
    database: 'blog',
    entities: [Image],
    synchronize: true,
  }),
  TypeOrmModule.forFeature([Image])
  ],
  ...
})
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above code snippet, we imported &lt;code&gt;TypeOrmModule&lt;/code&gt; from the &lt;code&gt;@nestjs/typeorm&lt;/code&gt; package we installed earlier and used its &lt;code&gt;forRoot&lt;/code&gt; method to connect the application to a MySQL database, passing in the database credentials. Note the &lt;code&gt;entities&lt;/code&gt; property, which specifies the entities in our module and gives us access to the &lt;code&gt;Image&lt;/code&gt; entity you’ll be creating shortly. We also set the &lt;code&gt;synchronize&lt;/code&gt; property to &lt;code&gt;true&lt;/code&gt; so that TypeORM automatically keeps the database schema in sync with the entities (recommended for development only).&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Image Entity
&lt;/h2&gt;

&lt;p&gt;Next, let’s create the Image entity we mentioned earlier. To get started, create an &lt;code&gt;image.entity.ts&lt;/code&gt; file in the &lt;code&gt;src&lt;/code&gt; directory and add the code snippet below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Entity, Column, PrimaryGeneratedColumn, CreateDateColumn, UpdateDateColumn } from 'typeorm';

@Entity()
export class Image {
    @PrimaryGeneratedColumn()
    id: number;

    @Column()
    name: string;

    @CreateDateColumn()
    dateCreated: Date;

    @UpdateDateColumn()
    dateUpdated: Date;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above code snippet, we imported the decorators we need to create an entity and used them to define its properties: the &lt;code&gt;id&lt;/code&gt; field uses the &lt;code&gt;@PrimaryGeneratedColumn()&lt;/code&gt; decorator to generate an auto-incremented id for each record in the database, the &lt;code&gt;name&lt;/code&gt; field uses the &lt;code&gt;@Column()&lt;/code&gt; decorator to store the names of the images that will be uploaded, and the &lt;code&gt;dateCreated&lt;/code&gt; and &lt;code&gt;dateUpdated&lt;/code&gt; fields use the &lt;code&gt;@CreateDateColumn()&lt;/code&gt; and &lt;code&gt;@UpdateDateColumn()&lt;/code&gt; decorators to record when a record was created and last updated.&lt;/p&gt;
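&lt;p&gt;Because &lt;code&gt;synchronize&lt;/code&gt; is enabled, TypeORM will create a matching table for this entity on startup. As a rough sketch, the generated MySQL table should look something like this (the exact column types may differ between TypeORM and MySQL versions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Approximate table generated from the Image entity
CREATE TABLE image (
  id INT NOT NULL AUTO_INCREMENT,
  name VARCHAR(255) NOT NULL,
  dateCreated DATETIME(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
  dateUpdated DATETIME(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6),
  PRIMARY KEY (id)
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;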

&lt;h2&gt;
  
  
  Creating the Upload Service
&lt;/h2&gt;

&lt;p&gt;With the Image entity created, let’s create a service to perform the CRUD operations to handle the file uploads. In the &lt;code&gt;app.service.ts&lt;/code&gt; file, add the code snippet below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import { Image } from './image.entity';

@Injectable()
export class AppService {
  constructor(
    @InjectRepository(Image)
    private readonly imageRepository: Repository&amp;lt;Image&amp;gt;,
  ) {}

  async getImages(): Promise&amp;lt;Image[]&amp;gt; {
    return this.imageRepository.find();
  }

  async createImage(image: Image): Promise&amp;lt;Image&amp;gt; {
    return this.imageRepository.save(image);
  }

  async getImage(id: number): Promise&amp;lt;Image&amp;gt; {
    return this.imageRepository.findOneBy({ id });
  }

  async deleteImage(id: number): Promise&amp;lt;void&amp;gt; {
    await this.imageRepository.delete(id);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above code snippet, we imported the &lt;code&gt;InjectRepository&lt;/code&gt; decorator to inject the &lt;code&gt;imageRepository&lt;/code&gt; into &lt;code&gt;AppService&lt;/code&gt;, as well as the &lt;code&gt;Repository&lt;/code&gt; class, which provides the methods required to perform operations on your database. In the &lt;code&gt;createImage&lt;/code&gt; service method, we save the name of the image being uploaded, which will be passed in through the controller.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Upload Controller
&lt;/h2&gt;

&lt;p&gt;Now let’s create the controller that uses these services. Open the &lt;code&gt;app.controller.ts&lt;/code&gt; file and add the code snippet below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Controller, Get, Post, Delete, Param, UploadedFile, UseInterceptors } from '@nestjs/common';
import { FileInterceptor } from '@nestjs/platform-express';
import { diskStorage } from 'multer';
import { extname } from 'path';
import { AppService } from './app.service';
import { Image } from './image.entity';

@Controller('images')
export class AppController {
  constructor(private readonly appService: AppService) {}

  @Get()
  getImages(): Promise&amp;lt;Image[]&amp;gt; {
    return this.appService.getImages();
  }

  @Post()
  @UseInterceptors(
    FileInterceptor('file', {
      storage: diskStorage({
        destination: './files',
        filename: (req, file, callback) =&amp;gt; {
          // generate a random name and keep the original extension
          const name = Date.now() + '-' + Math.round(Math.random() * 1e9);
          callback(null, name + extname(file.originalname));
        },
      }),
      fileFilter: (req, file, callback) =&amp;gt; {
        // only accept common image formats
        if (!file.originalname.match(/\.(jpg|jpeg|png|gif)$/)) {
          return callback(new Error('Only image files are allowed'), false);
        }
        callback(null, true);
      },
    }),
  )
  createImage(@UploadedFile() file: Express.Multer.File): Promise&amp;lt;Image&amp;gt; {
    const image = new Image();
    image.name = file.filename;
    return this.appService.createImage(image);
  }

  @Get(':id')
  getImage(@Param('id') id: number): Promise&amp;lt;Image&amp;gt; {
    return this.appService.getImage(id);
  }

  @Delete(':id')
  deleteImage(@Param('id') id: number): Promise&amp;lt;void&amp;gt; {
    return this.appService.deleteImage(id);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above code snippet, we imported a couple of decorators: &lt;code&gt;FileInterceptor&lt;/code&gt;, &lt;code&gt;UploadedFile&lt;/code&gt;, and &lt;code&gt;UseInterceptors&lt;/code&gt;. The &lt;code&gt;FileInterceptor()&lt;/code&gt; interceptor is attached to the route handler with &lt;code&gt;@UseInterceptors()&lt;/code&gt;, and the uploaded file is then extracted from the request using the &lt;code&gt;@UploadedFile()&lt;/code&gt; decorator. &lt;code&gt;FileInterceptor()&lt;/code&gt; is exported from the &lt;code&gt;@nestjs/platform-express&lt;/code&gt; package, while &lt;code&gt;@UploadedFile()&lt;/code&gt; is exported from &lt;code&gt;@nestjs/common&lt;/code&gt;. &lt;code&gt;FileInterceptor()&lt;/code&gt; takes two arguments: &lt;code&gt;fieldName&lt;/code&gt;, a string naming the field from the HTML form that holds the file, and an optional &lt;code&gt;options&lt;/code&gt; object of type &lt;code&gt;MulterOptions&lt;/code&gt;, the same object used by the multer constructor.&lt;/p&gt;

&lt;p&gt;Regarding the &lt;code&gt;createImage&lt;/code&gt; function, we used &lt;code&gt;FileInterceptor()&lt;/code&gt; with the field name for the image and configured it to upload the image to disk by specifying the &lt;code&gt;storage&lt;/code&gt; property using the &lt;code&gt;diskStorage&lt;/code&gt; function available in &lt;code&gt;multer&lt;/code&gt;. There we specified the location for the images and generated random names for them. We also added a &lt;code&gt;fileFilter&lt;/code&gt; property to restrict uploads to certain image formats. Finally, we extracted the file with the &lt;code&gt;@UploadedFile()&lt;/code&gt; decorator and saved its name to the database, so we can later use the name of each image to retrieve it from the storage location.&lt;/p&gt;
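&lt;p&gt;As a small illustration of the random-name idea (this helper is hypothetical, not part of the tutorial code), a multer &lt;code&gt;filename&lt;/code&gt; callback typically builds names like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { extname } from 'path';

// Hypothetical helper: builds a collision-resistant file name while
// keeping the uploaded file's original extension, as a diskStorage
// filename callback would.
export function randomFileName(originalName: string): string {
  const unique = Date.now() + '-' + Math.round(Math.random() * 1e9);
  return unique + extname(originalName);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;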

&lt;p&gt;For the above code to work, you need to install the multer type definitions (multer itself ships with &lt;code&gt;@nestjs/platform-express&lt;/code&gt;) by running the command below in your terminal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i -D @types/multer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, you need to register the multer module in the array of imports in the &lt;code&gt;app.module.ts&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
import { MulterModule } from '@nestjs/platform-express';


@Module({
  ...
  MulterModule.register({
    dest: './files',
  }),],
  ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above configuration tells multer to handle the file upload and the location to upload the file to. Last but not least, we should create a &lt;code&gt;files&lt;/code&gt; folder in the &lt;code&gt;src&lt;/code&gt; directory to actually store the files.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serving Files
&lt;/h2&gt;

&lt;p&gt;To actually serve the images uploaded on your application to the user, you need to install the &lt;code&gt;serve-static&lt;/code&gt; module by running the command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install --save @nestjs/serve-static
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, register the &lt;code&gt;ServeStaticModule&lt;/code&gt; in the array of imports in the &lt;code&gt;app.module.ts&lt;/code&gt; file with the code snippet below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
import { ServeStaticModule } from '@nestjs/serve-static';
import { join } from 'path';

@Module({
  ...
  ServeStaticModule.forRoot({
    rootPath: join(__dirname, '..', 'files')
  }),],
  ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above code snippet, you’ve specified the location where the files are located and can be served from.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing the API
&lt;/h2&gt;

&lt;p&gt;Now open Postman and test the application by sending a POST request to the &lt;code&gt;localhost:4000/images&lt;/code&gt; endpoint, passing the payload in the request body as form-data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farctype.com%2Fblog%2Fcontent%2Fimages%2F2022%2F08%2Fimage.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farctype.com%2Fblog%2Fcontent%2Fimages%2F2022%2F08%2Fimage.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you now look at the &lt;code&gt;files&lt;/code&gt; folder, you should see the file you have uploaded. Feel free to test and play around with the other routes as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Through this tutorial, you’ve learned how to handle file upload with NestJS and MySQL. You’ve learned how to connect to a MySQL database using TypeORM and you have also created an entity and uploaded images to the Nestjs application.&lt;/p&gt;

&lt;p&gt;You've successfully built a file upload feature with NestJS and MySQL. Keep in mind, though, that looking after your applications and databases involves much more than building features for them: &lt;a href="https://arctype.com" rel="noopener noreferrer"&gt;SQL clients like Arctype&lt;/a&gt; let you write and optimize SQL queries and visualize the data in your database, and &lt;a href="https://arctype.com/blog" rel="noopener noreferrer"&gt;the content on the Arctype blog&lt;/a&gt; will teach you how to &lt;a href="https://dev.to__GHOST_URL__/mysql-storage-engine-big-data/"&gt;optimize all of your MySQL instances for security, availability, and performance&lt;/a&gt; with ease, while providing plenty of insight into the database world as a whole.&lt;/p&gt;

&lt;p&gt;For further reading, you can also read more about uploading files in &lt;a href="https://docs.nestjs.com/techniques/file-upload" rel="noopener noreferrer"&gt;Nestjs&lt;/a&gt;. For an extra challenge, try extending the application by protecting the delete and update routes. What will you build next?&lt;/p&gt;

</description>
      <category>programming</category>
      <category>nestjs</category>
      <category>mysql</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Deriving Ideal Indexes For Your SQL Database: A Guide</title>
      <dc:creator>Arctype Team</dc:creator>
      <pubDate>Thu, 28 Jul 2022 22:13:31 +0000</pubDate>
      <link>https://dev.to/arctype/deriving-ideal-indexes-for-your-sql-database-a-guide-1nem</link>
      <guid>https://dev.to/arctype/deriving-ideal-indexes-for-your-sql-database-a-guide-1nem</guid>
      <description>&lt;p&gt;Indexes are there to simplify our work when searching for data: they speed up &lt;code&gt;SELECT&lt;/code&gt; queries at the expense of slowing down other kinds of queries like &lt;code&gt;DELETE&lt;/code&gt;s, &lt;code&gt;UPDATE&lt;/code&gt;s, and &lt;code&gt;INSERT&lt;/code&gt;s instead. However, as awesome as indexes might be, they also need a lot of work to get right – in this blog, we will tell you how you should go about deriving ideal indexes for your database. The majority of the examples in this article will focus on MySQL: however, the concept is the same for all major database management systems available on the market today.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Indexes?
&lt;/h2&gt;

&lt;p&gt;If you are familiar with database structures, great – because that’s essentially what indexes are! Indexes are database structures that can be used to quickly find rows having specific column values. At the expense of extra disk space, and extra time when your tables are big, indexes allow databases to skip reading through entire tables and instead scan only the relevant rows, which means the database has far less data to search through.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Ideal Indexes?
&lt;/h2&gt;

&lt;p&gt;With that out of the way, what is an “ideal” index? We could define one like so: an ideal index is any index that helps us pursue our data-reading goals without taking too long to build, without occupying much disk space in return, and without slowing down &lt;code&gt;INSERT&lt;/code&gt;s, &lt;code&gt;UPDATE&lt;/code&gt;s, and &lt;code&gt;DELETE&lt;/code&gt;s to a point where working with the database becomes impractical. In other words, an ideal index speeds up search operations without putting a huge strain on database maintenance.&lt;/p&gt;

&lt;p&gt;Before choosing an “ideal” index for our database instance, we must decide what kind of an index type we are going to employ. For that, it is always useful to employ a questionnaire:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What kind of searches are we running? Exact matches, ranges, or prefix searches (wildcards at the end of the search string)? A B-Tree index would do.&lt;/li&gt;
&lt;li&gt;Do we need more exotic search operations, such as natural-language or boolean-mode text searches? A &lt;code&gt;FULLTEXT&lt;/code&gt; index will do.&lt;/li&gt;
&lt;li&gt;Do we work with data that needs to be unique (think data without duplicates)? We should head over to the &lt;code&gt;UNIQUE INDEX&lt;/code&gt; space.&lt;/li&gt;
&lt;/ul&gt;
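&lt;p&gt;For illustration, here is how each of those index types could be created in MySQL (the table and column names below are made up):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- B-Tree index for exact matches, ranges, and prefix searches
CREATE INDEX idx_users_email ON users (email);

-- FULLTEXT index for natural-language and boolean-mode text searches
CREATE FULLTEXT INDEX idx_posts_body ON posts (body);

-- UNIQUE index to reject duplicate values
CREATE UNIQUE INDEX idx_users_username ON users (username);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;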

&lt;p&gt;Once we have chosen our index type, it’s time to derive an ideal index.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deriving Ideal Indexes
&lt;/h2&gt;

&lt;p&gt;To not bore you with theory, we are going to get straight to the point – how do you derive an ideal index? It must be different for every scenario, no?&lt;/p&gt;

&lt;p&gt;In most cases, the approach outlined below would do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pick an index that most suits your use case. In most cases, ordinary B-Tree indexes would do; when we find ourselves running full-text searches (including boolean-mode searches), we employ a &lt;code&gt;FULLTEXT&lt;/code&gt; index instead. For more details, refer to &lt;a href="https://dev.to__GHOST_URL__/mysql-index-performance/"&gt;our indexing for high performance post&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Ensure your queries are designed properly – ensure that your &lt;code&gt;SELECT&lt;/code&gt; query only selects the fields that need to be returned instead of using &lt;code&gt;SELECT *&lt;/code&gt; and employ a &lt;code&gt;WHERE&lt;/code&gt; clause – if it’s not there, indexes are generally useless and won’t be used.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The entire situation could look something like this.&lt;/p&gt;

&lt;p&gt;a) Suppose you have five columns and you need to index three of them.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You decide to use a B-Tree index and index multiple columns at the same time.&lt;/li&gt;
&lt;li&gt;Put the column that you use straight after the &lt;code&gt;WHERE&lt;/code&gt; clause into the index first.&lt;/li&gt;
&lt;li&gt;If you use &lt;code&gt;ORDER BY&lt;/code&gt;, you should add the columns that you use this statement on afterwards. Do not use the columns that you’ve used in the previous step and do not change their order: that will make your query performance a little worse.&lt;/li&gt;
&lt;li&gt;If we want our query to be as fast as possible, we add all other columns that were left out after completing the third step.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The steps above describe a covering index. Covering indexes are a type of index in which all of the columns required for a query to execute are included in the index, and they are frequently added to a table using &lt;code&gt;ALTER TABLE&lt;/code&gt; queries. Here's how one &lt;code&gt;ALTER TABLE&lt;/code&gt; query would look in such a scenario:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER TABLE demo_table ADD INDEX demo_idx(demo_c1, demo_c2, demo_c3);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;demo_c1&lt;/code&gt; is the column that goes straight after the &lt;code&gt;WHERE&lt;/code&gt; clause, &lt;code&gt;demo_c2&lt;/code&gt; is the column that goes straight after the &lt;code&gt;ORDER BY&lt;/code&gt; statement, and &lt;code&gt;demo_c3&lt;/code&gt; is our “remaining” column. That’s it: we now have a covering index! Now we will tell you what your &lt;code&gt;SELECT&lt;/code&gt; query would look like. In most cases, it would look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT demo_c3 FROM demo_table WHERE demo_c1 = 'Demo' [ORDER BY demo_c2 ASC|DESC];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;demo_c1&lt;/code&gt;, in this case, would be the column that the search is being run on: &lt;code&gt;demo_c2&lt;/code&gt; would have integer values or dates, so MySQL could sort it. You get the idea.&lt;/p&gt;

&lt;p&gt;Designing indexes in such a way might not always be possible, however – always consider how much data you have and what kind of queries you run. &lt;a href="https://dev.to__GHOST_URL__/mysql-storage-engine-big-data/"&gt;If you work with bigger data sets&lt;/a&gt; and indexing an entire column is simply not feasible, you can index just a prefix of the data instead: such an index won't occupy a big chunk of hard drive space and will still offer somewhat better query performance.&lt;/p&gt;
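&lt;p&gt;For example, a prefix index covering only the first characters of a long column could be added like so (the prefix length of 10 is arbitrary – pick one long enough to keep the prefix selective):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Index only the first 10 characters of demo_c1
ALTER TABLE demo_table ADD INDEX demo_prefix_idx(demo_c1(10));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;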

&lt;p&gt;For many other cases, a mix of performance optimization and indexing will be required, and your work will revolve around a B-Tree index. Follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get an approximate number of the rows held in the table that you need to index and ensure the table is running InnoDB as the primary storage engine (MyISAM is known to be unreliable, as it is prone to table corruption after crashes.)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to__GHOST_URL__/mysql-storage-engine-big-data/"&gt;Make sure your InnoDB engine is optimized for high performance by modifying your my.cnf file&lt;/a&gt; (modify my.ini if you intend to run Windows.)&lt;/li&gt;
&lt;li&gt;Only add the index on columns that you run &lt;code&gt;SELECT&lt;/code&gt; queries on. If you intend to &lt;code&gt;ALTER&lt;/code&gt; the table, ensure to do it &lt;em&gt;after&lt;/em&gt; you load data into it and not before as MySQL and its flavors make a copy of the table when altering it.&lt;/li&gt;
&lt;/ol&gt;
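&lt;p&gt;The first of those steps can be sketched with a query against &lt;code&gt;information_schema&lt;/code&gt; (substitute your own schema and table names – note that &lt;code&gt;TABLE_ROWS&lt;/code&gt; is only an estimate for InnoDB tables):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Approximate row count and storage engine of the table to be indexed
SELECT TABLE_ROWS, ENGINE
  FROM information_schema.TABLES
 WHERE TABLE_SCHEMA = 'your_database'
   AND TABLE_NAME = 'demo_table';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;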

&lt;p&gt;Follow these steps, make sure to read our &lt;a href="https://dev.to__GHOST_URL__/mysql-index-performance/"&gt;indexing&lt;/a&gt; and &lt;a href="https://dev.to__GHOST_URL__/mysql-storage-engine-big-data/"&gt;optimizing for high performance&lt;/a&gt; tutorials, and you should be good to go as far as database performance is concerned – but if performance is still not up to par, it might be time to take a look at your database structure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Arctype and Your Database Structure
&lt;/h2&gt;

&lt;p&gt;In case you’re not aware of it already, an ideal index is not really “ideal” if your database structure is messed up. If you fail to utilize basic concepts of &lt;a href="https://dev.to__GHOST_URL__/2nf-3nf-normalization-example/"&gt;normalization&lt;/a&gt; and don’t know the difference between normalization forms, you might face some problems as well. Thankfully, you don’t need to search far and wide in this space – &lt;a href="https://arctype.com/"&gt;Arctype&lt;/a&gt; is the tool that can help you take care of your database structure and indexes as well. Launch Arctype and it will instantly suggest the steps you should take:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RHLIUk-h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/size/w1600/2021/10/image-55.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RHLIUk-h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/size/w1600/2021/10/image-55.png" alt="" width="880" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on one of your tables, and you will be able to see its structure!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uluRB0xg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-56.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uluRB0xg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-56.png" alt="" width="549" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you see the structure of a given table, you will no longer be hard-pressed to decide what kind of indexes you want to employ. You will also be able to better choose the length of a given data type, since the columns and their lengths are already displayed in front of you. Can it get easier than that? Arctype will also display a list of tables on the left-hand side: expand those and you will better understand what the other tables consist of as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FUZUWBWO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-57.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FUZUWBWO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-57.png" alt="" width="377" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you write your queries, do not forget that the Arctype client can also share them amongst your team and colleagues, and it even enables you to edit your table data on the spot. Click on any index that is added onto a table, and you will see a query that will let you recreate it. Awesome, right?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--itwMdrvs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-59.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--itwMdrvs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-59.png" alt="" width="880" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, we will not spoil all the fun for you. &lt;a href="https://arctype.com"&gt;Try Arctype yourself&lt;/a&gt; and you will see what it can do: the features we have covered here are merely starting points, the client has a lot of other features to offer too.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Deriving ideal indexes in any kind of database management system is not a very straightforward process; however, with enough knowledge about queries, database structure, and data sets in general, we can make it work – we hope that this blog post has provided you with some information on that front.&lt;/p&gt;

&lt;p&gt;Keep in mind that we publish a wide variety of topics ranging from web development to analytics and dashboards, so if this article hasn’t yet hooked you into the Arctype world, make sure to have a read through our other articles inside the &lt;a href="https://arctype.com/blog"&gt;Arctype blog&lt;/a&gt; and you will be surprised with what you can find!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>sql</category>
      <category>tutorial</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Use Django and the Dev.to API to Display Posts</title>
      <dc:creator>Arctype Team</dc:creator>
      <pubDate>Tue, 26 Jul 2022 20:50:37 +0000</pubDate>
      <link>https://dev.to/arctype/use-django-and-the-devto-api-to-display-posts-1pkg</link>
      <guid>https://dev.to/arctype/use-django-and-the-devto-api-to-display-posts-1pkg</guid>
      <description>&lt;p&gt;As a developer, you may be wondering if you need a blog. To be honest, I have been blogging for a while, and all I can tell you is that having a blog on your website has immense benefits. Let's just say that you have a website where you have a portfolio and you sell products or services. How can you drive more traffic to your website so that potential recruiters or customers can discover your products or services?&lt;/p&gt;

&lt;p&gt;The best way to drive traffic to your website is by having a blog with useful content. A blog with useful content shows your audience and customers that you are a trusted source of information.&lt;/p&gt;

&lt;p&gt;However, to have a blog on your website, you need a Content Management System (CMS). A CMS is a software solution that helps you create, manage, and modify content on a website without the need for specialized technical knowledge.&lt;/p&gt;

&lt;p&gt;Most CMSs are paid, and if you want to save a few dollars, a paid CMS is the last thing you want. In this article, I will show you how to add a blog to your website without having to spend a cent. I will be using Django, the Dev.to API, PostgreSQL, and &lt;a href="https://arctype.com" rel="noopener noreferrer"&gt;the Arctype SQL Client&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Dev.to, Django, and Postgres?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/"&gt;Dev.to&lt;/a&gt; is a blogging platform where developers publish articles on topics that help other developers solve a problem or learn something new. Dev.to serves as a community of software developers getting together to help one another out. Django is a free and open-source Python-based web framework that follows the model-template-views architectural pattern. Django encourages rapid development and clean, pragmatic design.&lt;/p&gt;
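&lt;p&gt;Since this tutorial will pull posts from the Dev.to REST API, here is a minimal sketch of the endpoint involved, written in Python (the username is illustrative, and &lt;code&gt;fetch_titles&lt;/code&gt; performs a real network request when called):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import urllib.request

DEVTO_ARTICLES = 'https://dev.to/api/articles'

def articles_url(username):
    # Dev.to exposes a user's published posts at /api/articles?username=...
    return DEVTO_ARTICLES + '?username=' + username

def fetch_titles(username):
    # Fetch the user's articles and return just their titles.
    with urllib.request.urlopen(articles_url(username)) as response:
        return [article['title'] for article in json.load(response)]

print(articles_url('rettx'))  # prints https://dev.to/api/articles?username=rettx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;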

&lt;p&gt;PostgreSQL, also known as Postgres, is a free and open-source relational database management system emphasizing SQL compliance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;First things first, to be able to follow through with this tutorial, you need a text editor installed on your machine. In this case, I will be using &lt;a href="https://code.visualstudio.com/" rel="noopener noreferrer"&gt;Visual Studio Code&lt;/a&gt; editor but you can use the code editor of your choice. Also, make sure that &lt;a href="https://www.python.org/downloads/" rel="noopener noreferrer"&gt;Python&lt;/a&gt; and &lt;a href="https://www.postgresql.org/download/" rel="noopener noreferrer"&gt;PostgreSQL&lt;/a&gt; are installed on your machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Setup
&lt;/h2&gt;

&lt;p&gt;To get started with this tutorial, create a folder named &lt;em&gt;Blog&lt;/em&gt; on your machine. Right-click on the folder and open it with the Visual Studio Code text editor or a text editor of your choice. When using Visual Studio Code, click the terminal option at the top and select &lt;em&gt;New Terminal&lt;/em&gt; as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2FFOyy8BS3NDASQ8H2-eXMcTiIxz2eyhw0hj2zH-vzF-vfK0ge-w3n6V0BZyuT30b7WDvt4vgjHu_g7NJrIRCYVPebt9STilo9vcApp6-BwiE38jOWfc-rGwJP9IaU-voYGFd6b3CttS68pFkhRaA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2FFOyy8BS3NDASQ8H2-eXMcTiIxz2eyhw0hj2zH-vzF-vfK0ge-w3n6V0BZyuT30b7WDvt4vgjHu_g7NJrIRCYVPebt9STilo9vcApp6-BwiE38jOWfc-rGwJP9IaU-voYGFd6b3CttS68pFkhRaA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us now install &lt;em&gt;Django&lt;/em&gt; in our Blog project. To install Django, we will use a packaging tool for Python called &lt;em&gt;Pipenv&lt;/em&gt;. Pipenv is used to manage virtual environments in Django projects. You can install Pipenv on your machine by using &lt;a href="https://pipenv-fork.readthedocs.io/en/latest/install.html" rel="noopener noreferrer"&gt;this guide&lt;/a&gt;. Now run the command below to install Django in your project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipenv install django
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then create a virtual environment in the project by running the command shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipenv shell
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you should see parentheses around Blog on your command line as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(Blog)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The parentheses around Blog indicate that the virtual environment has been activated.&lt;/p&gt;

&lt;p&gt;Let us now create a new Django project called Articles by running the command shown below.  Don’t forget the period (.) at the end.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;django-admin startproject Articles .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run the command shown below and then visit &lt;a href="http://127.0.0.1:8000" rel="noopener noreferrer"&gt;http://127.0.0.1:8000&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py runserver
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the Django welcome page as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2Fbqp-gHViPHoAln8OnL_A04f6ztQ8AdzBzi8y54or2JDfEcSpX4rRotnVynUaEPV0uPmqcb2BOxdNsVo0NWw28NwasJT6-prYRS09G-3BVR3WH7sV1rWE7n306pOnH4Ivn4hXJu8bQBTIsMNiChY" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2Fbqp-gHViPHoAln8OnL_A04f6ztQ8AdzBzi8y54or2JDfEcSpX4rRotnVynUaEPV0uPmqcb2BOxdNsVo0NWw28NwasJT6-prYRS09G-3BVR3WH7sV1rWE7n306pOnH4Ivn4hXJu8bQBTIsMNiChY"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us now create an app called &lt;em&gt;posts&lt;/em&gt; in our Django project. From the command line, quit the server with &lt;em&gt;control+c.&lt;/em&gt; Then run the command shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py startapp posts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On your Visual Studio Code editor window, click the &lt;em&gt;Articles&lt;/em&gt; folder and open the &lt;em&gt;settings.py&lt;/em&gt; file as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2Fpu9dH-uDCKSBEvrux5EewWRrUWGItHTE8edmKrpJs0Rc47ydkczx_Zs0KPewxLlwQYwixsLPG3RQqb_R8FiJWMddxz7tx22zoDm10EhPsP8xYfcOJFzIbnOZjesuCa4TPsoMoUrj_Q8vLYANa8w" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2Fpu9dH-uDCKSBEvrux5EewWRrUWGItHTE8edmKrpJs0Rc47ydkczx_Zs0KPewxLlwQYwixsLPG3RQqb_R8FiJWMddxz7tx22zoDm10EhPsP8xYfcOJFzIbnOZjesuCa4TPsoMoUrj_Q8vLYANa8w"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the settings.py file, scroll down to the &lt;code&gt;INSTALLED_APPS&lt;/code&gt; section, where six built-in Django apps are already listed. Add our new posts app at the bottom as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSTALLED_APPS = [
   'django.contrib.admin',
   'django.contrib.auth',
   'django.contrib.contenttypes',
   'django.contrib.sessions',
   'django.contrib.messages',
   'django.contrib.staticfiles',
   'posts',  # new
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting Up Postgres
&lt;/h2&gt;

&lt;p&gt;After installing Postgres on your machine, run the command below to confirm the installation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the version of Postgres installed on your system. Now run the command shown below to open the Postgres prompt.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo -u postgres psql postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the PostgreSQL prompt has opened, run the command below to set a password for the postgres user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\password postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To quit the PostgreSQL prompt, run the command shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\q
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To open the Postgres prompt again, run the command shown below and enter the password you set for postgres.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql -U postgres -W
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let us now create a database in Postgres that will be used to store posts for our blog project by running the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE posts;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run the command shown below to see if the database you created is on the list of databases in Postgres.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\l
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see from the screenshot below, the posts database was created in PostgreSQL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F31bOx1Z1YEMHO-2c97s20NNxKzaoGODale4G3ARxb8rXHo5Zv4Wj6biAuryugm8u4gm0Orih0bHbGu59ou4YLyzoayfM-bDmiGrxa9XtyGHS0LzVtgkQzccm2XKC9XrtmarFCelQsXE2qm1EWA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F31bOx1Z1YEMHO-2c97s20NNxKzaoGODale4G3ARxb8rXHo5Zv4Wj6biAuryugm8u4gm0Orih0bHbGu59ou4YLyzoayfM-bDmiGrxa9XtyGHS0LzVtgkQzccm2XKC9XrtmarFCelQsXE2qm1EWA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Postgres With Django
&lt;/h2&gt;

&lt;p&gt;Let us install the psycopg2 package, which lets Django talk to the Postgres database we created, by running the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipenv install psycopg2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the settings.py file in the Articles folder and scroll down to the DATABASES section, which looks as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DATABASES = {
   'default': {
       'ENGINE': 'django.db.backends.sqlite3',
       'NAME': BASE_DIR / 'db.sqlite3',
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The project is configured to use SQLite as a database. What you need to do is change this so that the project can use the Postgres database we created instead.&lt;/p&gt;

&lt;p&gt;Change the ENGINE so that it uses the Postgres adapter instead of the sqlite3 adapter. For NAME, use the name of your database, which in this case is &lt;em&gt;posts.&lt;/em&gt; You also need to add login credentials for your database.&lt;/p&gt;

&lt;p&gt;The USER should be postgres and PASSWORD should be the password you set for postgres. The DATABASES section of the settings.py file should now look as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DATABASES = {
   'default': {
       'ENGINE': 'django.db.backends.postgresql',
       'NAME': 'posts',
       'USER': 'postgres',
       'PASSWORD': 'password',
       'HOST': 'localhost',
       'PORT': '',
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Django Models
&lt;/h2&gt;

&lt;p&gt;Let us now create a Django model that defines fields of data that we want stored in the Postgres database. To create the model, navigate to the &lt;em&gt;models.py&lt;/em&gt; file in the posts folder as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F6cegfA3MTuv8j92XFFkUKLVrhTnQCZoAX2f9_NU0takI1zR-hr6nZTGecS4YBpobDN9Y6i3o6bmYGwuy2BcvFfH4xTfus6WXvExzW6SSeaEPXIo2vsxU0Eqsj1Ctl6waxdzNsMBY3N8fbRNCOaU" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F6cegfA3MTuv8j92XFFkUKLVrhTnQCZoAX2f9_NU0takI1zR-hr6nZTGecS4YBpobDN9Y6i3o6bmYGwuy2BcvFfH4xTfus6WXvExzW6SSeaEPXIo2vsxU0Eqsj1Ctl6waxdzNsMBY3N8fbRNCOaU"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your models.py file should now look as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.db import models
import datetime


class Article(models.Model):
   title = models.TextField(blank=True)
   description = models.TextField(blank=True)
   cover_image = models.TextField(blank=True)
   article_body = models.TextField(blank=True)
   published_at = models.DateTimeField(
       default=datetime.date.today, blank=True, null=True)

   def __str__(self):
       return self.title
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let us create and apply migrations to our database by running the commands below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py makemigrations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py migrate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let us now register the model by opening the &lt;em&gt;admin.py&lt;/em&gt; file in the posts folder and updating it so that it looks as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.contrib import admin
from django.contrib.auth.admin import UserAdmin

from .models import Article


class ArticleAdmin(admin.ModelAdmin):
   list_display = ('id', 'title')


admin.site.register(Article, ArticleAdmin)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to create an admin account by running the command below. You will be asked to select a username, provide an email address, and choose and confirm a password for the account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py createsuperuser
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Django Views
&lt;/h2&gt;

&lt;p&gt;Now visit &lt;a href="https://developers.forem.com/api/" rel="noopener noreferrer"&gt;Dev.to API documentation&lt;/a&gt; to learn how to get the API key together with various API endpoints. On the left side of the web page, select the &lt;em&gt;Authentication&lt;/em&gt; section to learn how to get an API key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2Fa0KJDzB4ecSjp0r2jWt1m6R4tv1HEzsF5KzWMvyNa4bz1WASd35-zURMXsMCyudd6bd8dCxNRCtV7yxgqQl1GKg4QcUUM17DdOTMU5yavCzPfA8k4AHEPQ4rhiFkvyVc3n5_gWvKPT1EbEsPEqo" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2Fa0KJDzB4ecSjp0r2jWt1m6R4tv1HEzsF5KzWMvyNa4bz1WASd35-zURMXsMCyudd6bd8dCxNRCtV7yxgqQl1GKg4QcUUM17DdOTMU5yavCzPfA8k4AHEPQ4rhiFkvyVc3n5_gWvKPT1EbEsPEqo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, below the Authentication section, select the Articles section and scroll down to User’s published articles. This is the endpoint that allows a client to retrieve a list of published articles on behalf of an authenticated user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2Fg1PKb-vnliR9ou0AcGDsn5wxzBoIU6dvurwuziLEZ0Vf5IALAuROt-TWqgS5SRVxxmu4-1KaSkSL8toc8II_kBi2Cc4BhgNyAnOfGMwR20d7e1joMxwVToP-3TZcz3aTJ-JWAsWz8giR4wp-Z6k" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2Fg1PKb-vnliR9ou0AcGDsn5wxzBoIU6dvurwuziLEZ0Vf5IALAuROt-TWqgS5SRVxxmu4-1KaSkSL8toc8II_kBi2Cc4BhgNyAnOfGMwR20d7e1joMxwVToP-3TZcz3aTJ-JWAsWz8giR4wp-Z6k"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the right side of the page, you can see the request samples that you can make to the Dev.to API.&lt;/p&gt;
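&lt;p&gt;Each element of the JSON array this endpoint returns is an object describing one article. The sketch below is a trimmed, hypothetical element showing only the fields this tutorial reads; the real payload contains many more.&lt;/p&gt;

```python
# A trimmed, hypothetical element of the JSON array returned by the
# "User's published articles" endpoint. Only the fields this tutorial
# reads are shown; the real payload contains many more.
sample_article = {
    "title": "My First Post",
    "description": "A short summary of the post.",
    "cover_image": "https://example.com/cover.png",
    "body_markdown": "## Hello\nThis is the raw Markdown body.",
    "published_at": "2022-08-01T12:00:00Z",
}

# These lookups mirror the ones the Django view performs.
print(sample_article["title"])         # My First Post
print(sample_article["published_at"])  # 2022-08-01T12:00:00Z
```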

&lt;p&gt;Let us now create a Python view function in the views.py file that makes HTTP requests to the Dev.to API, as shown below. The requests library is a third-party package, so if it is not already in your environment, install it first with &lt;code&gt;pipenv install requests&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.shortcuts import render
import requests
from .models import Article


def get_articles(request):
   all_articles = {}

   API_KEY = 'Your_API_KEY'

   url = 'https://dev.to/api/articles/me/published'

   headers = {'api-key': API_KEY}

   response = requests.get(url, headers=headers)

   data = response.json()

   for i, item in enumerate(data):
       article_data = Article(
           title=data[i]['title'],
           description=data[i]['description'],
           cover_image=data[i]['cover_image'],
           article_body=data[i]['body_markdown'],
           published_at=data[i]['published_at']
       )

       article_data.save()

       all_articles = Article.objects.all().order_by(
           '-published_at').distinct('published_at')

       return render(request, "blog.html", {"all_articles": all_articles})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In order to display the body of the article, we will need to create another view function that queries the article by id.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def blogBody(request, id):
   article = Article.objects.get(id=id)

   return render(request, "blogBody.html", {"article": article})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your views.py file should now look as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.shortcuts import render
import requests
from .models import Article


def get_articles(request):
   all_articles = {}

   API_KEY = 'Your_API_KEY'

   url = 'https://dev.to/api/articles/me/published'

   headers = {'api-key': API_KEY}

   response = requests.get(url, headers=headers)

   data = response.json()

   for i, item in enumerate(data):
       article_data = Article(
           title=data[i]['title'],
           description=data[i]['description'],
           cover_image=data[i]['cover_image'],
           article_body=data[i]['body_markdown'],
           published_at=data[i]['published_at']
       )

       article_data.save()

       all_articles = Article.objects.all().order_by(
           '-published_at').distinct('published_at')

       return render(request, "blog.html", {"all_articles": all_articles})


def blogBody(request, id):
   article = Article.objects.get(id=id)

   return render(request, "blogBody.html", {"article": article})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Django URLs
&lt;/h2&gt;

&lt;p&gt;Now we need to configure the URLs of our project by creating a new urls.py file in the posts folder. The new urls.py file should look as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.urls import path
from .views import blogBody


urlpatterns = [
   path("blog", blog, name="blog"),
   path("article/&amp;lt;int:id&amp;gt;", blogBody, name="article"),
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last step is to update the Articles/urls.py file so that it looks as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.contrib import admin
from django.urls import path, include

urlpatterns = [
   path('admin/', admin.site.urls),
   path("", include("posts.urls"))
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Django Templates
&lt;/h2&gt;

&lt;p&gt;Create a directory called &lt;em&gt;templates&lt;/em&gt;, then create two HTML files called &lt;em&gt;blog.html&lt;/em&gt; and &lt;em&gt;blogBody.html&lt;/em&gt; like so.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir templates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch templates/blog.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch templates/blogBody.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let us now update the Articles/settings.py file to tell Django the location of our new templates directory. This is a one-line change to the 'DIRS' setting under the TEMPLATES section, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TEMPLATES = [
   {
       'BACKEND': 'django.template.backends.django.DjangoTemplates',
       'DIRS': [os.path.join(BASE_DIR, 'templates')], #new
       'APP_DIRS': True,
       'OPTIONS': {
           'context_processors': [
               'django.template.context_processors.debug',
               'django.template.context_processors.request',
               'django.contrib.auth.context_processors.auth',
               'django.contrib.messages.context_processors.messages',
           ],
       },
   },
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don’t forget to add &lt;em&gt;import os&lt;/em&gt; at the top of the &lt;em&gt;settings.py&lt;/em&gt; file, or you will get an error that os is not defined. Your templates/blog.html file should now look as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{% for article in all_articles %}
&amp;lt;img src="{{article.cover_image}}" alt=""&amp;gt;
&amp;lt;h4&amp;gt;{{article.title}}&amp;lt;/h4&amp;gt;
&amp;lt;p&amp;gt;{{article.description}}&amp;lt;/p&amp;gt;
&amp;lt;a href="{% url 'article' article.id %}"&amp;gt;Read More...&amp;lt;/a&amp;gt;
{% endfor %}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Navigate to &lt;a href="http://127.0.0.1:8000/blog" rel="noopener noreferrer"&gt;127.0.01:800/blog&lt;/a&gt; and you should see articles from Dev To as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2Fjsk0q4_jGFfqEkjFmu01e1wwdQ32hd4XjHCXZTL3Lxo6hqSyvxCItAEWDQSftHbFz8oOdjey9NLxdUHXh9EanidxJBRJO6089Hj-rF-iwOWjZNMQFH4VtDM1n9NVFzCd59x60LwLkT7up1Q-N9s" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2Fjsk0q4_jGFfqEkjFmu01e1wwdQ32hd4XjHCXZTL3Lxo6hqSyvxCItAEWDQSftHbFz8oOdjey9NLxdUHXh9EanidxJBRJO6089Hj-rF-iwOWjZNMQFH4VtDM1n9NVFzCd59x60LwLkT7up1Q-N9s"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Article Body
&lt;/h2&gt;

&lt;p&gt;When we make the API call, the article body arrives as a raw Markdown string. This means it looks like the content you type into the Dev.to editor rather than the rendered, published version (i.e. with all the Markdown syntax, such as ## for headings, still visible). To display it on your website the way it appears on Dev.to, you need an extra step that converts the Markdown string into HTML. Plenty of Markdown parsing libraries can do this; in this project, we will use the Python library called &lt;code&gt;Markdown&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To use &lt;code&gt;Markdown&lt;/code&gt; we will need to install it into our project using the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipenv install markdown
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
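&lt;p&gt;To see what the package does before wiring it into Django, here is a minimal standalone sketch (not part of the project files) that converts a Markdown string to HTML:&lt;/p&gt;

```python
# Standalone demo of the markdown package (not part of the project
# files): converting a raw Markdown string into HTML.
import markdown as md

raw = "## Hello from Dev.to"
html = md.markdown(raw)

# The '##' heading syntax becomes an h2 element in the output.
print(html)
```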



&lt;p&gt;Let us create a custom template filter that uses &lt;code&gt;Markdown&lt;/code&gt;. Create a &lt;code&gt;templatetags&lt;/code&gt; directory within our posts app and then a &lt;code&gt;markdown_extras.py&lt;/code&gt; file as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir posts/templatetags
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch posts/templatetags/markdown_extras.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The file itself will import the markdown package and use the fenced code block extension.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# posts/templatetags/markdown_extras.py
from django import template
from django.template.defaultfilters import stringfilter

import markdown as md

register = template.Library()


@register.filter()
@stringfilter
def markdown(value):
   return md.markdown(value, extensions=['markdown.extensions.fenced_code'])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let us load the custom filter into our template so that content written in Markdown will be output as HTML. Our &lt;code&gt;blogBody.html&lt;/code&gt; file should now look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{% load markdown_extras %}

&amp;lt;h4&amp;gt;{{article.title}}&amp;lt;/h4&amp;gt;
&amp;lt;img src="{{article.cover_image}}" alt=""&amp;gt;
&amp;lt;span&amp;gt;{{article.published_at }}&amp;lt;/span&amp;gt;
&amp;lt;p&amp;gt;{{article.article_body | markdown | safe}}&amp;lt;/p&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Navigate to &lt;a href="http://127.0.0.1:8000/blog" rel="noopener noreferrer"&gt;http://127.0.0.1:8000/blog&lt;/a&gt; and click the Read More link at the bottom of an article. You should be redirected to the article body as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2F3vldvh7C5eVPw-OZZ-Ayxx5i88s_QEGIQxR4PH97ioZJAKLhU0ySiQ1fj__h6ACMWNLPd9gpMkAYfrVgZam9OwBI7qij1h4O5_BcWCVkeA6eBGl_13KxFIXbXZPZekKT-eV9cw0ggjDjVnmEt9w" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2F3vldvh7C5eVPw-OZZ-Ayxx5i88s_QEGIQxR4PH97ioZJAKLhU0ySiQ1fj__h6ACMWNLPd9gpMkAYfrVgZam9OwBI7qij1h4O5_BcWCVkeA6eBGl_13KxFIXbXZPZekKT-eV9cw0ggjDjVnmEt9w"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Querying Postgres Data Using Arctype SQL Client
&lt;/h2&gt;

&lt;p&gt;To view or interact with blog data stored in PostgreSQL, you can use a SQL client and database management tool like &lt;a href="https://arctype.com" rel="noopener noreferrer"&gt;Arctype&lt;/a&gt;. To get started, navigate to the Arctype homepage and download the app onto your machine. At present, Arctype supports Linux, Windows, and macOS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FUc2bvUANl1etWO9-x5NQpS8D-eQZRyCbugES8XJ52S54xc_kfXz6VTR3VfGBbZeyyJaL5QaQLuLgEQ3Y4a7ORb3tDb2UBVtmbwphulawe4eiuUL7cDmbB1rIhYR8kLO6p6yDMgwWiRaChltZBwE" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FUc2bvUANl1etWO9-x5NQpS8D-eQZRyCbugES8XJ52S54xc_kfXz6VTR3VfGBbZeyyJaL5QaQLuLgEQ3Y4a7ORb3tDb2UBVtmbwphulawe4eiuUL7cDmbB1rIhYR8kLO6p6yDMgwWiRaChltZBwE"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the app has been installed, open it and you will be prompted to create an account as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2Fx10vzJ6tN4X2F402INXFc64Mh9_Jqh9QwZvoKxbI_dv_GopP2GnfVEdimtjurnTOe6n6ecMT4DIYJNY6fAZHo1MGqUYKXh0b44_2mGZN2oG9b8465KudnQoGOw75qovwmB8ke95TD_8utuBDL6g" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2Fx10vzJ6tN4X2F402INXFc64Mh9_Jqh9QwZvoKxbI_dv_GopP2GnfVEdimtjurnTOe6n6ecMT4DIYJNY6fAZHo1MGqUYKXh0b44_2mGZN2oG9b8465KudnQoGOw75qovwmB8ke95TD_8utuBDL6g"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have created an account, you will be prompted to select your database. In this case, select Postgres.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F40Qnb0xsYFvY-pMrAwWPRwMv5gu_kZbIVM19ygcPgqVQTc1vw6uWQbMXbMO-IktLuXClX4eKmIHrYPtcxH6uzggEHmbsT6l63i_QAA1CHaMJXtPAywDB4fNTrX3oQRbq7KVY6y-Hxv3NGlMZapE" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F40Qnb0xsYFvY-pMrAwWPRwMv5gu_kZbIVM19ygcPgqVQTc1vw6uWQbMXbMO-IktLuXClX4eKmIHrYPtcxH6uzggEHmbsT6l63i_QAA1CHaMJXtPAywDB4fNTrX3oQRbq7KVY6y-Hxv3NGlMZapE"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to add Postgres credentials to connect Arctype to your Postgres database. Use the credentials you created when setting up the database in the Postgres prompt.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FEGLUmiLklr9b6auGOouoEEIyVYeQy-ICHlLQwS7g4NDuXQ8U-4Nk3V7K3vlfkifmkvV2N3ECFwcmp3WY90BH7dQbEtiPV8MO6XLsby5oWGRvar_rbm-NtaGjMHyY_FiNyAe0pnp-r1u-7k36dW8" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FEGLUmiLklr9b6auGOouoEEIyVYeQy-ICHlLQwS7g4NDuXQ8U-4Nk3V7K3vlfkifmkvV2N3ECFwcmp3WY90BH7dQbEtiPV8MO6XLsby5oWGRvar_rbm-NtaGjMHyY_FiNyAe0pnp-r1u-7k36dW8"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the database connection is successful, you should be able to access tables in the posts database in Postgres as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FraUzhdAZBsnSDBc-gcGXlMOlz-pD1qtVkCfNrJCzfG_MdK-FsmBgzVvWAEK0HrbDxiUr5zeTjqFVtYryKEtZyPeIRhH_E2RbRMtDsz8HX981jBc9Ul3UPcZtWY4H7zR82WJQQy8Cb9LEb-Gb9Vw" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FraUzhdAZBsnSDBc-gcGXlMOlz-pD1qtVkCfNrJCzfG_MdK-FsmBgzVvWAEK0HrbDxiUr5zeTjqFVtYryKEtZyPeIRhH_E2RbRMtDsz8HX981jBc9Ul3UPcZtWY4H7zR82WJQQy8Cb9LEb-Gb9Vw"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to learn how to create and run queries in Arctype, &lt;a href="https://docs.arctype.com/getting-started/making-a-query" rel="noopener noreferrer"&gt;check out this guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, you've learned how to create a blog on your website without paying for a content management system or building one from scratch. I've walked you through the steps to display Dev.to posts on your blog using Django and the Dev.to API. All of the code provided in this blog &lt;a href="https://github.com/TheGreatBonnie/Blog" rel="noopener noreferrer"&gt;is available on GitHub&lt;/a&gt;. If you've enjoyed this article, make sure to stick around &lt;a href="https://dev.to__GHOST_URL__/"&gt;the Arctype blog&lt;/a&gt; for more content. Until next time!&lt;/p&gt;

</description>
      <category>django</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>postgres</category>
    </item>
    <item>
      <title>When to Use JSON in Your Database</title>
      <dc:creator>Arctype Team</dc:creator>
      <pubDate>Thu, 21 Jul 2022 21:17:00 +0000</pubDate>
      <link>https://dev.to/arctype/when-to-use-json-in-your-database-dn1</link>
      <guid>https://dev.to/arctype/when-to-use-json-in-your-database-dn1</guid>
      <description>&lt;p&gt;In this blog, you will learn when you should consider adding JSON data types to your tables and when you should avoid it.&lt;/p&gt;

&lt;p&gt;Most relational database technologies can now store JSON data and perform queries on it. The introduction of the JSON data type to relational databases represented a turning point and opened up many new possibilities. Considering the innovative nature of this feature, you might have several doubts about whether adopting JSON data is the right choice. In particular, the risk of abusing JSON and losing the benefits of the relational model is high. That is why having a clear understanding of the advantages and disadvantages that JSON data can bring to a relational database is so important.&lt;/p&gt;

&lt;p&gt;In this blog, we will explore everything you need to know about JSON columns and tell you when best to rely on them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is JSON?
&lt;/h2&gt;

&lt;p&gt;JSON stands for&lt;a href="https://en.wikipedia.org/wiki/JSON"&gt; JavaScript Object Notation&lt;/a&gt; and represents a lightweight, easy-to-understand, human-readable data format. In a relational database, the JSON data type is a special binary or text column that allows you to store data in JSON format. Here's an example of JSON:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "user": {
    "id": "1",
    "name": "Maria",
    "surname": "Williams",
    "hobbies": [
      {
        "name": "Tennis",
        "since": 1985
      },
      {
        "name": "Reading",
        "since": 1974
      }
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The relational databases that support JSON generally come with JSON-specific functions and operators to query and perform operations on this data type. For example, you can learn more about what PostgreSQL has to offer when it comes to JSON data &lt;a href="https://dev.to__GHOST_URL__/json-in-postgresql/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Do Relational Databases Support JSON?
&lt;/h2&gt;

&lt;p&gt;Over time, the features offered to users have become increasingly complex. As a result, it is unlikely that the right data structure to support an application can be purely relational. The importance of data is increasing, and the way to represent it is evolving accordingly. Those who develop relational database technologies know this and have introduced the JSON data type to support the new way of thinking about data. &lt;/p&gt;

&lt;p&gt;MySQL introduced JSON support starting from &lt;a href="https://dev.mysql.com/doc/relnotes/mysql/5.7/en/"&gt;version 5.7&lt;/a&gt;, which became available to the public in 2015. PostgreSQL has supported the JSON data type since&lt;a href="https://www.postgresql.org/about/news/postgresql-92-released-1415/"&gt; version 9.2&lt;/a&gt;, released in 2012. SQL Server has supported JSON since&lt;a href="https://docs.microsoft.com/en-us/sql/sql-server/sql-server-2016-release-notes?view=sql-server-2016"&gt; SQL Server 2016&lt;/a&gt;. As you can see, most of the popular RDBMSes can store JSON data. This does not mean that the relational model is dying or is no longer useful. On the contrary, it is simply evolving and adapting to new needs.&lt;/p&gt;

&lt;p&gt;After all, relational database technologies support JSON only as a data type. This means that you can add one or more JSON columns to your relational tables. This does not radically disrupt the relational approach and represents just a possibility, a useful and powerful feature to have.&lt;/p&gt;

&lt;h1&gt;
  
  
  When To Store JSON Data in a Relational Database?
&lt;/h1&gt;

&lt;p&gt;Embracing the JSON data type and adding JSON columns to your tables can bring several advantages to your relational schema. This is especially true when accomplishing a couple of goals defined below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using JSON for Logging Purposes
&lt;/h2&gt;

&lt;p&gt;Log data should be easy to read and understand, and the JSON format is a great fit for it. Therefore, consider converting your log data to JSON and storing it in JSON columns. A JSON column containing log data is an effective way to see what happened just by looking at a row. With this approach, you do not have to define new tables, perform JOINs, or spend time retrieving the log data.&lt;/p&gt;

&lt;p&gt;JSON columns are also useful for logging where the data came from, especially when importing data via API. Considering that API responses are generally in JSON format, storing them in a JSON column is an effortless way to preserve this data and use it when needed.&lt;/p&gt;
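&lt;p&gt;As a hypothetical sketch (table and field names invented; SQLite stands in for a self-contained demo), storing the raw API response next to the imported row looks like this:&lt;/p&gt;

```python
import json
import sqlite3

# Hypothetical API response: stored verbatim next to the imported row,
# so the origin of the data can be inspected later without extra tables.
api_response = {"source": "https://api.example.com/users/42",
                "status": 200,
                "body": {"id": 42, "name": "Ada"}}

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        import_log TEXT  -- raw JSON of the API call that produced this row
    )
""")
conn.execute(
    "INSERT INTO users (id, name, import_log) VALUES (?, ?, ?)",
    (42, "Ada", json.dumps(api_response)),
)

# One row is enough to see what happened during the import.
log = json.loads(conn.execute(
    "SELECT import_log FROM users WHERE id = 42").fetchone()[0])
print(log["status"])  # 200
```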

&lt;h2&gt;
  
  
  To Store Permissions and Configurations
&lt;/h2&gt;

&lt;p&gt;Not all users may have access to the same features and sections of your application. Similarly, each user might configure your application based on their preferences. These are two common scenarios, and both involve data that changes frequently over time. This is because your application is likely to evolve, involving new configurations, views, features, and sections. As a result, you have to continuously update your relational schema to match the new data structure. This takes time and energy.&lt;/p&gt;

&lt;p&gt;Instead, you can store permissions and configurations in a JSON column directly connected to your user table. JSON is also a natural format for permissions and configuration, since your application most likely already handles this data as JSON.&lt;/p&gt;
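&lt;p&gt;A minimal sketch of this idea (invented names, SQLite for a self-contained demo):&lt;/p&gt;

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        email TEXT NOT NULL,
        settings TEXT  -- per-user permissions and preferences as JSON
    )
""")
conn.execute(
    "INSERT INTO users (id, email, settings) VALUES (?, ?, ?)",
    (1, "ada@example.com",
     json.dumps({"theme": "dark", "can_export": True, "beta": ["charts"]})),
)

# New preference keys can appear later without any ALTER TABLE.
settings = json.loads(conn.execute(
    "SELECT settings FROM users WHERE id = 1").fetchone()[0])
print(settings["can_export"])  # True
```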

&lt;h2&gt;
  
  
  To Avoid Slow Performance on Highly Nested Data
&lt;/h2&gt;

&lt;p&gt;If performance is a key requirement and your application involves data that is highly nested, you should consider JSON columns. This is particularly true when you always read and write that data as a single unit. In a relational approach, you would have to structure your data in several tables. To retrieve it, you would have to nest several &lt;code&gt;JOIN&lt;/code&gt;s, which would make queries very slow.&lt;/p&gt;

&lt;p&gt;On the contrary, by storing this nested data in a JSON column, you can retrieve it with a simple &lt;code&gt;SELECT&lt;/code&gt; query. This would make your data retrieval process faster while keeping your data structure easy.&lt;/p&gt;
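&lt;p&gt;For example, a deeply nested order document can be stored and retrieved in one round trip (hypothetical schema, SQLite for a self-contained demo):&lt;/p&gt;

```python
import json
import sqlite3

# Hypothetical deeply nested document: an order with customer and items
# that the application always consumes as one piece.
order = {
    "customer": {"name": "Ada", "address": {"city": "London"}},
    "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}],
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, doc TEXT)")
conn.execute("INSERT INTO orders (doc) VALUES (?)", (json.dumps(order),))

# One simple SELECT instead of JOINs over orders, customers, items...
doc = json.loads(conn.execute(
    "SELECT doc FROM orders WHERE id = 1").fetchone()[0])
print(doc["customer"]["address"]["city"])  # London
```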

&lt;h1&gt;
  
  
  When Should You Avoid JSON Data in a Relational Database?
&lt;/h1&gt;

&lt;p&gt;Some consider using the JSON data type an antipattern, or something to adopt sparingly. The reason is that using a format without constraints such as JSON can undermine your relational schema. Specifically, you should avoid JSON in the following cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  You Are Not Sure What Data to Store in the JSON Column
&lt;/h2&gt;

&lt;p&gt;The JSON format is flexible, but this does not mean that you should use it to store whatever you want. Before adding JSON columns to your database, you must clearly define their purpose. Otherwise, they are likely to become junk drawers. If this happens, the consequences for the future of your application can be severe. The only way to avoid it is by designing a data format for your JSON columns.&lt;/p&gt;

&lt;p&gt;Avoid using JSON columns if you do not know how you will use them. A JSON column with no defined structure means no data validation at the database level, which can lead to a lack of consistency and integrity at the application level.&lt;/p&gt;

&lt;h2&gt;
  
  
  You Do Not Want to Deal With Complex Queries
&lt;/h2&gt;

&lt;p&gt;Storing data in JSON format is cool, but do not forget that it comes with additional costs and complications. One of these is that querying data in JSON columns leads to more complicated-looking queries.&lt;/p&gt;

&lt;p&gt;Here is what a query involving JSON columns looks like in PostgreSQL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y2OT_YFU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/_0VJUHarnZszLrGUxXRxNd7-HbQ5qyAgFzsviO4GDD1MEdRufixokquhn7h4tQIFCnybKchP1sluhIvsiAvXQSzHoPw6Y4NX0u7-1P3RYa0TFxe3aEgmyeq3SOmYco9gJsTD5HM__oKCK3WqxTpvhg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y2OT_YFU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/_0VJUHarnZszLrGUxXRxNd7-HbQ5qyAgFzsviO4GDD1MEdRufixokquhn7h4tQIFCnybKchP1sluhIvsiAvXQSzHoPw6Y4NX0u7-1P3RYa0TFxe3aEgmyeq3SOmYco9gJsTD5HM__oKCK3WqxTpvhg" alt="" width="708" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is what the equivalent query looks like in a traditional scenario:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R-egKK1a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh4.googleusercontent.com/lnFe6eX8pUX6WMvPTrfnJSQ2JjVss5T1gWTR9LFfDRSkt-eQ5m_NT9yZnJ8Y-XbF4gRqcRwfVzM1OHzOVbV1d8T8Lzwoel9PmK-Sy1iVSjO2yhXsslV7JPFACL-f2bThQRWsV9QWxkATGLsfe2cr8A" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R-egKK1a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh4.googleusercontent.com/lnFe6eX8pUX6WMvPTrfnJSQ2JjVss5T1gWTR9LFfDRSkt-eQ5m_NT9yZnJ8Y-XbF4gRqcRwfVzM1OHzOVbV1d8T8Lzwoel9PmK-Sy1iVSjO2yhXsslV7JPFACL-f2bThQRWsV9QWxkATGLsfe2cr8A" alt="" width="599" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second one is much easier to read and understand. Avoid using JSON if its benefits are not enough to justify the complexity of the new queries.&lt;/p&gt;
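&lt;p&gt;To make the contrast concrete, here is an invented pair of equivalent queries (not the ones in the screenshots above; SQLite keeps the demo self-contained):&lt;/p&gt;

```python
import sqlite3

# Invented schema contrasting the two styles.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- JSON style: one table, addresses embedded as a JSON array.
    CREATE TABLE users_json (id INTEGER PRIMARY KEY, data TEXT);
    INSERT INTO users_json VALUES
        (1, '{"name": "Ada", "addresses": [{"city": "London"}]}');

    -- Traditional style: two tables and a foreign key.
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE addresses (user_id INTEGER, city TEXT);
    INSERT INTO users VALUES (1, 'Ada');
    INSERT INTO addresses VALUES (1, 'London');
""")

# JSON version: a path expression buried inside the SQL string.
city_json = conn.execute(
    "SELECT json_extract(data, '$.addresses[0].city') FROM users_json"
).fetchone()[0]

# Traditional version: a plain JOIN, easier to read at a glance.
city_rel = conn.execute(
    "SELECT a.city FROM users u JOIN addresses a ON a.user_id = u.id"
).fetchone()[0]
print(city_json, city_rel)  # London London
```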

&lt;h2&gt;
  
  
  You Have a Strongly Typed ORM
&lt;/h2&gt;

&lt;p&gt;When mapping your database with a strongly typed &lt;a href="https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping"&gt;ORM&lt;/a&gt; technology, you may encounter problems involving data types. This is because data types at the database level are different from data types at the application level. When you want to deal with the data stored in JSON columns at the application level, you have two approaches.&lt;/p&gt;

&lt;p&gt;The first one is to define JSON columns in your ORM mapping as strings and then transform them into objects. Otherwise, if your ORM supports this option, you can map the column to a custom structured type. In either case, the application or the ORM must perform the data conversion behind the scenes. This comes at a cost and can compromise backend performance.&lt;/p&gt;
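&lt;p&gt;A sketch of the first approach (the &lt;code&gt;Settings&lt;/code&gt; class and field names are invented for illustration):&lt;/p&gt;

```python
import json
from dataclasses import dataclass

# The ORM hands the JSON column over as a plain string; the application
# converts it to a typed object by hand on every read.
@dataclass
class Settings:
    theme: str
    can_export: bool

raw_column_value = '{"theme": "dark", "can_export": true}'  # from the ORM

def to_settings(raw: str) -> Settings:
    # This conversion is the hidden, per-read cost discussed above.
    data = json.loads(raw)
    return Settings(theme=data["theme"], can_export=data["can_export"])

settings = to_settings(raw_column_value)
print(settings.theme)  # dark
```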

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Data has evolved, and the relational model might no longer be enough to represent data efficiently. This is why the vendors of some of the most popular relational database technologies introduced ways to deal with unstructured data by using the JSON data type. Thanks to this, you can introduce the &lt;a href="https://dev.to__GHOST_URL__/relational-vs-document-vs-graph-database/"&gt;benefits of NoSQL into your relational database&lt;/a&gt;. Here, we looked at what the JSON data type is, why it was introduced, and when to use or avoid it. Make sure to stick around our blog for more content, and we will see you next time.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How To Design a Multi-Language Database</title>
      <dc:creator>Arctype Team</dc:creator>
      <pubDate>Tue, 19 Jul 2022 21:28:32 +0000</pubDate>
      <link>https://dev.to/arctype/how-to-design-a-multi-language-database-1pid</link>
      <guid>https://dev.to/arctype/how-to-design-a-multi-language-database-1pid</guid>
      <description>&lt;p&gt;Today we look at three best-practice database designs to store data in multiple languages and easily scale to new markets.&lt;/p&gt;

&lt;p&gt;Reaching &lt;a href="https://dev.to__GHOST_URL__/sql-hierarchy/"&gt;millions of users&lt;/a&gt; with an application is every developer's dream. Achieving this goal becomes easier if users from all over the world can use your application. Since not all users know English or your target language, you need to design your application to be multilingual. This would not be possible without a multi-language database.&lt;/p&gt;

&lt;p&gt;Designing a multi-language database that can easily scale to new languages is not easy. This is why you should rely on &lt;a href="https://dev.to__GHOST_URL__/optimize-sql-query/"&gt;best practices&lt;/a&gt;. Here, you will see why you need a multi-language database, three great multi-language database designs, and how to choose the best one for you.&lt;/p&gt;

&lt;p&gt;Let’s now learn everything you need to know about multi-language database design.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Do You Need a Multi-Language Database?
&lt;/h2&gt;

&lt;p&gt;You never know how much your project will grow, and your small app for your friends could become an international service used by millions of people. If you want your product to be able to scale internationally, you need to design it so that you can easily adapt it to different cultures and markets. That is what internationalization is about.&lt;/p&gt;

&lt;p&gt;Building a database ready for internationalization means designing a database that can store multilingual data. In other words, the backend should be able to provide data in multiple languages. To do this, the backend should connect and retrieve this data from a multi-language database.&lt;/p&gt;

&lt;p&gt;Note that letting users switch between several languages is a nice feature to have. This is especially useful for polyglots or non-native speakers. So, even if your project is small and targets a local market, you should consider a multi-language database. After all, you cannot know in advance how successful your project will be and which users will use it. Let’s now learn why choosing the right multi-language database design for your needs is crucial.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why You Should Design Your Multi-Language Database Carefully
&lt;/h2&gt;

&lt;p&gt;Changing a database is a very costly operation in terms of time and energy because it triggers a ripple effect. Modifying the structure of the database involves changing the backends connected to it. This may also require you to adapt the frontends that rely on those backends accordingly. As you can see, changing a database structure is not an operation without consequences. For this reason, you should design a database that scales easily and that you do not have to change frequently.&lt;/p&gt;

&lt;p&gt;As you can imagine, there are several ways to design a multi-language database, and each solution has its pros and cons. Considering the importance of the database structure for an application, you have to design your multi-language database wisely. This is why you should not start from scratch, but rely on best practices.&lt;/p&gt;

&lt;p&gt;Let’s now take a look at some multi-language designs based on best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  3 Multi-Language Database Designs
&lt;/h2&gt;

&lt;p&gt;Let’s delve deeper into the pros and cons of three multi-language designs I have personally used as a full-stack web developer &lt;a href="https://dev.to__GHOST_URL__/sql-hierarchy/"&gt;working for startups all over the world&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Column Approach
&lt;/h2&gt;

&lt;p&gt;In this approach, each translatable field in a multilingual table gets as many columns as there are languages supported by the database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FlO_hENhlz7_llBeJR0LYR0Sdr4zyFGSlmJxScQocqVxKjeQ4Z08Hu9rD7r7-1G6QWGqubaX1GeC2geU5H3D2HShk_x9I57mB7SXPUrGO4VQ3U_v_9aKMotZEQUfwhBNyvuOjioEb8O82dmiE_BigU78" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FlO_hENhlz7_llBeJR0LYR0Sdr4zyFGSlmJxScQocqVxKjeQ4Z08Hu9rD7r7-1G6QWGqubaX1GeC2geU5H3D2HShk_x9I57mB7SXPUrGO4VQ3U_v_9aKMotZEQUfwhBNyvuOjioEb8O82dmiE_BigU78"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Specifically, this is what the column name template looks like: &lt;code&gt;columnName_languageCode&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simple&lt;/strong&gt;: it is easy to implement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast&lt;/strong&gt;: it does not involve &lt;code&gt;JOIN&lt;/code&gt; or slow queries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy to deal with non-translated fields&lt;/strong&gt;: if the translation for a field is missing, you can just use &lt;a href="https://dev.to__GHOST_URL__/null-in-sql/#coalesce-function"&gt;&lt;code&gt;COALESCE&lt;/code&gt;&lt;/a&gt;. E.g. &lt;code&gt;COALESCE(name_it, name_en)&lt;/code&gt; → returns &lt;code&gt;name_it&lt;/code&gt; if it is not &lt;code&gt;NULL&lt;/code&gt;, otherwise the &lt;code&gt;name_en&lt;/code&gt; default value.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hard to maintain&lt;/strong&gt;: adding a new language requires updating all multi-language tables in the database. This also means that you need to change your ORM mappings accordingly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Not scalable:&lt;/strong&gt; the number of columns in tables grows with the number of languages supported by the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex &lt;code&gt;SELECT&lt;/code&gt; conditions&lt;/strong&gt;: forget about &lt;code&gt;SELECT *&lt;/code&gt;. You need to specify each column in your &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/select.html" rel="noopener noreferrer"&gt;&lt;code&gt;SELECT&lt;/code&gt; clause&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
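&lt;p&gt;The &lt;code&gt;COALESCE&lt;/code&gt; fallback from the pros list can be sketched like this (invented table, with SQLite standing in for a self-contained demo):&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE products (
        id INTEGER PRIMARY KEY,
        name_en TEXT NOT NULL,   -- default language
        name_it TEXT             -- may be NULL if not translated yet
    )
""")
conn.executemany(
    "INSERT INTO products (id, name_en, name_it) VALUES (?, ?, ?)",
    [(1, "chair", "sedia"), (2, "table", None)],
)

# Fall back to English when the Italian translation is missing.
rows = conn.execute(
    "SELECT COALESCE(name_it, name_en) FROM products ORDER BY id"
).fetchall()
names = [r[0] for r in rows]
print(names)  # ['sedia', 'table']
```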

&lt;h2&gt;
  
  
  2. Row Approach
&lt;/h2&gt;

&lt;p&gt;In this approach, there is one row for each language. The key identifying a multi-language entity is the following &lt;a href="https://dev.to__GHOST_URL__/postgres-uuid/#creating-a-composite-primary-key"&gt;composite primary key&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;id, languageCode&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F5F239n1A22ypI-4X-T84gYRcGk9pFNDd10GCzsUGapxV03j9WKB4KX5kZ7uBIZTA4aaUj_5_dsQVexBpWuvqB7wjb50JBQoQM-SNxpXqXNhBrB4h_K0XUSJQDN5IrTDiRM581kBt0nbCzdMLiYNHik0" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F5F239n1A22ypI-4X-T84gYRcGk9pFNDd10GCzsUGapxV03j9WKB4KX5kZ7uBIZTA4aaUj_5_dsQVexBpWuvqB7wjb50JBQoQM-SNxpXqXNhBrB4h_K0XUSJQDN5IrTDiRM581kBt0nbCzdMLiYNHik0"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simple&lt;/strong&gt;: it is easy to implement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast&lt;/strong&gt;: retrieving the translated content only requires a &lt;code&gt;WHERE&lt;/code&gt; condition on &lt;code&gt;languageCode&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Complex keys&lt;/strong&gt;: using a composite primary key makes identifying an element and &lt;a href="https://dev.to__GHOST_URL__/mysql-join/"&gt;&lt;code&gt;JOIN&lt;/code&gt; queries&lt;/a&gt; more complex.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duplicated content&lt;/strong&gt;: to simplify things, non-translated columns generally store the same content saved in the columns of the default language row. This means that you will have a lot of duplicate content in multi-language tables.&lt;/li&gt;
&lt;/ul&gt;
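&lt;p&gt;Here is a minimal sketch of the row approach (invented table, SQLite for a self-contained demo); the composite primary key and the &lt;code&gt;WHERE&lt;/code&gt; condition on the language code are the whole trick:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE products (
        id INTEGER NOT NULL,
        language_code TEXT NOT NULL,   -- 'en', 'it', ...
        name TEXT NOT NULL,
        PRIMARY KEY (id, language_code)
    )
""")
conn.executemany(
    "INSERT INTO products VALUES (?, ?, ?)",
    [(1, "en", "chair"), (1, "it", "sedia")],
)

# Retrieving translated content only requires a WHERE on language_code.
name = conn.execute(
    "SELECT name FROM products WHERE id = 1 AND language_code = 'it'"
).fetchone()[0]
print(name)  # sedia
```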

&lt;h2&gt;
  
  
  3. Translation Table Approach
&lt;/h2&gt;

&lt;p&gt;In this approach, each translatable column of a multi-language table is a foreign key into a translation table. In other words, there is a translation table for each table that involves multi-language fields.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FI-faz8NeC16S9ChQwdXNRl8hoaFBxYVPlIICFW3FRnTZ1TpX_b3iNVGEyCWyfCa4nGGZ9ikzoF8mFbTQILf_jEsFLMGWtGynoIQ4w_SzhKog-OMoB8xC2NOLAP07bocy7S7iJ_zVcvU0eTS01desgpQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FI-faz8NeC16S9ChQwdXNRl8hoaFBxYVPlIICFW3FRnTZ1TpX_b3iNVGEyCWyfCa4nGGZ9ikzoF8mFbTQILf_jEsFLMGWtGynoIQ4w_SzhKog-OMoB8xC2NOLAP07bocy7S7iJ_zVcvU0eTS01desgpQ"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FLLEqH7nh1Uh6YN7dQ8CDp6NFG91vDupanKLBtqV0OSqnUzB3ZS2F550LRhc7S1x_saBAY0AUZyoI5h8qLBelmLIS4-iFLTtbRNVfGQiyVVhHg1m-iYBefKb4tCxAJjiIiWtqMvC7phgTa17eBEShGo4" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FLLEqH7nh1Uh6YN7dQ8CDp6NFG91vDupanKLBtqV0OSqnUzB3ZS2F550LRhc7S1x_saBAY0AUZyoI5h8qLBelmLIS4-iFLTtbRNVfGQiyVVhHg1m-iYBefKb4tCxAJjiIiWtqMvC7phgTa17eBEShGo4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalable&lt;/strong&gt;: adding a new language does not involve changes to the database structure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supports historical data&lt;/strong&gt;: you can use the translation table to keep track of the translation history for each field in an entity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralized&lt;/strong&gt;: all translations for an entity are stored in one place.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Complex queries&lt;/strong&gt;: queries become more complex because the information associated with a multi-language entity is spread over two tables.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slow queries&lt;/strong&gt;: retrieving all the info associated with an entity requires as many &lt;code&gt;JOIN&lt;/code&gt;s as the number of columns. Considering the burden that &lt;code&gt;JOIN&lt;/code&gt;s have on performance, this can easily become a problem.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duplicated tables&lt;/strong&gt;: it increases the size of your database in terms of tables. Plus, you need two tables to define each multi-language entity.&lt;/li&gt;
&lt;/ul&gt;
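&lt;p&gt;A minimal sketch of the translation table approach (invented schema, SQLite for a self-contained demo) shows both the centralization and the one-&lt;code&gt;JOIN&lt;/code&gt;-per-column cost:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE translation (
        id INTEGER NOT NULL,
        language_code TEXT NOT NULL,
        content TEXT NOT NULL,
        PRIMARY KEY (id, language_code)
    );
    CREATE TABLE products (
        id INTEGER PRIMARY KEY,
        name_tid INTEGER NOT NULL,        -- points into translation
        description_tid INTEGER NOT NULL  -- one JOIN per translated column
    );
    INSERT INTO translation VALUES
        (10, 'en', 'chair'), (10, 'it', 'sedia'),
        (11, 'en', 'a comfy chair'), (11, 'it', 'una sedia comoda');
    INSERT INTO products VALUES (1, 10, 11);
""")

# Two translated columns already require two JOINs.
row = conn.execute("""
    SELECT n.content, d.content
    FROM products p
    JOIN translation n ON n.id = p.name_tid AND n.language_code = 'it'
    JOIN translation d ON d.id = p.description_tid AND d.language_code = 'it'
    WHERE p.id = 1
""").fetchone()
print(row)  # ('sedia', 'una sedia comoda')
```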

&lt;h2&gt;
  
  
  What Is the Best Multi-Language Database Design for You?
&lt;/h2&gt;

&lt;p&gt;As you can see, each multi-language database design has its own balance of pros and cons. This means there is no preferred approach in absolute terms: you have to choose carefully based on your requirements, needs, and goals. I have personally used all three approaches in different projects and learned the lessons shared here.&lt;/p&gt;

&lt;p&gt;The column approach is particularly useful when dealing with &lt;a href="https://dev.to__GHOST_URL__/mysql-storage-engine-big-data/"&gt;Big Data&lt;/a&gt;, especially when your database contains &lt;a href="https://dev.to__GHOST_URL__/partition-tables-sql/"&gt;tables with several million rows&lt;/a&gt; and you simply cannot afford &lt;code&gt;JOIN&lt;/code&gt; queries or duplicated content. Although it is not the most scalable solution, the column approach is often the only viable design at that scale.&lt;/p&gt;

&lt;p&gt;On the other hand, the row approach is useful when your company's local branches have freedom of action in their markets. In this case, duplication of content can even be an advantage.&lt;/p&gt;

&lt;p&gt;Finally, the translation table approach is the right design if you are looking for an elegant and scalable solution and do not mind the performance drawback. It is best for projects where you know the data will not grow too large.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Designing a database that can store multilingual data is essential to help your business scale internationally. Also, it enables non-native speakers to use your application. Changing a database is a complex and time-consuming operation with non-negligible consequences on the application architecture. This is why you should design your database as multi-language right from the start. Here, we have looked at three different approaches to designing a multilingual database, studying their pros and cons, and going into detail about which one is best for you.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>database</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>The Darker Side of ALTER TABLE: A Guide</title>
      <dc:creator>Arctype Team</dc:creator>
      <pubDate>Thu, 14 Jul 2022 20:02:53 +0000</pubDate>
      <link>https://dev.to/arctype/the-darker-side-of-alter-table-a-guide-50j5</link>
      <guid>https://dev.to/arctype/the-darker-side-of-alter-table-a-guide-50j5</guid>
      <description>&lt;p&gt;If you frequently find yourself immersed into the MySQL world, there’s probably no need to explain to you what SQL queries are and what they do. Chances are, you know them like your five fingers. &lt;code&gt;SELECT&lt;/code&gt; queries allow us to read data, &lt;code&gt;INSERT&lt;/code&gt; queries allow us to insert data into a given database, &lt;code&gt;UPDATE&lt;/code&gt; queries allow us to update data for certain rows or for the entire table if we wish, &lt;code&gt;DELETE&lt;/code&gt; queries let us delete records from a table, etc. If you’ve ever dug deeper into the world of MySQL though, you must know that there is one more query that is very important for both database administrators and developers – that query is &lt;code&gt;ALTER TABLE&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is ALTER TABLE?
&lt;/h2&gt;

&lt;p&gt;Simply put, &lt;code&gt;ALTER TABLE&lt;/code&gt; is a query in MySQL that allows us to modify (or &lt;code&gt;ALTER&lt;/code&gt;) the structure of a table. The things that &lt;code&gt;ALTER TABLE&lt;/code&gt; can do are numerous – it can be used to rename tables or columns, or to add or modify columns as well. In practice though, &lt;code&gt;ALTER TABLE&lt;/code&gt; is most often used when indexes are involved – adding an index changes the structure of a table, and that's where &lt;code&gt;ALTER TABLE&lt;/code&gt; comes into play once again.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Speed of ALTER TABLE
&lt;/h2&gt;

&lt;p&gt;The speed of &lt;code&gt;ALTER TABLE&lt;/code&gt; statements is, obviously, directly dependent on the size of the tables or columns we want to modify – however, there’s one caveat. &lt;code&gt;ALTER TABLE&lt;/code&gt; performs all of its operations &lt;em&gt;on a copy of the table, and not on the table itself&lt;/em&gt;. In other words, when MySQL is instructed to modify a table and an &lt;code&gt;ALTER TABLE&lt;/code&gt; statement is run, MySQL makes a copy of the table that is being modified, inserts the data that we currently have inside of our table into it, performs all of the required operations there, and copies the data back into our table – only then do we see the results.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ALTER TABLE&lt;/code&gt; might work a little differently if we are not using InnoDB as our primary storage engine of choice – MyISAM will not show the rows that are being inserted into the table until all operations have completed, but nonetheless, the core premise remains the same. The speed of such a statement is directly dependent on the factors outlined below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The structure of our database tables.&lt;/li&gt;
&lt;li&gt;The size of our tables.&lt;/li&gt;
&lt;li&gt;Our web and database server.&lt;/li&gt;
&lt;li&gt;Our MySQL configuration.&lt;/li&gt;
&lt;li&gt;The way we run queries.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;First off, the structure of our tables is important because queries involving columns that were not “cared for” (indexed, etc.) properly will be slower than they should be. Also, keep in mind that if our tables are not normalized, our queries will be slower than they should be, especially if our tables are big. Imagine copying possibly millions of unnecessary rows into another table – sounds slow, doesn't it?&lt;/p&gt;

&lt;p&gt;Secondly, the size of our tables does indeed matter: the bigger our tables are, the longer they will take to copy over. This point is also heavily related to all of the others, because we have to think about how our web and database servers are configured and about the way we run our queries so that they complete without issues and as quickly as possible. For example, if our database server is configured to use 80% of the available operating memory, chances are our queries will complete a whole lot faster than with the default configuration. Also, if we run multiple &lt;code&gt;ALTER TABLE&lt;/code&gt; queries one after another while low on disk space, we risk running out of it altogether. Allow us to illustrate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Dark Side of ALTER TABLE
&lt;/h2&gt;

&lt;p&gt;Let’s say we have a dedicated server with 16GB RAM and 100GB of hard drive space. A good DBA would probably configure it to occupy 8GB or more of operating memory, and suppose that the table that we want to run &lt;code&gt;ALTER TABLE&lt;/code&gt; queries on consists of 100 million records and isn’t normalized. All in all, everything that this table consists of occupies 20GB of space on the disk.&lt;/p&gt;

&lt;p&gt;At this point, we might probably have a couple of other tables we work with – say, they occupy another 50GB of hard drive space, but you get the point. This is the query we run: feel free to adapt, then copy and paste it into your &lt;a href="https://arctype.com/"&gt;Arctype client&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ALTER TABLE demo_table MODIFY demo_column VARCHAR(150) NOT NULL AFTER another_column;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jLwZEvPh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jLwZEvPh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-61.png" alt="" width="880" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, MySQL would make a copy of the table on disk, insert the current data into it, perform all of the necessary operations, and only then swap the old and new tables. Remember that we have 30GB of disk space left? Once the temporary table is created, we would have only 10GB left. By then, we would probably start praying that the index occupies less than 10GB of space and our queries complete successfully. If we run out of disk space, our queries get interrupted. Do we really want to see results like this? Of course not.&lt;/p&gt;

&lt;h2&gt;
  
  
  Avoiding Problems with ALTER TABLE
&lt;/h2&gt;

&lt;p&gt;As you can see, once &lt;code&gt;ALTER TABLE&lt;/code&gt; gets more deeply involved in our database processes, we may very well be faced with multiple problems. How do we solve them?&lt;/p&gt;

&lt;p&gt;One way would be to use a tool such as pt-online-schema-change, which makes a copy of the table and uses triggers to keep it in sync and accessible while the new table is being prepared – in other words, it changes the structure of tables without locking them. For a full guide, head over &lt;a href="https://www.percona.com/doc/percona-toolkit/3.0/pt-online-schema-change.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Another way we can accomplish such a task would be manual, meaning that we can also use an approach like so:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We need to ensure that &lt;code&gt;INSERT&lt;/code&gt; statements would be quick – the table that we create should not have any indexes, and our MySQL instances should be configured properly.&lt;/li&gt;
&lt;li&gt;We need to create a new table.&lt;/li&gt;
&lt;li&gt;Once our table is created, we need to copy the data over from our old table into our newly created one using &lt;code&gt;INSERT INTO demo2 SELECT * FROM demo&lt;/code&gt;, where &lt;code&gt;demo2&lt;/code&gt; is the new table and &lt;code&gt;demo&lt;/code&gt; is the old one. Before doing that though, ensure that the structure of both tables is the same (see the image below point #5).&lt;/li&gt;
&lt;li&gt;We remove (drop) the old table: &lt;code&gt;DROP TABLE demo&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;We rename the new table to the name of the old table: &lt;code&gt;ALTER TABLE demo2 RENAME TO demo&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
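&lt;p&gt;The steps above can be sketched as follows (SQLite syntax for a self-contained demo; in MySQL you would run the statements from the list as-is):&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE demo (id INTEGER PRIMARY KEY, payload TEXT);
    INSERT INTO demo VALUES (1, 'a'), (2, 'b');

    -- Step 2: create the new table with the desired structure
    -- (no indexes yet, so the bulk INSERT stays fast).
    CREATE TABLE demo2 (id INTEGER PRIMARY KEY, payload TEXT NOT NULL);

    -- Step 3: copy the data over.
    INSERT INTO demo2 SELECT * FROM demo;

    -- Step 4: drop the old table.
    DROP TABLE demo;

    -- Step 5: rename the new table to the old name.
    ALTER TABLE demo2 RENAME TO demo;
""")
count = conn.execute("SELECT COUNT(*) FROM demo").fetchone()[0]
print(count)  # 2
```

&lt;p&gt;In a production setup you would recreate any needed indexes only after the bulk &lt;code&gt;INSERT&lt;/code&gt;, as noted in step 1.&lt;/p&gt;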

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xvUF5ELC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-64.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xvUF5ELC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-64.png" alt="" width="880" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we prefer to complete steps manually, such an approach will do. However, do keep in mind that such an approach is pretty much what &lt;code&gt;ALTER TABLE&lt;/code&gt; does in the first place, and that in some corner cases, &lt;code&gt;DROP TABLE&lt;/code&gt; might become awfully slow to execute – in that case, kill the query by using &lt;code&gt;SHOW PROCESSLIST&lt;/code&gt; and then &lt;code&gt;KILL&lt;/code&gt; a specific query ID. Then, execute a &lt;code&gt;TRUNCATE TABLE&lt;/code&gt; statement – truncating tables is usually way faster than removing them altogether.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sCUm7g7j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sCUm7g7j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://arctype.com/blog/content/images/2021/10/image-62.png" alt="" width="880" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it – problem solved!&lt;/p&gt;

&lt;p&gt;You might also have noticed that we’ve used the &lt;a href="https://arctype.com/"&gt;Arctype client&lt;/a&gt; in the majority of examples in this blog – and that’s not without a reason. Arctype builds an advanced SQL client that can help us run SQL queries, complete parts of them automatically, share query results with a team, modify our table structure on the fly, and even build custom dashboards. Can it get any better? Hardly. Try the Arctype client today, take care of your data, normalize your databases and schemas, and, of course, use &lt;code&gt;ALTER TABLE&lt;/code&gt; queries cautiously – we will see you in the next blog!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>sql</category>
      <category>tutorial</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
