<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniil Bazhenov</title>
    <description>The latest articles on DEV Community by Daniil Bazhenov (@dbazhenov).</description>
    <link>https://dev.to/dbazhenov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1045029%2F1ed8ce8f-63f0-4cc0-9f73-f6ed748bc717.jpeg</url>
      <title>DEV Community: Daniil Bazhenov</title>
      <link>https://dev.to/dbazhenov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dbazhenov"/>
    <language>en</language>
    <item>
      <title>Securing Your Data in PostgreSQL Using PG_TDE for Encryption – Beginner-Friendly Guide</title>
      <dc:creator>Daniil Bazhenov</dc:creator>
      <pubDate>Thu, 04 Sep 2025 13:29:51 +0000</pubDate>
      <link>https://dev.to/dbazhenov/securing-your-data-in-postgresql-using-pgtde-for-encryption-beginner-friendly-guide-5acc</link>
      <guid>https://dev.to/dbazhenov/securing-your-data-in-postgresql-using-pgtde-for-encryption-beginner-friendly-guide-5acc</guid>
      <description>&lt;p&gt;Let’s explore how to encrypt data in PostgreSQL — not just at the application level, but directly within the database engine itself. Transparent Data Encryption (TDE) allows you to protect sensitive information by encrypting it at the storage layer, without changing how your application interacts with the database.&lt;/p&gt;

&lt;p&gt;In this walkthrough, we’ll use &lt;a href="https://docs.percona.com/pg-tde/index.html" rel="noopener noreferrer"&gt;PG_TDE&lt;/a&gt;, an open-source extension that brings TDE capabilities to PostgreSQL. It encrypts table data, indexes, TOAST storage, temporary tables — and now even WAL (Write-Ahead Logging), meaning changes are protected from the moment they’re written to disk.&lt;/p&gt;

&lt;p&gt;The key advantage of TDE is that it works transparently: no changes are needed in your application code or SQL queries. PostgreSQL continues to operate as usual, while encryption and decryption happen behind the scenes. This is especially valuable in scenarios where an attacker gains access to the underlying storage — without TDE, raw database files can be read directly; with TDE, they’re encrypted and unreadable without the proper keys.&lt;/p&gt;

&lt;p&gt;We’ll set up PostgreSQL with PG_TDE in Docker, configure encryption keys, create encrypted tables, and inspect how the data is stored. While PG_TDE supports multiple key management options, we’ll start with a simple keyring file provider — ideal for local testing and experimentation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: The keyring file is intended for development only. In production, you should use a secure external key store such as HashiCorp Vault or KMIP.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Installing PostgreSQL with PG_TDE via Docker&lt;/h2&gt;

&lt;p&gt;PG_TDE is part of the &lt;a href="https://docs.percona.com/postgresql/" rel="noopener noreferrer"&gt;Percona Distribution for PostgreSQL&lt;/a&gt; — an enhanced PostgreSQL distribution that includes tools for monitoring, auditing, replication, and of course, Transparent Data Encryption. We’ll use the official &lt;a href="https://docs.percona.com/postgresql/17/docker.html" rel="noopener noreferrer"&gt;Docker image&lt;/a&gt;, which already contains everything needed to run PG_TDE.&lt;/p&gt;

&lt;p&gt;Let’s start by launching the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --name pg-tde \
  -e POSTGRES_PASSWORD=secret \
  -e ENABLE_PG_TDE=1 \
  -p 5432:5432 \
  -d percona/percona-distribution-postgresql:17.5-3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;POSTGRES_PASSWORD=secret&lt;/code&gt; — sets the superuser password&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ENABLE_PG_TDE=1&lt;/code&gt; — enables Transparent Data Encryption support&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-p 5432:5432&lt;/code&gt; — publishes the standard PostgreSQL port&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
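&lt;p&gt;PostgreSQL takes a few seconds to initialize on first start. Before connecting, you can check the container log to confirm it is ready (this check is optional):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker logs --tail 5 pg-tde
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once the log shows &lt;code&gt;database system is ready to accept connections&lt;/code&gt;, you can proceed.&lt;/p&gt;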

&lt;p&gt;After launching, connect to PostgreSQL inside the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it pg-tde psql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And activate the PG_TDE extension:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE EXTENSION pg_tde;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that PG_TDE is enabled and check its version using the SQL function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT pg_tde_version();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s the result at the time of writing — I’m using PG_TDE 2.0.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajsxutp302apuosqa24b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajsxutp302apuosqa24b.png" alt="Postgres - TDE - Docker - Version" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also connected to PostgreSQL using pgAdmin, accessing it via localhost with the user postgres and the password from POSTGRES_PASSWORD.&lt;/p&gt;

&lt;p&gt;pgAdmin is very convenient for exploring the database via UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyor8by1yagi32yuqaw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyor8by1yagi32yuqaw8.png" alt="Postgres - TDE - pgAdmin" width="800" height="612"&gt;&lt;/a&gt;&lt;/p&gt;
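&lt;p&gt;If you prefer the command line to pgAdmin, you can reach the same instance from the host through the published port, assuming a psql client is installed locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql -h localhost -p 5432 -U postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Enter the password from &lt;code&gt;POSTGRES_PASSWORD&lt;/code&gt; when prompted.&lt;/p&gt;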

&lt;p&gt;Now TDE is ready — we can create encrypted tables, manage keys, and verify how data is protected at the storage level.&lt;/p&gt;

&lt;h2&gt;Configuring Encryption Keys with PG_TDE&lt;/h2&gt;

&lt;p&gt;After installing and activating the pg_tde extension, the next step is to configure the keys that will be used for data encryption. In this example, we’re using a key provider based on a keyring file — a simple and fast way to start working with PG_TDE in development mode.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Important: The keyring file stores keys in unencrypted form and is intended for testing and development only. For production, it’s recommended to use an external key store such as HashiCorp Vault — we’ll cover that in the next post.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Add the key provider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT pg_tde_add_database_key_provider_file(
    'file-vault',
    '/tmp/pg_tde_local_keyring.per'
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Registers a key provider named &lt;code&gt;file-vault&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stores keys in the file located at &lt;code&gt;/tmp/pg_tde_local_keyring.per&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accepts any location or filename, as long as PostgreSQL has access to it; if the file doesn’t exist, it will be created when the first key is generated&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;It’s important not to lose this file — it contains the encryption keys. If needed, it can be placed in a local folder and mounted into the container via Docker, but for this task, using a path inside the container is perfectly fine.&lt;/p&gt;
&lt;/blockquote&gt;
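&lt;p&gt;For example, to keep the keyring on the host, you could mount a local directory when starting the container — the &lt;code&gt;tde-keys&lt;/code&gt; directory and &lt;code&gt;/keys&lt;/code&gt; mount point here are just an illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --name pg-tde \
  -e POSTGRES_PASSWORD=secret \
  -e ENABLE_PG_TDE=1 \
  -p 5432:5432 \
  -v "$PWD/tde-keys:/keys" \
  -d percona/percona-distribution-postgresql:17.5-3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You would then register the provider with the path &lt;code&gt;/keys/pg_tde_local_keyring.per&lt;/code&gt; and make sure the mounted directory is writable by the postgres user inside the container.&lt;/p&gt;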

&lt;p&gt;Create the key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT pg_tde_create_key_using_database_key_provider(
    'test-db-key', 
    'file-vault'
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’re creating a key named &lt;code&gt;test-db-key&lt;/code&gt; inside the specified key provider. This key will be used to encrypt tables.&lt;/p&gt;

&lt;p&gt;Set the active key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT pg_tde_set_key_using_database_key_provider(
  'test-db-key',
  'file-vault'
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command sets &lt;code&gt;test-db-key&lt;/code&gt; as the current active key, which will be used when creating new encrypted tables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0k4ag7gr7tomyhza87w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0k4ag7gr7tomyhza87w.png" alt="Postgres - PG_TDE - Docker - Key creation" width="800" height="670"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s verify.&lt;/p&gt;

&lt;p&gt;Retrieve key information using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT pg_tde_key_info();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;postgres=# SELECT pg_tde_key_info();
                      pg_tde_key_info
------------------------------------------------------------
 (test-db-key,file-vault,1,"2025-09-04 07:16:04.420629+00")
(1 row)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the contents of the keyring file — just for experimentation.&lt;/p&gt;

&lt;p&gt;In a separate terminal tab, connect to the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it pg-tde bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use &lt;code&gt;hexdump&lt;/code&gt; to read the file since it contains binary data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hexdump -C /tmp/pg_tde_local_keyring.per | head
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will output the first bytes of the file in hex format, allowing you to confirm that the file is not empty and the key is indeed recorded.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8jyce44yz973u9tl6x4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8jyce44yz973u9tl6x4.png" alt="Postgres - TDE - Docker - Key file" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the file is empty or unchanged, the key may not have been created or written.&lt;/p&gt;

&lt;h2&gt;Creating Tables with Data&lt;/h2&gt;

&lt;p&gt;Now that the encryption key is set, let’s create two tables: one with encryption enabled, and one regular. This will help visually demonstrate how PG_TDE works and how data storage differs.&lt;/p&gt;

&lt;p&gt;Create the encrypted table by specifying &lt;code&gt;USING tde_heap&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE secure_data (
  id SERIAL PRIMARY KEY,
  secret TEXT,
  created_at DATE
) USING tde_heap;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add a few rows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSERT INTO secure_data (secret, created_at) VALUES
  ('The launch code is hidden in plain sight.', '2025-09-01'),
  ('Trust no one. The truth is encrypted.', '2025-09-02'),
  ('The treasure lies beneath the third stone by the old oak.', '2025-09-03');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a similar table without encryption:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE plain_data (
  id SERIAL PRIMARY KEY,
  secret TEXT,
  created_at DATE
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Insert the same data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSERT INTO plain_data (secret, created_at) VALUES
  ('The launch code is hidden in plain sight.', '2025-09-01'),
  ('Trust no one. The truth is encrypted.', '2025-09-02'),
  ('The treasure lies beneath the third stone by the old oak.', '2025-09-03');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now both tables contain identical rows, but one stores the data in encrypted form.&lt;/p&gt;

&lt;p&gt;Let’s check the encryption status using PG_TDE.&lt;/p&gt;

&lt;p&gt;Use the built-in function &lt;code&gt;pg_tde_is_encrypted&lt;/code&gt;, which returns true (&lt;code&gt;t&lt;/code&gt;) if the table uses encryption, and false (&lt;code&gt;f&lt;/code&gt;) if not:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;postgres=# SELECT pg_tde_is_encrypted('secure_data');
 pg_tde_is_encrypted
---------------------
 t
(1 row)

postgres=# SELECT pg_tde_is_encrypted('plain_data');
 pg_tde_is_encrypted
---------------------
 f
(1 row)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Excellent, now we can move on to comparison: let’s see how the data looks inside PostgreSQL — and how it’s stored on disk. This will help confirm that PG_TDE truly protects content, even if someone gains direct access to the files.&lt;/p&gt;

&lt;h2&gt;Comparing Tables: Encrypted vs Unencrypted&lt;/h2&gt;

&lt;p&gt;At the SQL query level, both tables — secure_data (encrypted) and plain_data (unencrypted) — look absolutely identical. This is important: PG_TDE does not affect SQL interface behavior, and the application continues to work with the data as usual.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;postgres=# SELECT * FROM secure_data;
 id |                          secret                           | created_at
----+-----------------------------------------------------------+------------
  1 | The launch code is hidden in plain sight.                 | 2025-09-01
  2 | Trust no one. The truth is encrypted.                     | 2025-09-02
  3 | The treasure lies beneath the third stone by the old oak. | 2025-09-03
(3 rows)

postgres=# SELECT * FROM plain_data;
 id |                          secret                           | created_at
----+-----------------------------------------------------------+------------
  1 | The launch code is hidden in plain sight.                 | 2025-09-01
  2 | Trust no one. The truth is encrypted.                     | 2025-09-02
  3 | The treasure lies beneath the third stone by the old oak. | 2025-09-03
(3 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Displaying the data in pgAdmin&lt;/h3&gt;

&lt;p&gt;Data from the &lt;code&gt;plain_data&lt;/code&gt; table:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66hv6jec0hqzfn6nte70.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66hv6jec0hqzfn6nte70.png" alt="PG_TDE - Docker - Postgres plain_data" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Data from the encrypted table &lt;code&gt;secure_data&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7cwlmbb5tl0emux6nf1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7cwlmbb5tl0emux6nf1.png" alt="PG_TDE - Docker - Postgres secure_data" width="800" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The pgAdmin screenshots also show no differences: data is displayed in readable form, regardless of whether the table is encrypted. This confirms that PG_TDE operates at the storage level, without affecting data handling logic.&lt;/p&gt;

&lt;p&gt;As you can see, the result is identical: encryption is transparent to the user and application.&lt;/p&gt;

&lt;h2&gt;Comparing Table File Contents&lt;/h2&gt;

&lt;p&gt;Now let’s verify that PG_TDE truly encrypts data at the storage level. To do this, we’ll locate the physical table files and compare their contents using hexdump.&lt;/p&gt;

&lt;p&gt;In PostgreSQL, each table is physically stored as one or more files in the base directory, inside the folder corresponding to the database. The file path looks like &lt;code&gt;$PGDATA/base/&amp;lt;database_oid&amp;gt;/&amp;lt;relfilenode&amp;gt;&lt;/code&gt;, where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;database_oid&lt;/code&gt; — unique database identifier (OID), retrieved from pg_database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;relfilenode&lt;/code&gt; — table file identifier, retrieved from pg_class&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Step 1: Retrieve Identifiers&lt;/h3&gt;

&lt;p&gt;Get the OID of the current database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;postgres=# SELECT oid, datname FROM pg_database WHERE datname = current_database();
 oid | datname
-----+----------
   5 | postgres
(1 row)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get the relfilenode of the tables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;postgres=# SELECT relname, relfilenode FROM pg_class WHERE relname IN ('secure_data', 'plain_data');
   relname   | relfilenode
-------------+-------------
 secure_data |       16433
 plain_data  |       16442
(2 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, knowing oid = 5 and the relfilenode of the tables, we can locate the table files at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;data/db/base/5/16433&lt;/code&gt; — encrypted table secure_data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;data/db/base/5/16442&lt;/code&gt; — unencrypted table plain_data&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
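&lt;p&gt;You can cross-check these paths with the built-in function &lt;code&gt;pg_relation_filepath&lt;/code&gt;, which returns the file location relative to the data directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT pg_relation_filepath('secure_data');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With the identifiers above, this should return &lt;code&gt;base/5/16433&lt;/code&gt;; your OID and relfilenode values will differ.&lt;/p&gt;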

&lt;h3&gt;Step 2: Read Table Files&lt;/h3&gt;

&lt;p&gt;Open a terminal and start a bash session in the PostgreSQL container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it pg-tde bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Read data from the unencrypted table (plain_data):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hexdump -C data/db/base/5/16442 | head -n 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output shows that the data is readable directly from the file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0bc20l16j27o2x1h88h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0bc20l16j27o2x1h88h.png" alt="PG_TDE - Docker - the data is readable directly from the file" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Read data from the encrypted table (secure_data):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hexdump -C data/db/base/5/16433 | head -n 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output shows that the contents are fully encrypted:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bpugow8t905v9z1ve6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bpugow8t905v9z1ve6p.png" alt="PG_TDE - Docker - the contents are fully encrypted" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No readable strings — PG_TDE reliably encrypts data at the file system level.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;In this post, we installed PostgreSQL with Transparent Data Encryption (&lt;a href="https://docs.percona.com/pg-tde/index.html" rel="noopener noreferrer"&gt;PG_TDE&lt;/a&gt;) by Percona, created keys, encrypted a table, and verified that the data is truly protected in physical storage. PG_TDE provides transparent encryption: the application continues to work with the data as usual, while security is enforced at the file system level.&lt;/p&gt;

&lt;p&gt;If you're working with PostgreSQL and looking to strengthen your data protection, PG_TDE offers a great starting point. I invite you to experiment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create encrypted tables and perform a backup. Try restoring it on a different installation and see how key access is enforced.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Explore key rotation, switching the active key, or integrating alternative key providers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test how PG_TDE behaves in real-world scenarios: migration, failover, CI/CD pipelines.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
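&lt;p&gt;As a quick first experiment along those lines, note that a logical dump goes through the same SQL layer, so its output is readable even for an encrypted table — one more reason to store backups securely:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec pg-tde pg_dump -U postgres -t secure_data postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;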

&lt;p&gt;Thanks for reading — and if you experiment with PG_TDE, I’d love to hear what you discover.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>security</category>
      <category>database</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Monitoring, troubleshooting, and query analytics for PostgreSQL on Kubernetes</title>
      <dc:creator>Daniil Bazhenov</dc:creator>
      <pubDate>Sun, 30 Jun 2024 17:31:07 +0000</pubDate>
      <link>https://dev.to/dbazhenov/monitoring-troubleshooting-and-query-analytics-for-postgresql-on-kubernetes-2onj</link>
      <guid>https://dev.to/dbazhenov/monitoring-troubleshooting-and-query-analytics-for-postgresql-on-kubernetes-2onj</guid>
      <description>&lt;p&gt;If you are learning about databases and Kubernetes or running or migrating PostgreSQL to Kubernetes, I would like to show you a great open-source tool for database monitoring and troubleshooting.&lt;/p&gt;

&lt;p&gt;I will discuss a tool to help you better understand your database, its parameters, and its health. You can access a Query Analytics tool to help you find slow queries. In addition, you will have dashboards to monitor the Kubernetes cluster itself. &lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/dbazhenov/running-pgadmin-to-manage-a-postgresql-cluster-in-kubernetes-616"&gt;the previous article&lt;/a&gt;, I discussed the pgAdmin and PostgreSQL cluster created using Percona Everest. Today, I installed &lt;a href="https://www.percona.com/open-source-database-monitoring-tools-for-mysql-mongodb-postgresql-more-percona" rel="noopener noreferrer"&gt;Percona Monitoring and Management (PMM)&lt;/a&gt; in my cluster, made some test queries to the database using pgAdmin, and explored the dashboards. &lt;/p&gt;

&lt;p&gt;PMM is a free, open-source database monitoring tool for MySQL, PostgreSQL, and MongoDB. PMM has configured Grafana dashboards to monitor various PostgreSQL metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connections, Tuples, Transactions&lt;/li&gt;
&lt;li&gt;Checkpoints, Buffers, and WAL usage&lt;/li&gt;
&lt;li&gt;Blocks, Conflicts and Locks&lt;/li&gt;
&lt;li&gt;Disk Cache and Memory Size&lt;/li&gt;
&lt;li&gt;CPU, RAM, Disk IO &lt;/li&gt;
&lt;li&gt;Vacuum monitoring and more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pgtowb2npwnba7xodhj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pgtowb2npwnba7xodhj.png" alt="Monitoring PostgreSQL on Kubernetes using PMM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnv4xi0kserk9yrca3u7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnv4xi0kserk9yrca3u7.png" alt="PostgreSQL monitoring on Kubernetes using PMM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You need to install two components (I’ll show how below):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;PMM Server, which includes dashboards and collects metrics from your databases. &lt;/li&gt;
&lt;li&gt;PMM Client for each of your databases, which sends database metrics to the PMM Server. For PostgreSQL, you also need to configure the pg_stat_monitor or pg_stat_statements extension.
If you use Percona Operator for PostgreSQL or Percona Everest, PMM is already integrated and enabled in the settings.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's get to the installation.&lt;/p&gt;

&lt;h2&gt;Installing PMM in a Kubernetes cluster&lt;/h2&gt;

&lt;p&gt;You can install the PMM server on any server or cluster; I use the same cluster where the database is installed.&lt;/p&gt;

&lt;p&gt;The documentation offers many &lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/index.html" rel="noopener noreferrer"&gt;installation methods&lt;/a&gt;, such as using Docker, Podman, AWS, or Helm. &lt;/p&gt;

&lt;p&gt;I used the Helm installation and the instructions from the &lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/helm.html" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a separate namespace or use an existing one. I’ll create a separate one:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace monitoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="2"&gt;
&lt;li&gt;I installed Helm as per the documentation; now add the Percona repository and install PMM using the commands:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add percona https://percona.github.io/percona-helm-charts/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install pmm -n monitoring \
--set service.type="ClusterIP" \
--set pmmResources.limits.memory="4Gi" \
--set pmmResources.limits.cpu="2" \
percona/pmm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I added parameters with resource limits for PMM since my test cluster has limited resources.&lt;/p&gt;

&lt;p&gt;The installation is quick, and the output shows the next steps. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8mzqds66izfnidqfr4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8mzqds66izfnidqfr4w.png" alt="PMM HELM installation on Kubernetes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;We need to get the administrator password created during installation. &lt;em&gt;(I just took that command from the last step.)&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secret pmm-secret -n monitoring -o jsonpath='{.data.PMM_ADMIN_PASSWORD}' | base64 --decode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Let’s port-forward the PMM pod to an available port on our laptop so we can open PMM in the browser. (&lt;em&gt;I used 8081 because 8080 is used for Percona Everest, which manages the database.&lt;/em&gt;)&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n monitoring port-forward pmm-0 8081:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Open PMM in a browser and log in with the password.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ivz2olza97pwebtoc8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ivz2olza97pwebtoc8a.png" alt="PMM in a browser"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Connecting the database to the PMM server&lt;/h2&gt;

&lt;p&gt;Now that we have PMM itself, we need to make our Postgres database send metrics to it. I created the cluster using Percona Everest; however, you can connect any PostgreSQL cluster to PMM.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;If you are not using Percona's Postgres, please refer to the documentation on installing &lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/client/index.html" rel="noopener noreferrer"&gt;the PMM Client&lt;/a&gt; and &lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/client/postgresql.html" rel="noopener noreferrer"&gt;Postgres extensions&lt;/a&gt; (&lt;a href="https://www.postgresql.org/docs/current/pgstatstatements.html" rel="noopener noreferrer"&gt;pg_stat_statements&lt;/a&gt; or &lt;a href="https://docs.percona.com/pg-stat-monitor/" rel="noopener noreferrer"&gt;pg_stat_monitor&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you are using &lt;a href="https://docs.percona.com/postgresql/" rel="noopener noreferrer"&gt;Percona Distribution for PostgreSQL&lt;/a&gt;, &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/" rel="noopener noreferrer"&gt;Percona Operator for PostgreSQL&lt;/a&gt;, or &lt;a href="https://docs.percona.com/everest/index.html" rel="noopener noreferrer"&gt;Percona Everest&lt;/a&gt;, then the necessary extensions are already installed. I will explain how to enable monitoring below. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
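&lt;p&gt;For a PostgreSQL server running outside Kubernetes, registering it with PMM typically comes down to a single PMM Client command. This is only a sketch: the username and password are placeholders for the monitoring account you created, and the query source depends on which extension you installed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
pmm-admin add postgresql --username=pmm --password=&amp;lt;password&amp;gt; --query-source=pgstatmonitor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;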

&lt;h3&gt;
  
  
  Postgres database created using Percona Operator for PostgreSQL
&lt;/h3&gt;

&lt;p&gt;The setup process is described in detail in the &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/monitoring-tutorial.html#install-pmm-client" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;; in brief:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You need to create an API key in the PMM settings. &lt;/li&gt;
&lt;li&gt;Set the API key as the PMM_SERVER_KEY value in the deploy/secrets.yaml file, then create the Secrets object from that file.&lt;/li&gt;
&lt;li&gt;Update the pmm section in the deploy/cr.yaml file.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

  pmm:
    enabled: true
    image: percona/pmm-client:2.42.0
    secret: cluster1-pmm-secret
    serverHost: monitoring-service


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Apply the changes, and you will see the databases in PMM.&lt;/p&gt;
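&lt;p&gt;The apply step comes down to two kubectl commands (the namespace is a placeholder for wherever the operator is deployed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
kubectl apply -f deploy/secrets.yaml -n &amp;lt;namespace&amp;gt;
kubectl apply -f deploy/cr.yaml -n &amp;lt;namespace&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;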

&lt;h3&gt;
  
  
  PostgreSQL cluster created using Percona Everest
&lt;/h3&gt;

&lt;p&gt;This is the approach I took.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We need to get the IP address of the PMM in the Kubernetes cluster.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl get svc -n monitoring


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Now, in the Percona Everest settings, let's add a new Monitoring Endpoint using the IP address, user, and password from PMM.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fsj59ft7u9ywqhz9xry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fsj59ft7u9ywqhz9xry.png" alt="Percona Everest Monitoring Endpoints"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Let's edit the database and enable monitoring using the newly created endpoint.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fansojwud68ancexfk2kd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fansojwud68ancexfk2kd.png" alt="Percona Everest Monitoring for PostgreSQL"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Done; now we will see the metrics in PMM. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllqc7cgpcbmiy311ic6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllqc7cgpcbmiy311ic6u.png" alt="PMM Home Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing how it works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Open pgAdmin and run some heavy queries.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I found an SQL query that generates rows of random data.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

INSERT INTO demo.library(id, name, short_description, author,
                         description, content, last_updated, created)
SELECT id, 'name', md5(random()::text), 'name2'
      ,md5(random()::text), md5(random()::text)
      ,NOW() - '1 day'::INTERVAL * (RANDOM() * 100)::int
      ,NOW() - '1 day'::INTERVAL * ((RANDOM() * 100)::int + 100)
FROM generate_series(1,1000) id;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then I generated several million rows by increasing the upper bound in &lt;code&gt;generate_series(1,1000)&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I also ran various SELECT queries. &lt;/p&gt;
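&lt;p&gt;For example, an unindexed substring search over the generated data forces a sequential scan and reliably shows up as a slow query (the table and column names follow the INSERT above; adjust them to your schema):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
SELECT author, COUNT(*)
FROM demo.library
WHERE description LIKE '%abc%'
GROUP BY author;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;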

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9e4yy6z84r89asql5vr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9e4yy6z84r89asql5vr.png" alt="pgAdmin test queries"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After that, I went to look at the dashboards, which immediately showed that I had a problem: a list of slow queries and a spike in the graph.
&lt;em&gt;I created the test table without indexes, so queries over many rows were already slow. I did this on purpose to see the result in the monitoring tool.&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8qu6ojwt94uh2uodgo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8qu6ojwt94uh2uodgo9.png" alt="Monitoring PostgreSQL dashboard slow queries"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlntpxli83d28kmzvm23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlntpxli83d28kmzvm23.png" alt="Monitoring PostgreSQL dashboard queries, tuples, CPU, autovacuum"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also found a dashboard that shows the cluster resource utilization for each Pod, such as CPU and RAM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42dlstlt4ec7n13djs5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42dlstlt4ec7n13djs5t.png" alt="Monitoring PostgreSQL dashboard Kubernetes cluster metrics"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;PMM offers a variety of dashboards for monitoring PostgreSQL. I won't show all of them here, but I recommend installing PMM and monitoring your database, especially if you are not using any database monitoring tools yet. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>postgres</category>
      <category>opensource</category>
      <category>database</category>
    </item>
    <item>
      <title>Running pgAdmin to Manage a PostgreSQL Cluster on Kubernetes</title>
      <dc:creator>Daniil Bazhenov</dc:creator>
      <pubDate>Wed, 26 Jun 2024 18:14:16 +0000</pubDate>
      <link>https://dev.to/dbazhenov/running-pgadmin-to-manage-a-postgresql-cluster-in-kubernetes-616</link>
      <guid>https://dev.to/dbazhenov/running-pgadmin-to-manage-a-postgresql-cluster-in-kubernetes-616</guid>
      <description>&lt;p&gt;Let’s say you need to do something in PostgreSQL in Kubernetes, and it is inconvenient to work with the database in the terminal, or you need to become more familiar with PostgreSQL or SQL commands. In that case, this article comes in handy.&lt;/p&gt;

&lt;p&gt;I explain how to run pgAdmin in a Kubernetes cluster to manage PostgreSQL databases deployed in that cluster. This is useful if your PostgreSQL cluster does not have external access and is only available inside a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.pgadmin.org/" rel="noopener noreferrer"&gt;pgAdmin&lt;/a&gt; is a popular open source tool for managing Postgres databases. It is convenient for creating databases, schemas, tables, viewing data, executing SQL queries, and much more in a user-friendly web interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgcasz62axh0kazua408.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgcasz62axh0kazua408.png" alt="pgAdmin PostgreSQL Kubernetes Welcome"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Running pgAdmin
&lt;/h2&gt;

&lt;p&gt;To run pgAdmin in k8s, we need a single file containing the Deployment and Service resources. Let's call it pgadmin.yaml.&lt;/p&gt;

&lt;p&gt;pgadmin.yaml&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
        - name: pgadmin
          image: dpage/pgadmin4
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: admin@example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: admin
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  selector:
    app: pgadmin
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that the Deployment's &lt;code&gt;env&lt;/code&gt; section contains the login and password for the pgAdmin panel. For anything beyond a quick test, consider storing them in a Kubernetes Secret instead of plain text.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: admin@example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: admin


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Deploy pgAdmin from this file by running the command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f pgadmin.yaml -n &amp;lt;namespace&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let's print out the list of pods in our cluster to get the name of the pgAdmin pod.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods -n &amp;lt;namespace&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, we just need to port-forward the pgAdmin pod to a free local port and open it in the browser. I use port 5050.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl port-forward pgadmin-deployment-***-*** 5050:80 -n &amp;lt;namespace&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kq7b1f39olia5d2c0u0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kq7b1f39olia5d2c0u0.jpg" alt="pgAdmin deployment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I can open localhost:5050 in a browser now.&lt;/p&gt;

&lt;p&gt;After that, you only need to add a connection to Postgres by clicking Add New Server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0d6fesmyne41jsk1rvd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0d6fesmyne41jsk1rvd.png" alt="pgAdmin Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I used the access data that &lt;a href="https://percona.community/projects/everest" rel="noopener noreferrer"&gt;Percona Everest&lt;/a&gt; provided me for the Postgres cluster created using it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsiyccostcw7h8zn0vzt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsiyccostcw7h8zn0vzt.png" alt="Percona Everest PostgreSQL"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkd9w4w94w34cya04wrez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkd9w4w94w34cya04wrez.png" alt="pgAdmin Add New Server"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations! We can manage Postgres using the pgAdmin interface or SQL queries. Viewing databases, schemas, and tables and writing SQL queries has become very convenient.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcg0xhu6qeww5b08krzd6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcg0xhu6qeww5b08krzd6.png" alt="pgAdmin k8s SQL"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhm5a8847cex6k25lzy5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhm5a8847cex6k25lzy5.png" alt="pgAdmin k8s Select"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Remove pgAdmin
&lt;/h2&gt;

&lt;p&gt;Once you are done with pgAdmin and PostgreSQL, all you need to do is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Stop port-forwarding by simply pressing CTRL + C in the terminal. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remove pgAdmin Deployment and Service by running the command. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
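&lt;p&gt;Assuming the same pgadmin.yaml and namespace as above, the removal command mirrors the deployment one:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl delete -f pgadmin.yaml -n &amp;lt;namespace&amp;gt;&lt;/code&gt;&lt;/p&gt;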

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzymj9qygh0su2l6p74e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzymj9qygh0su2l6p74e.png" alt="Remove pgAdmin"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  About the PostgreSQL cluster on Kubernetes used in the article
&lt;/h2&gt;

&lt;p&gt;In this post, I used the Postgres cluster with three nodes created on Google Kubernetes Engine (GKE) using &lt;a href="https://percona.community/projects/everest" rel="noopener noreferrer"&gt;Percona Everest&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Percona Everest is an open source platform that allows you to provision and manage database clusters. I recently wrote about Percona Everest in &lt;a href="https://dev.to/dbazhenov/a-new-way-to-provision-databases-on-kubernetes-126h"&gt;A New Way to Provision Databases on Kubernetes&lt;/a&gt;. You do not need to worry about PostgreSQL cluster installation and configuration, backups, recovery, or scaling; you can do it in the interface. It enables multi-database and multi-cluster configurations and can be deployed on any Kubernetes infrastructure in the cloud or on-premises.&lt;/p&gt;

&lt;p&gt;Documentation: &lt;a href="https://docs.percona.com/everest/quickstart-guide/gke.html" rel="noopener noreferrer"&gt;Create Kubernetes cluster on Google Kubernetes Engine (GKE)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can use any other cloud or create a local Kubernetes cluster using &lt;a href="https://minikube.sigs.k8s.io/docs/" rel="noopener noreferrer"&gt;minikube&lt;/a&gt;, &lt;a href="https://k3d.io" rel="noopener noreferrer"&gt;k3d&lt;/a&gt; or &lt;a href="https://kind.sigs.k8s.io/" rel="noopener noreferrer"&gt;kind&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>percona</category>
      <category>kubernetes</category>
      <category>postgres</category>
      <category>database</category>
    </item>
    <item>
      <title>A New Way to Provision Databases on Kubernetes</title>
      <dc:creator>Daniil Bazhenov</dc:creator>
      <pubDate>Wed, 13 Mar 2024 07:59:13 +0000</pubDate>
      <link>https://dev.to/dbazhenov/a-new-way-to-provision-databases-on-kubernetes-126h</link>
      <guid>https://dev.to/dbazhenov/a-new-way-to-provision-databases-on-kubernetes-126h</guid>
      <description>&lt;p&gt;Need to provide database self-service to internal teams? Can't spare the resources to build and maintain your own private DBaaS solution? Sick of cloud DBaaS providers locking you in? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://percona.community/projects/everest/" rel="noopener noreferrer"&gt;Percona Everest&lt;/a&gt; is a cloud-native database platform to deploy and manage enterprise-grade PostgreSQL, MongoDB and MySQL database clusters.&lt;/p&gt;

&lt;p&gt;With Percona Everest, you can:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable DBA teams to regain control over data and database configuration&lt;/li&gt;
&lt;li&gt;Empower developers to deploy code faster and self-service provision highly performant, production-ready database clusters&lt;/li&gt;
&lt;li&gt;Free your entire organization from vendor lock-in and restrictive subscriptions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Benefits of Percona Everest
&lt;/h2&gt;

&lt;p&gt;Truly &lt;strong&gt;open source&lt;/strong&gt;, Percona Everest offers the benefits of automated database provisioning and management without the need to build or maintain an in-house database management platform.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No data lock-in or restrictive contracts &lt;/li&gt;
&lt;li&gt;Reduced total cost of ownership (TCO)&lt;/li&gt;
&lt;li&gt;Complete data sovereignty to meet any compliance requirement &lt;/li&gt;
&lt;li&gt;Fully customizable database configurations&lt;/li&gt;
&lt;li&gt;Right-sized database deployments &lt;/li&gt;
&lt;li&gt;Frictionless developer self-service &lt;/li&gt;
&lt;li&gt;Backing of highly skilled database management experts &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key features of Percona Everest
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Database management via a single pane of glass&lt;/strong&gt;&lt;br&gt;
Percona Everest enables you to manage complex database environments through a single pane of glass or API. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create, configure, deploy, update, and upgrade with zero downtime&lt;/li&gt;
&lt;li&gt;Backup, restore, restart, suspend/resume, or delete database clusters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyti18ud15bq2kjub367d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyti18ud15bq2kjub367d.png" alt="Percona Everest - Database Management"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Complete customization&lt;/strong&gt;&lt;br&gt;
Customize load balancing settings, settings for network exposure, and resource settings like node count, instance type, CPU, memory, storage, etc. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Standardized database deployments&lt;/strong&gt; &lt;br&gt;
Create ready-to-use database clusters of enterprise-grade MySQL, PostgreSQL, and MongoDB, with the ability to customize to support a variety of topologies or deployments.&lt;/p&gt;

&lt;p&gt;Percona Everest leverages &lt;a href="https://www.percona.com/software/percona-operators" rel="noopener noreferrer"&gt;Percona Operators&lt;/a&gt; to deploy Cloud-Native Percona Distributions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Built-in observability tools&lt;/strong&gt;&lt;br&gt;
Benefit from built-in open source infrastructure monitoring to keep control of (and optimize) resource usage, health, and performance of database clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Percona Everest is designed
&lt;/h2&gt;

&lt;p&gt;Percona Everest has two primary components:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Percona Everest application with the UI&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the frontend, there is the Percona Everest UI, developed with the Vite framework, the React library, and TypeScript.&lt;/li&gt;
&lt;li&gt;On the backend, requests from the frontend app are processed by the Percona Everest Backend, an API developed in Golang using the Echo framework that sends requests to the Kubernetes API. You can also use this API directly for your own needs. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Percona Everest CLI (everestctl)&lt;/strong&gt;&lt;br&gt;
You can install the Percona Everest operators and components into your Kubernetes cluster with the console tool Percona Everest CLI (everestctl). Percona Everest will then create and manage database clusters. Everestctl is developed in Golang and distributed as a self-contained executable.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Percona Everest does not supply a Kubernetes cluster; you’ll need to use your own for the deployment.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Percona Everest is currently in Beta
&lt;/h2&gt;

&lt;p&gt;We invite you to become an early adopter and contribute to progress!&lt;/p&gt;

&lt;p&gt;Your feedback is crucial to enhancing the software, and we highly value and rely on your input.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://percona.community/projects/everest/" rel="noopener noreferrer"&gt;Percona Everest website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/everest/index.html" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/everest/quickstart-guide/qs-overview.html" rel="noopener noreferrer"&gt;Quick Start&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>database</category>
      <category>opensource</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>How to develop serverless PHP application with PostgreSQL database with Vercel and Neon.tech for free</title>
      <dc:creator>Daniil Bazhenov</dc:creator>
      <pubDate>Mon, 20 Nov 2023 15:25:39 +0000</pubDate>
      <link>https://dev.to/dbazhenov/how-to-develop-serverless-php-application-with-postgresql-database-with-vercel-and-neontech-for-free-dgl</link>
      <guid>https://dev.to/dbazhenov/how-to-develop-serverless-php-application-with-postgresql-database-with-vercel-and-neontech-for-free-dgl</guid>
      <description>&lt;p&gt;In this post, I will describe my experience with building a serverless PHP application for free. I also used PostgreSQL, PHP Composer, Vercel, and Neon.tech. This information might be helpful for students, novice developers, and even experienced developers trying to develop a serverless application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I think this topic is important.&lt;/strong&gt; Usually, to develop a PHP application, you need some kind of server; you need to set up a web server, connect a domain, set up an SSL certificate, and configure and maintain all of that. Serverless saves a lot of time and energy.&lt;/p&gt;

&lt;p&gt;I investigated methods to develop and deploy a PHP application and PostgreSQL database for free.&lt;/p&gt;

&lt;p&gt;I had requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is free, and no credit card is required. &lt;/li&gt;
&lt;li&gt;Sufficient resources for the application, several gigabytes of data, and thousands of application runs per month.&lt;/li&gt;
&lt;li&gt;HTTPS and the ability to connect my own domain.&lt;/li&gt;
&lt;li&gt;Easy start for an untrained user (me).&lt;/li&gt;
&lt;li&gt;PostgreSQL, PHP, and Composer support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, I found a pretty popular platform called &lt;a href="https://vercel.com/pricing" rel="noopener noreferrer"&gt;Vercel&lt;/a&gt;, which could deploy PHP applications, too. I'd be glad if you could suggest a similar alternative in the comments. &lt;/p&gt;

&lt;p&gt;There was more choice for the database; I wanted PostgreSQL or something compatible with it, and I found several services offering it for free:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.cockroachlabs.com/pricing/" rel="noopener noreferrer"&gt;CockroachDB&lt;/a&gt; - 10 GiB storage&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://xata.io/pricing" rel="noopener noreferrer"&gt;Xata.io&lt;/a&gt; - 15 GB storage&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aiven.io/pricing?product=pg" rel="noopener noreferrer"&gt;Aiven.io&lt;/a&gt; - 5 GB storage&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://neon.tech/pricing" rel="noopener noreferrer"&gt;Neon.tech&lt;/a&gt; - 3 GiB storage per branch &amp;amp; 10 branches. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I didn't spend long choosing, deciding instead to try all the services, starting with Neon.tech. I also found Neon.tech in the Vercel marketplace app list, which encouraged me to run my first experiment with it.&lt;/p&gt;

&lt;p&gt;So, I will tell you how to make an application, deploy it to Vercel, and connect Neon.tech PostgreSQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparation
&lt;/h2&gt;

&lt;p&gt;I signed up for &lt;a href="https://vercel.com" rel="noopener noreferrer"&gt;the Vercel&lt;/a&gt; service on the Hobby plan, created my first project, and installed &lt;a href="https://vercel.com/docs/cli" rel="noopener noreferrer"&gt;the Vercel CLI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9ziaud85s0kdqzn46wc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9ziaud85s0kdqzn46wc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I registered at &lt;a href="https://neon.tech/pricing" rel="noopener noreferrer"&gt;Neon.tech&lt;/a&gt; and created my first free database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjx7qe965xujt9k309jcu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjx7qe965xujt9k309jcu.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Application development and deployment
&lt;/h2&gt;

&lt;p&gt;The Vercel community provides two great resources on how to make a PHP application, with tons of examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/vercel-community/php" rel="noopener noreferrer"&gt;PHP Runtime for Vercel&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/juicyfx/vercel-examples" rel="noopener noreferrer"&gt;vercel-examples&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We need to create a &lt;code&gt;vercel.json&lt;/code&gt; file in the project folder. I just made an empty folder for the project.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "functions": {
    "api/*.php": {
      "runtime": "vercel-php@0.6.0"
    }
  },
  "routes": [
    { "src": "/(.*)",  "dest": "/api/index.php" }
  ]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create a simple PHP file &lt;code&gt;index.php&lt;/code&gt; in a new folder named &lt;code&gt;api&lt;/code&gt; for the test.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;?php
phpinfo();


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Deploy the application with the command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;vercel --prod&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The first time, you will need to log in and do a dev deployment.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Log in
vercel login

# Let's fly
vercel


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That's it; after a few seconds you can open the deployment domain or the production domain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3dyaip5gy6ujfau2ffzj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3dyaip5gy6ujfau2ffzj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's connect the database
&lt;/h2&gt;

&lt;p&gt;On the dashboard of our database in Neon, we can get the connection data, which looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

psql 'postgresql://daniil.bazhenov:************@ep-shrill-disk-******.eu-central-1.aws.neon.tech/php-neon?sslmode=require'


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I suggest immediately adding the access data as environment variables in the &lt;code&gt;vercel.json&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;I added an &lt;code&gt;env&lt;/code&gt; section, but you can also add variables directly in the code or on the Vercel project page. Keep in mind that &lt;code&gt;vercel.json&lt;/code&gt; is usually committed to version control, so for real credentials the project settings are the safer place.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "functions": {
    "api/*.php": {
      "runtime": "vercel-php@0.6.0"
    }
  },
  "routes": [
    { "src": "/(.*)",  "dest": "/api/index.php" }
  ],
  "env": {
    "PG_HOST": "ep-shrill-disk-******.eu-central-1.aws.neon.tech",
    "PG_PORT": "5432",
    "PG_DB": "php-neon",
    "PG_USER": "daniil.bazhenov",
    "PG_PASSWORD": "**********",
    "PG_ENDPOINT": "ep-shrill-disk-******"
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let's modify the application to connect to PostgreSQL using our data from environment variables.&lt;/p&gt;

&lt;p&gt;Let's open the PHP file &lt;code&gt;api/index.php&lt;/code&gt; and modify it as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get the environment variables&lt;/li&gt;
&lt;li&gt;Make a connection string for pg_connect&lt;/li&gt;
&lt;li&gt;Check the connection&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The example is courtesy of ChatGPT; you can use any other method.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;?php

// Read the connection settings from environment variables
$host = $_ENV['PG_HOST'];
$port = $_ENV['PG_PORT'];
$db = $_ENV['PG_DB'];
$user = $_ENV['PG_USER'];
$password = $_ENV['PG_PASSWORD'];
$endpoint = $_ENV['PG_ENDPOINT'];

// Build the pg_connect connection string (Neon requires the endpoint option and SSL)
$connection_string = "host=" . $host . " port=" . $port . " dbname=" . $db . " user=" . $user . " password=" . $password . " options='endpoint=" . $endpoint . "' sslmode=require";

$dbconn = pg_connect($connection_string);

// Fail loudly if the connection does not work
if (!$dbconn) {
    die("Connection failed: " . pg_last_error());
}
echo "Connected successfully";


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's run the deployment to production.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;vercel --prod&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We'll see the result in the console.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

➜  start vercel --prod
Vercel CLI 32.5.0
🔍  Inspect: https://vercel.com/dbazhenovs-projects/start/224sUDqro9hi5gfne8oL9F5FaPkM [4s]
✅  Production: https://start-bzq80bkt8-dbazhenovs-projects.vercel.app [4s]
➜  start


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And we can open the main domain in the browser to see the connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhf5aoeiqntdgcdouy1xd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhf5aoeiqntdgcdouy1xd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's check Composer
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://getcomposer.org/" rel="noopener noreferrer"&gt;Composer&lt;/a&gt; is a PHP package manager and it will allow us to use many out-of-the-box packages and features such as HTTP requests, logging, and templates and database management. I use it to develop large, complex applications.&lt;/p&gt;

&lt;p&gt;For testing, I'll connect &lt;a href="https://packagist.org/packages/guzzlehttp/guzzle" rel="noopener noreferrer"&gt;Guzzle&lt;/a&gt;, make a GET request to the GitHub API, and connect &lt;a href="https://packagist.org/packages/larapack/dd" rel="noopener noreferrer"&gt;dd&lt;/a&gt; to print the result of the request in a readable form for development and debugging. &lt;/p&gt;

&lt;p&gt;So, my application can request external APIs, save the results to a PostgreSQL database, and do other things like return some results to the user.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;composer.json&lt;/code&gt; file in the project directory and add the necessary packages.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "require": {
        "guzzlehttp/guzzle": "^7.0",
        "larapack/dd": "1.*"
    }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;At the beginning of the &lt;code&gt;index.php&lt;/code&gt; file, let's load the Composer autoloader, make a test request with Guzzle, and print the result with &lt;code&gt;dd()&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;?php

require_once __DIR__ . '/../vendor/autoload.php';

$client = new \GuzzleHttp\Client();

$response = $client-&amp;gt;request('GET', 'https://api.github.com/repos/guzzle/guzzle');

$result = json_decode($response-&amp;gt;getBody(), true); 

dd($result);


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As usual, we do the deployment &lt;br&gt;
&lt;code&gt;vercel --prod&lt;/code&gt;&lt;br&gt;
and in less than a minute we see the result in production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz90os8pheebk278acutf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz90os8pheebk278acutf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This is a fantastic technology. &lt;/p&gt;

&lt;p&gt;You can make a website or a backend API without worrying about servers, hosting, web server configuration, or domains.&lt;/p&gt;

&lt;p&gt;You can use the database for free as well, and you have several database options to choose from.&lt;/p&gt;

&lt;p&gt;What to do next:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Develop the application and add features.&lt;/li&gt;
&lt;li&gt;Develop the database: add table schemas and write queries to insert and select data.&lt;/li&gt;
&lt;/ol&gt;
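&lt;p&gt;As a sketch of that second step (assuming the &lt;code&gt;$dbconn&lt;/code&gt; connection from earlier; the &lt;code&gt;notes&lt;/code&gt; table is just an example name), you can create a table and run parameterized queries like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&amp;lt;?php

// Hypothetical example table; adjust the schema to your application
pg_query($dbconn, "CREATE TABLE IF NOT EXISTS notes (id SERIAL PRIMARY KEY, body TEXT)");

// Insert with a parameterized query to avoid SQL injection
pg_query_params($dbconn, "INSERT INTO notes (body) VALUES ($1)", ["Hello from Vercel"]);

// Select the latest rows
$result = pg_query($dbconn, "SELECT id, body FROM notes ORDER BY id DESC LIMIT 10");
while ($row = pg_fetch_assoc($result)) {
    echo $row['id'] . ': ' . $row['body'] . "\n";
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;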

</description>
      <category>php</category>
      <category>postgres</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How Database Indexes Affect MongoDB and Application Performance</title>
      <dc:creator>Daniil Bazhenov</dc:creator>
      <pubDate>Thu, 04 May 2023 09:50:46 +0000</pubDate>
      <link>https://dev.to/dbazhenov/how-database-indexes-affect-mongodb-and-application-performance-5667</link>
      <guid>https://dev.to/dbazhenov/how-database-indexes-affect-mongodb-and-application-performance-5667</guid>
      <description>&lt;p&gt;I will show with real examples and graphs how indexes can affect the performance of the database and your application.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was inspired by a lot of feedback from my friends saying that MongoDB is slow or crashing. Indeed, when I started working with MongoDB, I noticed very quickly that it starts working slowly, although there were few documents in the database. MySQL was faster in the same situation during development with no load. I just didn't add any indexes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As an experiment, I will run an application that makes lots of queries to different collections in the database. I will show that, like any tool, it can run fast, slow down, or even crash entirely because of our mistakes. At the beginning of the experiment, all queries are executed using indexes. Then I'll remove an index for a while and show how that affects performance. Finally, I'll recreate it to restore performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  About the application and the database
&lt;/h2&gt;

&lt;p&gt;The application is developed in PHP 8 and uses the default MongoDB driver for database queries.&lt;/p&gt;

&lt;p&gt;The application is deployed on a t2.micro (Free Tier) instance in AWS, with 1 vCPU and 1 GiB of Memory. &lt;/p&gt;

&lt;p&gt;The application makes queries to several collections in a loop:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pages - 3.2 million documents&lt;/li&gt;
&lt;li&gt;Users - 130k documents&lt;/li&gt;
&lt;li&gt;Docs - 100k documents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Percona Server for MongoDB is used as the database. MongoDB ReplicaSet consists of three nodes also deployed in AWS, each node on a separate t2.micro instance.&lt;/p&gt;

&lt;p&gt;I use the free, open-source tool &lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" rel="noopener noreferrer"&gt;Percona Monitoring and Management (PMM)&lt;/a&gt; for monitoring and graphing. &lt;/p&gt;

&lt;p&gt;These limited resources are available to everyone for free, and you will see how much you can get out of them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynms1ue04k0w0o0ah56v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynms1ue04k0w0o0ah56v.png" alt="Instances in AWS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's start the experiment
&lt;/h2&gt;

&lt;p&gt;I ran the application to do the load on the database. All queries in this experiment were performed to the Primary node to simplify the demonstration. And also in the final section of the article I will make an experiment with the load on all three nodes.&lt;/p&gt;

&lt;p&gt;The loop performed several FindOne queries&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get a document from a collection sorted by timestamp&lt;/li&gt;
&lt;li&gt;Get a random document by id.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Queries were run against two different collections (with 3 million and 100k documents).&lt;/p&gt;

&lt;p&gt;Example code&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   $last_user = $app['db']-&amp;gt;users-&amp;gt;findOne([],
   [
       'sort' =&amp;gt; [
           'timestamp' =&amp;gt; -1
       ]
   ]);

   $user_id = rand(1, $last_user['user_id']);

   $user_data = $app['db']-&amp;gt;users-&amp;gt;findOne(
   [
       'user_id' =&amp;gt; $user_id
   ]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When I ran one process, I got about 1,000 queries per second and 30% of the CPU load of the database instance.&lt;/p&gt;

&lt;p&gt;Then I started the second similar process. I got about 1.8k queries per second and just over 50% CPU load.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmikpgb83f21zzdcqdf5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmikpgb83f21zzdcqdf5c.png" alt="Running processes with database queries"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flymeg8jbq41po2elwch6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flymeg8jbq41po2elwch6.png" alt="CPU load with two processes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was shocked by these results because these are very big numbers. For example, if you develop a website, you can get hundreds of requests per second (RPS) from online users on these resources and store millions of rows (documents) in a database with a disk of less than 10GB.&lt;/p&gt;

&lt;p&gt;Queries to the database were very fast: about 2-100 ms against the collection with 100k documents and 3-300 ms against the collection with 3 million documents. There weren't quite enough resources for 3 million documents, but queries still ran at a speed acceptable to a live user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqq16kolsjxj6vd9s6dz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqq16kolsjxj6vd9s6dz.png" alt="Queries in progress and execution time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's delete the index in the collection with 100k documents
&lt;/h2&gt;

&lt;p&gt;I opened &lt;a href="https://www.mongodb.com/products/compass" rel="noopener noreferrer"&gt;MongoDB Compass&lt;/a&gt; and removed the index on the timestamp field. One of the queries was sorting the collection by this field.&lt;/p&gt;

&lt;p&gt;Performance has degraded dramatically:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The number of queries has dropped from 1.8k to 120 per second. &lt;/li&gt;
&lt;li&gt;The load on the CPU has increased to 90%, to the limit.&lt;/li&gt;
&lt;li&gt;The query time went from 100ms to 20+ seconds on average.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Yes, the app continued to work. If you don't have many users while the app is in development, you probably won't even notice the problem. But the user experience was very bad.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fib1u4ljrgnb7spzrubm8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fib1u4ljrgnb7spzrubm8.png" alt="Deleting and restoring MongoDB index - Operations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3bufnpaopddvhy0hwyt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3bufnpaopddvhy0hwyt.png" alt="Deleting and restoring MongoDB index - CPU"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fspsm484pldxl6xxxf6qq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fspsm484pldxl6xxxf6qq.png" alt="Deleting and restoring MongoDB index - QAN"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Always create indexes for all fields that are used in search and sorting.&lt;/strong&gt;&lt;/p&gt;
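&lt;p&gt;With the MongoDB PHP library used in this experiment, creating such an index is a single call (a sketch; &lt;code&gt;timestamp&lt;/code&gt; is the field the queries above sort by):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Create a descending index on the field used for sorting
$app['db']-&amp;gt;users-&amp;gt;createIndex(['timestamp' =&amp;gt; -1]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;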

&lt;h2&gt;
  
  
  Let's create the index
&lt;/h2&gt;

&lt;p&gt;I just created an index that I deleted earlier.&lt;/p&gt;

&lt;p&gt;Performance, CPU utilization, and execution time were immediately restored.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfspnjz6ibrmm15w0zmb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfspnjz6ibrmm15w0zmb.png" alt="MongoDB Compass - Crating an index"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Now let's delete the index in the collection with 3 million documents
&lt;/h2&gt;

&lt;p&gt;Performance has dropped to less than one operation per second.&lt;/p&gt;

&lt;p&gt;The Primary node died after about 30 seconds. Then I lost the connection to the second Secondary node, after which the PHP application ended with an error. &lt;br&gt;
It took me about 10 minutes to recover the instances, and I didn't repeat the experiment.&lt;/p&gt;

&lt;p&gt;It is gratifying that simply rebooting the instances through the AWS control panel automatically started all the database nodes; they reconnected to monitoring and continued to work.&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion one
&lt;/h2&gt;

&lt;p&gt;Always add indexes.&lt;/p&gt;

&lt;p&gt;To learn more about them, I recommend taking the excellent performance course from MongoDB: &lt;br&gt;
&lt;a href="https://learn.mongodb.com/courses/m201-mongodb-performance" rel="noopener noreferrer"&gt;M201: MongoDB Performance&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Let's try to use all the ReplicaSet nodes
&lt;/h2&gt;

&lt;p&gt;In the experiment, I only queried the Primary node, but I have a ReplicaSet with three nodes.&lt;/p&gt;

&lt;p&gt;I set &lt;a href="https://www.mongodb.com/docs/manual/core/read-preference/#read-preference" rel="noopener noreferrer"&gt;Read Preference&lt;/a&gt; for read operations from Secondary Nodes.&lt;/p&gt;

&lt;p&gt;I just used the parameter when initializing the database client in my application&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; 'readPreference' =&amp;gt; 'secondaryPreferred'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a result, the initialization looks like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$app['db_client'] = new \MongoDB\Client(MONGODB_URLS,
   [
       'username' =&amp;gt; MONGODB_USER,
       'password' =&amp;gt; MONGODB_PASSWORD,
       'replicaSet' =&amp;gt; 'MongoDB-RS',
       'authSource' =&amp;gt; 'admin',
       'readPreference' =&amp;gt; 'secondaryPreferred'
   ],
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now all write operations are automatically executed on the Primary node, and read operations on the Secondary nodes.&lt;/p&gt;

&lt;p&gt;So, without increasing the number of resources, using t2.micro instances I got:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;1k ops/sec read operations on each node;&lt;/li&gt;
&lt;li&gt;over 500 ops/s of writes and updates (insert, update, delete).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The CPUs of each instance were less than 50% loaded, and I just didn't need more performance for my application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdmlx0laed9c52odznk6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdmlx0laed9c52odznk6.png" alt="MongoDB Read Preference on three nodes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I repeated the experiment with deleting the index in the collection where the write was done. I had another Docs collection with 100k+ documents. Then I restored the index.&lt;/p&gt;

&lt;p&gt;And again I saw significant performance degradation.&lt;/p&gt;

&lt;p&gt;Then I loaded the Primary node a little bit more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcn8627f5pb42c7xqresl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcn8627f5pb42c7xqresl.png" alt="Load on Primary node"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion two
&lt;/h2&gt;

&lt;p&gt;Think about performance, experiment, and keep an eye on monitoring.&lt;/p&gt;

&lt;p&gt;It's not complicated or time-consuming, and various free tools are available now.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>tutorial</category>
      <category>database</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How to add authentication to HTTP requests and work with Rate limits using GitHub REST API example.</title>
      <dc:creator>Daniil Bazhenov</dc:creator>
      <pubDate>Thu, 13 Apr 2023 17:24:44 +0000</pubDate>
      <link>https://dev.to/dbazhenov/how-to-add-authentication-to-http-requests-and-work-with-rate-limits-using-github-rest-api-example-37f0</link>
      <guid>https://dev.to/dbazhenov/how-to-add-authentication-to-http-requests-and-work-with-rate-limits-using-github-rest-api-example-37f0</guid>
      <description>&lt;p&gt;I suggest that we dive deeper into HTTP requests to external APIs using the example of the GitHub REST API and a simple PHP script.&lt;/p&gt;

&lt;p&gt;In past posts, we learned how to make a lot of requests in a loop and ran into limitations on the GitHub API side.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"message": "API rate limit exceeded for *.*.*.*. 
(But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)", "documentation_url":"https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The API tells us the cause of the problem, but this is not always the case. More often we will have to guess on our own. It explicitly states that we need to read about &lt;a href="https://docs.github.com/en/rest/overview/resources-in-the-rest-api?apiVersion=2022-11-28#rate-limiting" rel="noopener noreferrer"&gt;rate limits&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After reading the documentation, I realize that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We made requests as unauthenticated users. We did not use login/pass or token. &lt;/li&gt;
&lt;li&gt;The API has rate limits and they may be different for different requests and authentication methods. &lt;/li&gt;
&lt;li&gt;The current limits can be seen in the HTTP headers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;GitHub limits non-authenticated users to 60 requests per hour, and we did more than 10 in a few seconds. But if we are an authenticated user, we can do 5,000 or even 15,000 requests per hour.&lt;/p&gt;

&lt;p&gt;So we have the following plan:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add authentication to API requests.&lt;/li&gt;
&lt;li&gt;Add a function that reads the headers containing limit information, so that we do not exceed the limits.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Let's look at the documentation and the available authentication methods
&lt;/h2&gt;

&lt;p&gt;I explored the available authentication methods for the GitHub REST API on &lt;a href="https://docs.github.com/en/rest/overview/authenticating-to-the-rest-api" rel="noopener noreferrer"&gt;the Authenticating to the REST API page&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I found that we need to get a &lt;a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token" rel="noopener noreferrer"&gt;special token&lt;/a&gt; and add it to the headers.&lt;/p&gt;

&lt;p&gt;So we've looked at the limits and we have a plan:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make a token.&lt;/li&gt;
&lt;li&gt;Add the token to the headers.&lt;/li&gt;
&lt;li&gt;Have fun!&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Let's add authentication with a personal access token
&lt;/h2&gt;

&lt;p&gt;You can create a personal access token in your GitHub profile settings, on &lt;a href="https://github.com/settings/tokens" rel="noopener noreferrer"&gt;this page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Save the token somewhere safe! :) &lt;/p&gt;
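&lt;p&gt;To quickly verify that the token works, you can query the rate limit endpoint, which does not count against your quota (replace my_secret_token with your real token):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -H "Authorization: Bearer my_secret_token" https://api.github.com/rate_limit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;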

&lt;p&gt;Stop our application if it is running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to pass our token to the application; we use environment variables in the docker-compose.yml file to do this.&lt;/p&gt;

&lt;p&gt;Let's add a new environment variable GITHUB_TOKEN to the php-fpm service, because you will need it in your PHP code. The token is private information; do not share it with anyone.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;docker-compose.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  php-fpm:
    image: php8.2-fpm-mongo
    volumes:
      - ./app:/var/www/html
    environment:
      DB_USERNAME: root
      DB_PASSWORD: secret
      DB_HOST: mongodb
      GITHUB_TOKEN: my_secret_token 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can run our application with the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now our token will be available anywhere in the PHP code. We need to replace the simple Guzzle initialization with a new one that uses the token. Let's do this right after initializing the database in app/init.php.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;app/init.php&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (isset($local_conf['GITHUB_TOKEN'])) {

    define('GITHUB_TOKEN', $local_conf['GITHUB_TOKEN']);

    $app['http'] = new \GuzzleHttp\Client(
        ['headers' =&amp;gt; 
            [
               'Authorization' =&amp;gt; 'Bearer ' . GITHUB_TOKEN
            ]
        ]
    );

} else {
    $app['http'] = new \GuzzleHttp\Client();  
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that there are two modes: if the token is present in the environment variables, requests are authenticated; if not, requests are made without authentication.&lt;/p&gt;

&lt;p&gt;Let's make a test request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwqh03gibg6x5q31m4n5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwqh03gibg6x5q31m4n5.png" alt="authentication"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great! Last time we were able to make only 9 requests before hitting the limits. Now we have made 30 requests and have 770 repositories in the database.&lt;/p&gt;

&lt;h2&gt;
  
  
  HTTP headers and development of logic for dealing with limits
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limit-headers" rel="noopener noreferrer"&gt;The Rate limit headers&lt;/a&gt; section in the documentation tells us about the headers that contain information about limits. &lt;/p&gt;

&lt;p&gt;If we can read these headers, we can design a function to not exceed these limits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhub02j8q2ugptel0wm10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhub02j8q2ugptel0wm10.png" alt="HTTP headers and development of logic for dealing with limits"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open the function &lt;code&gt;fn_github_api_request&lt;/code&gt; which executes HTTP requests and add a function to read the headers and print them.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;app/func/github.php&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        $response = $app['http']-&amp;gt;request($method, $url , [
            'query' =&amp;gt; $params
        ]);              

        $headers = $response-&amp;gt;getHeaders();

        dd($headers);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a result, we will see the headers and their contents. &lt;br&gt;
We will also see how many requests we have left until the next limit update.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy56ba8nw7zgiazo3tosr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy56ba8nw7zgiazo3tosr.png" alt="HTTP headers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let's work out the logic:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After each request, we will save the value of X-RateLimit-Remaining and X-RateLimit-Reset&lt;/li&gt;
&lt;li&gt;Before each request, we will check whether we have limit information from the previous request, and if the remaining count is 0, we will wait the number of seconds until the reset.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I added two new functions, setting limit information and checking limit information.&lt;br&gt;
&lt;em&gt;app/func/github.php&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function fn_github_api_request_limits_set(&amp;amp;$app, $response)
{
    $headers = $response-&amp;gt;getHeaders();

    $app['github_http']['limits']['remaining'] = (int) $headers['X-RateLimit-Remaining'][0];
    $app['github_http']['limits']['reset'] = (int) $headers['X-RateLimit-Reset'][0];

}

function fn_github_api_request_limits_check($app)
{

    if (isset($app['github_http']['limits'])) {

        $remaining = $app['github_http']['limits']['remaining'];

        if ($remaining == 0) {
            $reset = $app['github_http']['limits']['reset'] - time();

            fn_print_progress($app, 'Github API X-RateLimits will be reset in ' . $reset . ' sec.', true);
            sleep($reset+1);
        }
    }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I also added the helper function &lt;code&gt;fn_print_progress&lt;/code&gt; to the file &lt;code&gt;app/func/common.php&lt;/code&gt;, which holds general utility functions.&lt;/p&gt;

&lt;p&gt;This function prints the result of the application in a convenient form.&lt;/p&gt;

&lt;p&gt;Now I can see the limits and the time in microseconds and seconds.&lt;/p&gt;
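&lt;p&gt;The body of &lt;code&gt;fn_print_progress&lt;/code&gt; is not shown in this article; a minimal sketch of such a helper (my assumption of its shape, not the exact code from the repository; it assumes a &lt;code&gt;start_time&lt;/code&gt; value saved when the script starts) could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function fn_print_progress(&amp;amp;$app, $message, $print_memory = false)
{
    // Elapsed time since the script started, in seconds with microseconds
    $elapsed = round(microtime(true) - $app['start_time'], 4);

    $line = '[' . $elapsed . 's] ' . $message;

    if ($print_memory) {
        // Current memory consumption in megabytes
        $line .= ' | memory: ' . round(memory_get_usage() / 1024 / 1024, 2) . ' MB';
    }

    echo $line . "\n";
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;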

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmve70mcm4xnzut4sn8r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmve70mcm4xnzut4sn8r.png" alt="the limits and the time in microseconds and seconds"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After playing around a bit with the code with and without Token, with the &lt;code&gt;if ($remaining &amp;lt; 2)&lt;/code&gt; setting, I checked that everything works fine for the current stage and the sleep function works as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz8nmelfk8ap85r45asb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz8nmelfk8ap85r45asb.png" alt="the sleep function works as needed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus functions
&lt;/h2&gt;

&lt;p&gt;While developing up to this point, I added a few other useful functions. I will not publish them here, as they take up a lot of code; you can find them in &lt;a href="https://github.com/dbazhenov/nginx-php-mongodb-docker-compose/tree/tutorial_3_api_rate_limits" rel="noopener noreferrer"&gt;the repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;These are just useful functions to display information while the script is running and when it's finished.&lt;/p&gt;

&lt;p&gt;You may notice that a script that has made dozens of requests and saved hundreds of repositories to the database consumes only 2-3 MB of RAM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyurmfqesl3gd6owmlb2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyurmfqesl3gd6owmlb2u.png" alt="Bonus functions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This matters for long-running scripts that can make thousands of queries: if I accumulated the data in an array, it would keep growing and the script would eventually crash for lack of allocated memory.&lt;/p&gt;

&lt;p&gt;We now have a function that reports the amount of memory consumed at any time.&lt;/p&gt;
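&lt;p&gt;The real bonus functions are in the repository; as a rough sketch, such a memory reporter only needs PHP's built-in &lt;code&gt;memory_get_usage()&lt;/code&gt; and &lt;code&gt;memory_get_peak_usage()&lt;/code&gt; (the function name here is hypothetical):&lt;/p&gt;

```php
// Hypothetical helper for long-running scripts: report current and peak
// memory consumption at any point during execution.
function fn_memory_usage(): string
{
    return sprintf(
        'Memory: %.2f MB (peak %.2f MB)',
        memory_get_usage(true) / 1024 / 1024,
        memory_get_peak_usage(true) / 1024 / 1024
    );
}

echo fn_memory_usage() . PHP_EOL;
```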

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We have learned how to authenticate and make as many requests as we need while respecting the rate limits.&lt;/p&gt;

&lt;p&gt;We've added some useful features and more logic. &lt;/p&gt;

&lt;p&gt;Now, we can get more data from the API and even do it in multiple threads by running the scripts several times as separate processes from the console. We'll learn how to do this next time.&lt;/p&gt;

&lt;p&gt;The code for the current state of the project is available in &lt;a href="https://github.com/dbazhenov/nginx-php-mongodb-docker-compose/tree/tutorial_3_api_rate_limits" rel="noopener noreferrer"&gt;the repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>php</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>The Story of One Mistake: How a Database Monitoring Tool Can Help a Developer</title>
      <dc:creator>Daniil Bazhenov</dc:creator>
      <pubDate>Fri, 07 Apr 2023 16:40:33 +0000</pubDate>
      <link>https://dev.to/dbazhenov/the-story-of-one-mistake-how-a-database-monitoring-tool-can-help-a-developer-j8c</link>
      <guid>https://dev.to/dbazhenov/the-story-of-one-mistake-how-a-database-monitoring-tool-can-help-a-developer-j8c</guid>
      <description>&lt;p&gt;I will tell you the real story of using database monitoring tools when developing an application. I will show you an example of how I managed to detect and fix a problem in the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqaysy0yvdzo970rga7oo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqaysy0yvdzo970rga7oo.jpg" alt="Percona Monitoring and Management (PMM)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  About the app and the process
&lt;/h2&gt;

&lt;p&gt;I am developing a PHP application that uses MongoDB as its database. The application is lightweight, and most of the load falls on the database. I have implemented functions at the application level to adjust the number of queries, as the application can quickly load the database to 100%.&lt;/p&gt;

&lt;p&gt;For development, I use several small dev instances in AWS, run Percona Server for MongoDB with three nodes as the database, and have PMM installed to monitor the databases.&lt;/p&gt;

&lt;p&gt;As a result, my development process consists of the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I develop a new feature and run it on the dev server for testing.&lt;/li&gt;
&lt;li&gt;I check profiling on the PHP side to confirm there is no memory leak and the speed is acceptable.&lt;/li&gt;
&lt;li&gt;I check the database monitoring to ensure everything works fine. &lt;/li&gt;
&lt;li&gt;I debug the feature, setting the number and types of queries in the function to balance the number of queries and the load on the database, if necessary. &lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How I added new functionality to the app
&lt;/h2&gt;

&lt;p&gt;So I started the application and got ready to run the new feature. The feature was getting information from open sources, processing it, and saving it to the database. The second part of the functionality went through all the saved documents and did some additional processing. &lt;/p&gt;

&lt;p&gt;At this point, the application already had many features that loaded the CPU of the Primary node by 25-40%, and everything was running stably. I decided to keep a performance reserve, as I planned to add new features.&lt;/p&gt;

&lt;p&gt;I checked several dashboards, and there were no anomalies or changes. PMM has many dashboards and charts; I will only show a few of them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa19341rx0g044bb1ijch.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa19341rx0g044bb1ijch.jpg" alt="Percona Monitoring and Management (PMM)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I saved the changes with the new feature and pushed it to the dev server to make it work. Then I checked that the function started without errors, and the result was visible in the database. I use MongoDB Compass to check the result of a database entry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Something did not go as planned
&lt;/h2&gt;

&lt;p&gt;I waited a few minutes and rechecked the dashboard. At first glance, the main screen was fine. However, the processing speed alarmed me: the number of operations had mostly stayed the same.&lt;/p&gt;

&lt;p&gt;I scrolled down through the various charts on the dashboard and saw an anomaly.&lt;/p&gt;

&lt;p&gt;The latency increased, and the app loaded the instance to 100% CPU.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpx41f4op98s5key4v0dl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpx41f4op98s5key4v0dl.jpg" alt="Percona Monitoring and Management (PMM)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzorytmdwht31jdqm4l4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzorytmdwht31jdqm4l4.jpg" alt="Percona Monitoring and Management (PMM)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I made a test run on the application side and checked the profiler there, too. The app performed poorly, and queries were slow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding the cause of the problem
&lt;/h2&gt;

&lt;p&gt;I knew the reason was the new feature and immediately rolled back the last changes.&lt;/p&gt;

&lt;p&gt;I had a rough idea of where the problem might be, made a few changes, and started again.&lt;/p&gt;

&lt;p&gt;I did it several times, but the result was the same (the CPU was loaded at 100%).&lt;/p&gt;

&lt;p&gt;I selected a period with load and used the Query Analytics function built into the monitoring.&lt;br&gt;
Query Analytics shows a list of queries sorted by load or execution time. Some of the queries to the Pages collection accounted for 90% of the load, and their Query Time was more than 3 minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwp0upzul5sctjxzbr7nk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwp0upzul5sctjxzbr7nk.jpg" alt="Percona Monitoring and Management (PMM)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Query Analytics, you can find slow queries, see their details, and then debug them in the application.&lt;/p&gt;
&lt;h2&gt;
  
  
  Fixing the problem
&lt;/h2&gt;

&lt;p&gt;I made a few changes that fixed the problem. &lt;/p&gt;

&lt;p&gt;The first problem was the indexes. I create indexes from within the application using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$app['db']-&amp;gt;CollectionName-&amp;gt;createIndex(['index_key' =&amp;gt; 1]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since the application uses many different collections and queries with conditions on various fields and with or without sorting, I have a lot of indexes.&lt;/p&gt;

&lt;p&gt;I made a typo in this case, and the index was not created correctly.&lt;/p&gt;
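&lt;p&gt;One way to guard against this kind of typo is to verify the index right after creating it. The helper below is my own illustration, not code from the project: with the MongoDB PHP library you would pass it the entries returned by &lt;code&gt;listIndexes()&lt;/code&gt;, converted to plain arrays.&lt;/p&gt;

```php
// Hypothetical helper: check that an index with the expected key
// specification exists among the indexes reported by the server.
// With the MongoDB PHP library, $indexes would come from iterating
// $app['db']->CollectionName->listIndexes() and casting each entry to an array.
function fn_index_exists(array $indexes, array $expected_key): bool
{
    foreach ($indexes as $index) {
        // Loose array comparison: same keys and values in the key spec.
        if (($index['key'] ?? []) == $expected_key) {
            return true;
        }
    }
    return false;
}
```

&lt;p&gt;For the index above, the check would be &lt;code&gt;fn_index_exists($indexes, ['index_key' =&amp;gt; 1])&lt;/code&gt;; a typo in the field name would make it return false.&lt;/p&gt;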

&lt;p&gt;After the indexes were created correctly, a few quick runs were enough to tune the number of queries and bring the CPU load to around 50%.&lt;/p&gt;

&lt;p&gt;You can see the final chart after debugging and fixing the problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F085kvburcus8c2veqmah.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F085kvburcus8c2veqmah.jpg" alt="Percona Monitoring and Management (PMM)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhas3u88vomfofwlae7jg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhas3u88vomfofwlae7jg.jpg" alt="Percona Monitoring and Management (PMM)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  About the monitoring tool
&lt;/h2&gt;

&lt;p&gt;I used the free, open-source database monitoring tool &lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" rel="noopener noreferrer"&gt;Percona Monitoring and Management (PMM)&lt;/a&gt;. It is a professional tool based on &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It is enough to install PMM Server locally or on a server, and PMM Client on each database instance. See the &lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/index.html#set-up-pmm-client" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It also takes some time (60-360 minutes, in my experience) to get used to it, as it has many features and dashboards.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Don't forget to add indexes and make sure they work.&lt;/p&gt;

&lt;p&gt;I am an ordinary developer who makes mistakes and runs experiments. Installing the monitoring was one such experiment; previously, I focused only on the speed of the PHP script. From time to time, I looked at the monitoring dashboard in the AWS control panel, but it gives less information, covering only the instance itself, without the ability to investigate in detail.&lt;/p&gt;

&lt;p&gt;If you use MySQL, PostgreSQL or MongoDB, try installing PMM and see how your database works. Installing PMM for development purposes took a limited amount of resources and was straightforward. It benefited me because I discovered and fixed the problem before it went into production.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A small clarification: the story is real and comes from my development practice, but for the article I took the graphs from the final debugging session, so that they show the right sequence and fit into the images used for explanation and demonstration. In reality, I went for coffee several times and spent a long time thinking, which was reflected in the monitoring graphs :)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrcidgb4cp175spww66f.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrcidgb4cp175spww66f.jpg" alt="Percona Monitoring and Management (PMM)"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>database</category>
      <category>mongodb</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How To Optimize the Code Structure of a Simple PHP Application as Your Project Grows</title>
      <dc:creator>Daniil Bazhenov</dc:creator>
      <pubDate>Wed, 29 Mar 2023 18:07:34 +0000</pubDate>
      <link>https://dev.to/dbazhenov/how-to-optimize-the-code-structure-of-a-simple-php-application-as-your-project-grows-4273</link>
      <guid>https://dev.to/dbazhenov/how-to-optimize-the-code-structure-of-a-simple-php-application-as-your-project-grows-4273</guid>
      <description>&lt;p&gt;Let's discuss the structure of our simple PHP application, folders, files, and functions and transform the project to grow it. &lt;/p&gt;

&lt;p&gt;You and I have developed a small PHP application that makes an HTTP request to the GitHub API and stores the result in the database.&lt;br&gt;
Step 1 - &lt;a href="https://dev.to/dbazhenov/how-to-develop-a-simple-web-application-using-docker-compose-nginx-php-8-and-mongodb-6-nhi"&gt;How to Develop a Simple Web Application Using Docker-compose, Nginx, PHP 8, and MongoDB 6&lt;/a&gt;.&lt;br&gt;
Step 2 - &lt;a href="https://dev.to/dbazhenov/how-to-make-http-requests-to-api-in-php-app-using-github-api-example-and-write-to-percona-server-for-mongodb-3gi3"&gt;How to Make HTTP Requests to API in PHP App Using GitHub API Example and Write to Percona Server for MongoDB&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Our code already performs several distinct tasks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reading environment variables.&lt;/li&gt;
&lt;li&gt;Connecting to the database.&lt;/li&gt;
&lt;li&gt;Running API queries.&lt;/li&gt;
&lt;li&gt;Looping and writing to the database.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The code is already hard to fit on the screen, and it's time to think about dividing the code into files, folders, and functions.&lt;/p&gt;

&lt;p&gt;Frameworks are usually responsible for separating code into folders and files. But we don't use frameworks, so we'll do it ourselves. At this stage, we will not make the structure too complicated. Our task is to learn how to change it so that we can change it at any time in the future.&lt;/p&gt;

&lt;p&gt;I prefer to change the structure of files and folders as the project grows, grouping files by meaning and logic. In this article, we'll make the first such change, and we'll make more in the future.&lt;/p&gt;

&lt;p&gt;Let's start optimizing our application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Initializing and configuring the application
&lt;/h2&gt;

&lt;p&gt;Usually, PHP scripts are executed sequentially starting with index.php, as in our case. &lt;/p&gt;

&lt;p&gt;At the beginning of index.php, we connect the composer libraries, read environment variables, create a database connection, and create an object for HTTP requests. This will be required for each script. I propose to put this in a separate init.php file.&lt;/p&gt;

&lt;p&gt;Create an app/init.php file and move the initialization to it. We simply move the code from the index.php file, leaving only the logic of the script.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;app/init.php&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;?php

// Enabling Composer Packages
require __DIR__ . '/vendor/autoload.php';

// Get environment variables
$local_conf = getenv();
define('DB_USERNAME', $local_conf['DB_USERNAME']);
define('DB_PASSWORD', $local_conf['DB_PASSWORD']);
define('DB_HOST', $local_conf['DB_HOST']);

// Connect to MongoDB
$db_client = new \MongoDB\Client('mongodb://'. DB_USERNAME .':' . DB_PASSWORD . '@'. DB_HOST . ':27017/');

$app['db'] = $db_client-&amp;gt;selectDatabase('tutorial');

$app['http'] = new \GuzzleHttp\Client();



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You may also notice that I created an &lt;code&gt;$app&lt;/code&gt; array and added &lt;code&gt;http&lt;/code&gt; and &lt;code&gt;db&lt;/code&gt; as its elements. That way I can pass &lt;code&gt;$app&lt;/code&gt; into any function and have both the HTTP client and the database handle available there.&lt;/p&gt;

&lt;p&gt;And in the index.php file itself, we simply include our init.php&lt;/p&gt;

&lt;p&gt;&lt;em&gt;app/index.php&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;?php

require __DIR__ . '/init.php';

dd($app);



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And print &lt;code&gt;db&lt;/code&gt; and &lt;code&gt;http&lt;/code&gt; with &lt;code&gt;dd()&lt;/code&gt; to make sure they work and are available in &lt;code&gt;index.php&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Leave the rest of the code in &lt;code&gt;index.php&lt;/code&gt; unchanged for now and run &lt;code&gt;localhost&lt;/code&gt; in the browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8wlhebpo3s4c8j6lqpq9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8wlhebpo3s4c8j6lqpq9.jpg" alt="dd(): app, db and http"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's get to the functions
&lt;/h2&gt;

&lt;p&gt;Functions are needed to group code that is executed multiple times. &lt;/p&gt;

&lt;p&gt;Functions can be defined either in the executable PHP file itself, next to the code, or in a separate file that you include, much like Composer libraries. You can pass parameters, variables, or arrays into functions, and functions can return a result.&lt;/p&gt;

&lt;p&gt;For example, we make a GET HTTP request to the GitHub API at a certain URL. We will probably need to make other types of requests to other URLs, but each will still be an HTTP request using Guzzle.&lt;/p&gt;

&lt;p&gt;I assume in advance that I will have many different functions: general ones, some for GitHub, some for the database. So let's create a &lt;code&gt;func&lt;/code&gt; folder in the &lt;code&gt;app&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;app/func&lt;/code&gt; folder, create a github.php file and define our first function there, which will query the GitHub API.&lt;/p&gt;

&lt;p&gt;We'll pack the code responsible for the HTTP request into a function.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;app/func/github.php&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;?php

function fn_github_api_request($app, $url, $method, $params = []) 
{

    try {
        $response = $app['http']-&amp;gt;request($method, $url , [
            'query' =&amp;gt; $params
        ]);              

        $result = $response-&amp;gt;getBody();

        $result = json_decode($result, true);

    } catch (GuzzleHttp\Exception\ClientException $e) {
        $response = $e-&amp;gt;getResponse();
        $responseBodyAsString = $response-&amp;gt;getBody()-&amp;gt;getContents();
        echo $responseBodyAsString;
    }  

    if (empty($result)) {
        $result = false;
    } 

    return $result;
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;A little clarification: we will modify this file in the future; this first function is not the best. Expect to change functions frequently, adjusting their parameters and variables as the project evolves.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now go back to &lt;code&gt;index.php&lt;/code&gt; and include our first function file.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;app/index.php&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;?php

// Enabling Composer Packages
require __DIR__ . '/init.php';
require __DIR__ . '/func/github.php';

$url = 'https://api.github.com/search/repositories';

$params = [
    'q' =&amp;gt; 'topic:mongodb',
    'sort' =&amp;gt; 'help-wanted-issues'
];

// New function
$repositories = fn_github_api_request($app, $url, 'GET', $params);

if (!empty($repositories['items'])) {
    foreach($repositories['items'] as $key =&amp;gt; $repository) {

        $updateResult = $app['db']-&amp;gt;repositories-&amp;gt;updateOne(
            [
                'id' =&amp;gt; $repository['id'] // query 
            ],
            ['$set' =&amp;gt; $repository],
            ['upsert' =&amp;gt; true]
        );

    }
}

dd($repositories);



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run &lt;code&gt;localhost&lt;/code&gt; and see the same result. Our request was executed, and we got the list of repositories.&lt;/p&gt;

&lt;p&gt;The structure of our code in the app folder will look like this.&lt;br&gt;
&lt;em&gt;app/&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

.
├── func
│   └── github.php
├── vendor
├── composer.json
├── index.php
└── init.php


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmtjr13kv5su6pawtto5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmtjr13kv5su6pawtto5.jpg" alt="PHP Functions and result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Loops and pagination
&lt;/h2&gt;

&lt;p&gt;All this time we made only one API request and got the same number of repositories. &lt;/p&gt;

&lt;p&gt;I propose to run our repository retrieval function in a loop and get 1000 repositories.&lt;/p&gt;

&lt;p&gt;We will use a &lt;code&gt;for&lt;/code&gt; loop:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

for ($i =1 ; $i &amp;lt;= 30; $i++) {
    // Do something 
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here &lt;code&gt;$i&lt;/code&gt; will be the page number for the API request.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;app/index.php&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;?php

// Enabling Composer Packages
require __DIR__ . '/init.php';
require __DIR__ . '/func/github.php';

$url = 'https://api.github.com/search/repositories';

$params = [
    'q' =&amp;gt; 'topic:mongodb',
    'sort' =&amp;gt; 'help-wanted-issues'
];

for ($i = 1; $i &amp;lt;= 30; $i++) {

    $params['page'] = $i;

    $repositories = fn_github_api_request($app, $url, 'GET', $params);

    fn_github_save_repositories($app, $repositories);

    echo "Page: " . $i . " ";
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I converted the index.php file so that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The loop adds a &lt;code&gt;page&lt;/code&gt; parameter equal to &lt;code&gt;$i&lt;/code&gt; and executes a request to the API.&lt;/li&gt;
&lt;li&gt;I also made a function &lt;code&gt;fn_github_save_repositories&lt;/code&gt;, into which I moved the code responsible for saving to the database.&lt;/li&gt;
&lt;li&gt;I added the output of the page sequence number.&lt;/li&gt;
&lt;/ol&gt;
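&lt;p&gt;For reference, &lt;code&gt;fn_github_save_repositories&lt;/code&gt; is essentially the upsert loop we already had in &lt;code&gt;index.php&lt;/code&gt;, moved into &lt;code&gt;app/func/github.php&lt;/code&gt;. A sketch is shown below; the exact version is in the repository, and the return value is my own addition.&lt;/p&gt;

```php
// Sketch of fn_github_save_repositories: the upsert loop moved out of
// index.php. Returns the number of repositories written (my addition).
function fn_github_save_repositories($app, $repositories)
{
    if (empty($repositories['items'])) {
        return 0;
    }

    $saved = 0;

    foreach ($repositories['items'] as $repository) {
        // Update the document if a repository with this id already exists,
        // insert it otherwise (upsert).
        $app['db']->repositories->updateOne(
            ['id' => $repository['id']],
            ['$set' => $repository],
            ['upsert' => true]
        );
        $saved++;
    }

    return $saved;
}
```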

&lt;p&gt;When I ran &lt;code&gt;localhost&lt;/code&gt; in the browser, I saw that the first 10 requests were executed, and then I got an error:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

Page: 1 Page: 2 Page: 3 Page: 4 Page: 5 Page: 6 Page: 7 Page: 8 Page: 
9 Page: 10 {"message":"API rate limit exceeded for *.*.*.*.
(But here's the good news: Authenticated requests get a higher rate limit. 
Check out the documentation for more details.)",
"documentation_url":"https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting"}
Page: 11 {"message":"API rate limit exceeded for *.*.*.*. 
(But here's the good news: Authenticated requests get a higher rate limit. 
Check out the documentation for more details.)",
"documentation_url":"https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting"}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We made requests to the API as unauthenticated users and ran into the rate limits. We need to add authorization. We'll do that in the next article, as adding authorization requires a few big steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Check our database with MongoDB Compass
&lt;/h2&gt;

&lt;p&gt;I ended up with 293 repository documents in the database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90tsp5i8urbtboq9sdnc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90tsp5i8urbtboq9sdnc.png" alt="MongoDB Compass - GitHub Repositories"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As a result of this refactoring, we learned how to split one script into several files and create functions. We made a lot of requests and grew our database. Admittedly, we now know that the API has rate limits.&lt;/p&gt;

&lt;p&gt;You can check and run the source code in &lt;a href="https://github.com/dbazhenov/nginx-php-mongodb-docker-compose/tree/tutorial_2_structure" rel="noopener noreferrer"&gt;the repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the next post, we will continue to modify and improve our application. We will add new functions and features. I'm sure we'll end up with a very useful app, and you'll learn how to develop it.&lt;/p&gt;

&lt;p&gt;Ask me questions; I'll be happy to answer them.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>php</category>
      <category>mongodb</category>
    </item>
    <item>
      <title>Why and how to use Docker-compose for application development</title>
      <dc:creator>Daniil Bazhenov</dc:creator>
      <pubDate>Fri, 24 Mar 2023 16:08:18 +0000</pubDate>
      <link>https://dev.to/dbazhenov/why-and-how-to-use-docker-compose-for-application-development-34n4</link>
      <guid>https://dev.to/dbazhenov/why-and-how-to-use-docker-compose-for-application-development-34n4</guid>
      <description>&lt;p&gt;I often see examples using docker in manuals and documentation for services and products. &lt;/p&gt;

&lt;p&gt;Docker containers are convenient in general, but if we use several of them, we must run each one manually with different parameters.&lt;/p&gt;

&lt;p&gt;A simple example&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FrRMjvXz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f03luwsvynv5layaee10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FrRMjvXz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f03luwsvynv5layaee10.png" alt="Image description" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we need to run PHP + Nginx + MongoDB app, we need to run 3 long commands, and if we need to change them, it will not be convenient.&lt;/p&gt;

&lt;p&gt;This is where Docker-compose comes in: it allows us to describe in a single file which containers we need to run and with which parameters, as I explained in my article on preparing the environment: &lt;a href="https://dev.to/dbazhenov/how-to-develop-a-simple-web-application-using-docker-compose-nginx-php-8-and-mongodb-6-nhi#preparing-dockercompose-with-nginx-php"&gt;How to Develop a Simple Web Application Using Docker-compose, Nginx, PHP 8, and MongoDB 6&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So, what we'll need:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Docker and Docker-compose.&lt;/li&gt;
&lt;li&gt;Prepare docker-compose.yml&lt;/li&gt;
&lt;li&gt;Run it and stop it when it is no longer needed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can use this locally or run it on a server.&lt;/p&gt;
&lt;h2&gt;
  
  
  Useful Commands
&lt;/h2&gt;

&lt;p&gt;These commands are executed quite often during development.&lt;/p&gt;

&lt;p&gt;Docker-compose runs several containers with the right environment variables and parameters, as described in the file docker-compose.yml.&lt;/p&gt;

&lt;p&gt;To start them up, we use the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We stop it with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The commands must be run from the directory where docker-compose.yml is located.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;docker-compose ps&lt;/code&gt; command lets you see what is currently running.&lt;/p&gt;

&lt;p&gt;And the command to connect to one of the containers, if necessary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it [container-name] bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here &lt;code&gt;[container-name]&lt;/code&gt; is taken from the list of running containers.&lt;/p&gt;

&lt;p&gt;Launch result in console&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UBRY03JE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9iwt1iqux5zd0vqi4vae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UBRY03JE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9iwt1iqux5zd0vqi4vae.png" alt="Docker compose up and down" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Compose features that I use
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Environment variables
&lt;/h3&gt;

&lt;p&gt;It is very convenient not to keep secrets in the source code.&lt;/p&gt;

&lt;p&gt;Environment variables can be supplied as a file or as parameters, or shared across all containers. This is much more convenient than passing them to a plain docker command.&lt;/p&gt;

&lt;p&gt;Typical examples are API keys, database credentials, and environment startup parameters.&lt;/p&gt;

&lt;p&gt;Simple use of environment variables in docker-compose.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    environment:
      DB_USERNAME: root
      DB_PASSWORD: secret
      DB_HOST: localhost 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
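
&lt;p&gt;The shared variables for all containers mentioned above can be sketched with a YAML anchor, so the block is defined once and reused (the service names and variables here are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;x-common-env: &amp;amp;common-env
  DB_HOST: localhost
  DB_USERNAME: root
  DB_PASSWORD: secret

services:
  app:
    environment: *common-env
  worker:
    environment: *common-env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;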



&lt;p&gt;I connect environment variables as a file.&lt;/p&gt;

&lt;p&gt;File docker-compose.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  php-fpm:
    image: php8-mongo
    env_file: 
      - .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;File &lt;code&gt;.env&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MONGODB_USER: dba
MONGODB_PASSWORD: secret
MONGODB_HOST: localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it: the variables can now be read by an application in any programming language. You can also use this approach on any hosting or server, and DevOps will be happy.&lt;/p&gt;
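
&lt;p&gt;For example, in PHP the variables from the .env file above can be read with getenv() (a minimal sketch; the variable names come from the example above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Read the variables supplied by docker-compose from the .env file above
$user = getenv('MONGODB_USER');
$password = getenv('MONGODB_PASSWORD');
$host = getenv('MONGODB_HOST');

echo "Connecting to MongoDB at {$host} as {$user}";
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;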

&lt;h3&gt;
  
  
  Volumes and file system
&lt;/h3&gt;

&lt;p&gt;I do not use Docker-managed volumes for files; instead, I use the volumes parameter to bind local directories to the filesystem inside the container.&lt;/p&gt;

&lt;p&gt;I develop and modify code in a local folder on my computer, which is mounted into the container where the code runs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  php-fpm:
    image: php8.2-fpm-mongo
    volumes:
      - ./app:/var/www/html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, when running a database container, I prefer to keep the data on the server's or laptop's file system. When the database container starts, I can immediately see the data files and explore them, make a backup, or transfer them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  mongodb:
    image: "percona/percona-server-mongodb:6.0.4"
    volumes:
      - ./data:/data/db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is handy during development and experimentation.&lt;/p&gt;

&lt;p&gt;Although you could also use Docker's built-in volumes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Changing application configurations in a container
&lt;/h3&gt;

&lt;p&gt;Applications often require configuration changes, which usually means editing a parameter or config file inside the container.&lt;/p&gt;

&lt;p&gt;For example, changing PHP, MongoDB, or Nginx parameters requires editing those software files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;/usr/local/etc/php-fpm.d/zz-docker.conf&lt;/li&gt;
&lt;li&gt;/etc/mongod.conf&lt;/li&gt;
&lt;li&gt;/etc/nginx/conf.d/default.conf&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can connect to the container and edit these files internally, but the next time the container starts, the settings will be reset.&lt;/p&gt;

&lt;p&gt;Instead, I copy the config file from the container into the project folder on my computer and mount it back into the container via docker-compose.yml.&lt;/p&gt;

&lt;p&gt;This way, I can edit the parameters locally, and the container uses them when it runs.&lt;/p&gt;

&lt;p&gt;Example for Nginx&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    volumes:
      - ./config/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./config/nginx/nginx.conf:/etc/nginx/nginx.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a result, my project has the following file structure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KhFhEFEz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ko0ctd8uf9be7pknbnz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KhFhEFEz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ko0ctd8uf9be7pknbnz4.png" alt="Image description" width="756" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I love it, recommend it, and use it.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>docker</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>Quick Tips to Learn Monitoring With Grafana</title>
      <dc:creator>Daniil Bazhenov</dc:creator>
      <pubDate>Wed, 22 Mar 2023 16:49:09 +0000</pubDate>
      <link>https://dev.to/dbazhenov/quick-tips-to-learn-monitoring-with-grafana-23db</link>
      <guid>https://dev.to/dbazhenov/quick-tips-to-learn-monitoring-with-grafana-23db</guid>
<description>&lt;p&gt;I recommend you try this: get a free cloud-based Grafana at &lt;a href="https://grafana.com/get/"&gt;the official site&lt;/a&gt;. The free resources are enough to get acquainted with monitoring and observability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://grafana.com/grafana/"&gt;Grafana&lt;/a&gt; is a trendy open-source monitoring software.&lt;/p&gt;

&lt;p&gt;If you are learning different technologies, this will be helpful. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You will learn and try monitoring software.&lt;/li&gt;
&lt;li&gt;You can connect your apps and servers to the Grafana cloud and use it for free at the start. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I also took some courses at Grafana University today, and I find them very useful: they are quick (about 10 minutes per course) and practical.&lt;/p&gt;

&lt;p&gt;I went through 5 blocks of practice in just one hour on the &lt;a href="https://university.grafana.com/learn/lp/27/playlist-intro-to-visualizing-data-with-dashboards-and-panels"&gt;Playlist: Intro to visualizing data with dashboards and panels&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I recommend spending this hour today learning about software monitoring. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c734YZI---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sllz1gqw4v5otylchb78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c734YZI---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sllz1gqw4v5otylchb78.png" alt="Grafana: Playlist: Intro to visualizing data with dashboards and panels" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GMZaV0qm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mt1q57vuucy7ktffs0u3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GMZaV0qm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mt1q57vuucy7ktffs0u3.png" alt="Grafana: Dashboards" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>monitoring</category>
      <category>grafana</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Make HTTP Requests to API in PHP App Using GitHub API Example and Write to Percona Server for MongoDB</title>
      <dc:creator>Daniil Bazhenov</dc:creator>
      <pubDate>Tue, 21 Mar 2023 12:02:01 +0000</pubDate>
      <link>https://dev.to/dbazhenov/how-to-make-http-requests-to-api-in-php-app-using-github-api-example-and-write-to-percona-server-for-mongodb-3gi3</link>
      <guid>https://dev.to/dbazhenov/how-to-make-http-requests-to-api-in-php-app-using-github-api-example-and-write-to-percona-server-for-mongodb-3gi3</guid>
      <description>&lt;p&gt;We learn how to work with HTTP requests in PHP and make API requests using the GitHub API as an example. We'll get the data from the API and save it to the database.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/dbazhenov/how-to-develop-a-simple-web-application-using-docker-compose-nginx-php-8-and-mongodb-6-nhi"&gt;the previous article&lt;/a&gt;, we developed a simple application that connects to the MongoDB database (&lt;a href="https://www.percona.com/software/mongodb/percona-server-for-mongodb?utm_source=percona-community&amp;amp;utm_medium=blog&amp;amp;utm_campaign=daniil"&gt;Percona Server for MongoDB&lt;/a&gt;) and writes documents in a loop. We only used &lt;a href="https://getcomposer.org/"&gt;Composer&lt;/a&gt; packages to work with MongoDB. We have set up the Docker-compose environment and have the app/ directory where the application code is located. We can edit the code and check the result in the browser without restarting the Docker containers.&lt;/p&gt;

&lt;p&gt;We plan to make HTTP requests to GitHub API. I prefer using the popular PHP HTTP client library &lt;a href="https://packagist.org/packages/guzzlehttp/guzzle"&gt;guzzlehttp/guzzle&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here we go!  &lt;/p&gt;

&lt;h2&gt;
  
  
  1. Preparation
&lt;/h2&gt;

&lt;p&gt;Open the &lt;a href="https://dev.to/dbazhenov/how-to-develop-a-simple-web-application-using-docker-compose-nginx-php-8-and-mongodb-6-nhi"&gt;project&lt;/a&gt; folder in the console or git clone the &lt;a href="https://github.com/dbazhenov/nginx-php-mongodb-docker-compose"&gt;repository&lt;/a&gt;, and open the folder.&lt;/p&gt;

&lt;p&gt;Start Docker Compose&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will start the containers, and you can run &lt;code&gt;localhost&lt;/code&gt; in the browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Connect PHP libraries using Composer
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://getcomposer.org/"&gt;Composer&lt;/a&gt; is a package manager for PHP. You can find out-of-the-box libraries and functions for almost any task. I prefer to use something other than Frameworks for small projects, but just plug-in packages.&lt;/p&gt;

&lt;p&gt;Open the &lt;code&gt;app/composer.json&lt;/code&gt; file in the code editor.&lt;/p&gt;

&lt;p&gt;Add two new packages: guzzle (HTTP client) and dd (debug helper).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "require": {
        "guzzlehttp/guzzle": "^7.0",
        "larapack/dd": "1.*",
        "mongodb/mongodb": "^1.6",
        "ext-mongodb": "^1.6"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Connect to the container with php-fpm&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it [php-fpm-container-name] bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the new packages with the command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;composer update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it; the packages are installed. More about Composer is &lt;a href="https://getcomposer.org/doc/01-basic-usage.md"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Let's choose an API and read the rules of use.
&lt;/h2&gt;

&lt;p&gt;I chose &lt;a href="https://docs.github.com/en/rest/search?apiVersion=2022-11-28#search-repositories"&gt;the GitHub API&lt;/a&gt; as an example because it is available with and without authorization and has good documentation. &lt;/p&gt;

&lt;p&gt;First, we will get a list of repositories. We will use the repository search API to find all repositories containing the topic mongodb.&lt;/p&gt;

&lt;p&gt;I opened &lt;a href="https://docs.github.com/en/rest/search?apiVersion=2022-11-28#search-repositories"&gt;the REST API&lt;/a&gt; documentation and found the method, parameters, and examples I needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mx_g_tOz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q4pgy6456btmrnohjb5x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mx_g_tOz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q4pgy6456btmrnohjb5x.jpg" alt="GitHub REST API" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Let's make the first request to the API.
&lt;/h2&gt;

&lt;p&gt;Open the index.php file in the app folder and add the following code somewhere at the beginning, after the inclusion of &lt;code&gt;vendor/autoload.php&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$http = new \GuzzleHttp\Client();

$url = 'https://api.github.com/search/repositories';

$params = [
    'q' =&amp;gt; 'topic:mongodb'
];

try {
    $response = $http-&amp;gt;request('GET', $url , [
        'query' =&amp;gt; $params
    ]);              

    $result = $response-&amp;gt;getBody();

    $result = json_decode($result, true);

} catch (GuzzleHttp\Exception\ClientException $e) {
    $response = $e-&amp;gt;getResponse();
    $responseBodyAsString = $response-&amp;gt;getBody()-&amp;gt;getContents();
    echo $responseBodyAsString;
}  

dd($result);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To summarize, here is what we did:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Initialized the Guzzle HTTP client.&lt;/li&gt;
&lt;li&gt;Created a variable with a GitHub API URL.&lt;/li&gt;
&lt;li&gt;Created an array with query parameters; in our case, the q search string.&lt;/li&gt;
&lt;li&gt;Executed the request in a try-catch construct; if the request fails, an exception is thrown, and we can read the error.&lt;/li&gt;
&lt;li&gt;Received the API response into the $result variable.&lt;/li&gt;
&lt;li&gt;Converted the response from JSON to an array.&lt;/li&gt;
&lt;li&gt;Printed the array in a convenient form using dd($result).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Run &lt;code&gt;localhost&lt;/code&gt; in the browser and see the result. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7CtVJSyu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rt8f954csbfoe15lkzkq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7CtVJSyu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rt8f954csbfoe15lkzkq.jpg" alt="GitHub REST API - first request to the API" width="800" height="704"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Exploring the result
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;dd()&lt;/code&gt; is a function that helps print any variable, array, or object. Just list them separated by commas.&lt;/p&gt;
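
&lt;p&gt;A quick sketch of how that looks (this assumes the larapack/dd package we added earlier; the data is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$repo = ['name' =&amp;gt; 'example-repo', 'stars' =&amp;gt; 42]; // hypothetical data

// Prints both values in a readable form and then stops the script
dd($repo, 'done');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;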

&lt;p&gt;You have printed the response from the API and see the array:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;total_count - says there are 72945 repositories for our query. This looks like an excellent dataset for our experiments with analytics and the database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;items - contains an array of 30 arrays with repository data.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I suggest modifying our API query to get a slightly different result. We'll add sorting to the query.&lt;/p&gt;

&lt;p&gt;Change the $params array in the index.php file.&lt;/p&gt;

&lt;p&gt;Let's add sorting by the count of &lt;code&gt;help-wanted-issues&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;app/index.php&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$params = [
    'q' =&amp;gt; 'topic:mongodb language:php',
    'sort' =&amp;gt; 'help-wanted-issues'
];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run localhost in your browser, and you will see a different set of items in the API response.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Let's save the data to the database
&lt;/h2&gt;

&lt;p&gt;Instead of dd(), we add a foreach loop over the items and print the data of one repository.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;app/index.php&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (!empty($result['items'])) {
    foreach($result['items'] as $key =&amp;gt; $repository) {

        dd($repository);

    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great, we see the data. This will be our document in the collection of repositories. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qN5SvX9i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vmrapwomz9diin2h8gcv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qN5SvX9i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vmrapwomz9diin2h8gcv.jpg" alt="GitHub REST API - save the data to the database" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Replace &lt;code&gt;dd($repository)&lt;/code&gt; with a query that saves the document to the MongoDB database.&lt;/p&gt;

&lt;p&gt;Remember to keep the database client initialization, the connection, and the reading of environment variables above this code.&lt;/p&gt;
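
&lt;p&gt;If you are following along, that initialization looks roughly like this (a sketch; the database name and the use of these environment variable names are assumptions based on the previous article):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Build the connection string from the environment variables set in .env
$client = new MongoDB\Client(sprintf(
    'mongodb://%s:%s@%s',
    getenv('MONGODB_USER'),
    getenv('MONGODB_PASSWORD'),
    getenv('MONGODB_HOST')
));

$db = $client-&amp;gt;github; // hypothetical database name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;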

&lt;p&gt;&lt;em&gt;app/index.php&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (!empty($result['items'])) {
    foreach($result['items'] as $key =&amp;gt; $repository) {

        $updateResult = $db-&amp;gt;repositories-&amp;gt;updateOne(
            [
                'id' =&amp;gt; $repository['id'] // query 
            ],
            ['$set' =&amp;gt; $repository],
            ['upsert' =&amp;gt; true]
        );

    }
}

dd($result);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;localhost&lt;/code&gt; in your browser. I usually put something like dd('finish') at the end, since it stops the script and shows that it completed.&lt;/p&gt;

&lt;p&gt;You may notice that we make an updateOne request to the repositories collection with the upsert parameter. This means that if a document with this id ('id' =&amp;gt; $repository['id']) already exists in the database, it will be updated; if not, it will be inserted. This lets me avoid duplicate data across multiple test runs.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Connecting to a database using MongoDB Compass.
&lt;/h2&gt;

&lt;p&gt;Let's verify that we did everything correctly and the data appeared in the database.&lt;/p&gt;

&lt;p&gt;Besides, we can play with the data.&lt;/p&gt;

&lt;p&gt;Open our new collection and check it out. We can also create an index on the id key in the Indexes tab, if you did not create the index with a PHP query.&lt;/p&gt;
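
&lt;p&gt;Creating that index from PHP instead can be sketched like this (assuming the same $db connection used in the loop above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// A unique index on the GitHub repository id keeps the upserts fast
// and prevents duplicate documents
$db-&amp;gt;repositories-&amp;gt;createIndex(['id' =&amp;gt; 1], ['unique' =&amp;gt; true]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;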

&lt;p&gt;Download and install MongoDB Compass &lt;a href="https://www.mongodb.com/products/compass"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6Cl185Uw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0pncuz4myqk2sxioh9f9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Cl185Uw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0pncuz4myqk2sxioh9f9.jpg" alt="GitHub REST API - MongoDB Compass" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OwheuQDp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aqv5kzfeoyp6umup6hps.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OwheuQDp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aqv5kzfeoyp6umup6hps.jpg" alt="the GitHub REST API - MongoDB Compass" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The code of the application at the current stage can be obtained from &lt;a href="https://github.com/dbazhenov/nginx-php-mongodb-docker-compose/tree/tutorial_step_1_api"&gt;the repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We did a great job and figured out how to make requests to the API; the same approach works for any HTTP request.&lt;/p&gt;

&lt;p&gt;In my next posts, we'll figure out how to add authorization and work with API limits. GitHub only allows unauthenticated users a small number of requests. We aim to get a lot of valuable data from the API and will do that soon.&lt;/p&gt;

&lt;p&gt;We will also devote some time to the structure of the files and folders of our application. It's already getting big, and we must make it more convenient.&lt;/p&gt;

</description>
      <category>php</category>
      <category>beginners</category>
      <category>mongodb</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
