<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Max Diamond</title>
    <description>The latest articles on DEV Community by Max Diamond (@dmdboi).</description>
    <link>https://dev.to/dmdboi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F419286%2Fa4968e43-2969-474e-8f54-31f223a2650a.jpg</url>
      <title>DEV Community: Max Diamond</title>
      <link>https://dev.to/dmdboi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dmdboi"/>
    <language>en</language>
    <item>
      <title>Automated PostgreSQL Backups in Docker: Complete Guide with pg_dump</title>
      <dc:creator>Max Diamond</dc:creator>
      <pubDate>Fri, 12 Sep 2025 19:30:48 +0000</pubDate>
      <link>https://dev.to/dmdboi/automated-postgresql-backups-in-docker-complete-guide-with-pgdump-52a</link>
      <guid>https://dev.to/dmdboi/automated-postgresql-backups-in-docker-complete-guide-with-pgdump-52a</guid>
      <description>&lt;p&gt;Databases fail for many reasons: a bad deployment, dropped tables, disk corruption, or even something as simple as accidentally running a destructive query. When PostgreSQL is containerized, another layer of risk is introduced. If your container or volume is removed without a backup, your data is gone for good.&lt;/p&gt;

&lt;p&gt;If you are serious about your application, automated PostgreSQL backups are non-negotiable. In this guide, we'll walk through practical approaches to setting up automated backups for PostgreSQL inside Docker. We'll cover the tools PostgreSQL provides, how to schedule backups, how to rotate and prune them, and what to consider for larger or production systems.&lt;/p&gt;

&lt;p&gt;By the end, you'll have a working strategy that runs on autopilot and keeps your database safe.&lt;/p&gt;

&lt;h2&gt;
  
  
  Postgres Setup
&lt;/h2&gt;

&lt;p&gt;For demonstration, here’s a minimal &lt;code&gt;docker-compose.yml&lt;/code&gt; file running PostgreSQL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:16&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myuser&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mypass&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mydb&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;db_data:/var/lib/postgresql/data&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;db_data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives you a working Postgres instance with persistent storage. The database lives inside the &lt;code&gt;db_data&lt;/code&gt; volume. If that volume is lost or corrupted, so is your data. Backups need to live outside of this container lifecycle.&lt;/p&gt;

&lt;p&gt;Want to deploy PostgreSQL on your server? Check out our guide to &lt;a href="https://serversinc.io/blog/deploy-postgresql-on-a-vps-using-docker" rel="noopener noreferrer"&gt;deploying PostgreSQL on a VPS using Docker&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Manual Backup with pg_dump
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;pg_dump&lt;/code&gt; is the most common way to generate a logical backup of a PostgreSQL database. It exports your schema and data into a SQL file, which makes it portable and easy to restore. Running it inside Docker is straightforward with &lt;code&gt;docker exec&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec &lt;/span&gt;postgres pg_dump &lt;span class="nt"&gt;-U&lt;/span&gt; myuser mydb &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; backup.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works fine for smaller databases, but keep in mind that logical dumps can take a long time and consume significant disk space as your dataset grows. For very large databases, you might prefer physical backups with &lt;code&gt;pg_basebackup&lt;/code&gt; or file system snapshots. For most applications though, &lt;code&gt;pg_dump&lt;/code&gt; strikes the right balance between simplicity and reliability.&lt;/p&gt;

&lt;p&gt;The downside of running &lt;code&gt;pg_dump&lt;/code&gt; manually is that it only works if you remember to do it. And let’s be honest: engineers are great at automating everything except the things they’re supposed to do daily. So let’s get into automating the backups so you can sleep easy at night knowing your data is safe.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating Backups with a Sidecar Container
&lt;/h2&gt;

&lt;p&gt;To automate backups, you can run a lightweight container alongside your database that executes &lt;code&gt;pg_dump&lt;/code&gt; on a schedule. A common approach is to loop with &lt;code&gt;sleep&lt;/code&gt;, writing a new dump file every day.&lt;/p&gt;

&lt;p&gt;Here’s an example backup service added to &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;backup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:16&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pg_backup&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./backups:/backups&lt;/span&gt;
    &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="s"&gt;bash -c 'while true; do&lt;/span&gt;
        &lt;span class="s"&gt;pg_dump -h db -U myuser mydb &amp;gt; /backups/db-$(date +%F-%H-%M-%S).sql;&lt;/span&gt;
        &lt;span class="s"&gt;sleep 86400;&lt;/span&gt;
      &lt;span class="s"&gt;done'&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;PGPASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mypass&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connects to the &lt;code&gt;db&lt;/code&gt; service.&lt;/li&gt;
&lt;li&gt;Dumps the database once every 24 hours.&lt;/li&gt;
&lt;li&gt;Saves the output into a mounted &lt;code&gt;./backups&lt;/code&gt; directory with a timestamp.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s simple, but surprisingly effective. For many small projects, this sidecar approach is all you need.&lt;/p&gt;
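
&lt;p&gt;One caveat: &lt;code&gt;depends_on&lt;/code&gt; only waits for the &lt;code&gt;db&lt;/code&gt; container to start, not for PostgreSQL to accept connections, so the very first dump on a cold start can fail. A small guard at the top of the entrypoint avoids this. This is a sketch reusing the service name and credentials from the example above; &lt;code&gt;pg_isready&lt;/code&gt; ships with the postgres image:&lt;/p&gt;

```yaml
    entrypoint: >
      bash -c 'until pg_isready -h db -U myuser; do sleep 2; done;
      while true; do
        pg_dump -h db -U myuser mydb > /backups/db-$(date +%F-%H-%M-%S).sql;
        sleep 86400;
      done'
```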

&lt;h2&gt;
  
  
  Scheduling PostgreSQL Backups with Cron
&lt;/h2&gt;

&lt;p&gt;While looping with &lt;code&gt;sleep&lt;/code&gt; inside a container is fine for simple setups, cron is the tool most engineers reach for when they want proper scheduling. Cron gives you full control over when backups run, how often, and how they integrate with other maintenance tasks.&lt;/p&gt;

&lt;p&gt;On the host machine, you can add a cron entry like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0 2 * * * docker exec postgres pg_dump -U myuser mydb &amp;gt; ~/backups/db-$(date +\%F).sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a new backup every day at 2 AM. If you need more frequent dumps, you can schedule them hourly or every few minutes just by adjusting the expression.&lt;/p&gt;
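
&lt;p&gt;A few common variations (illustrative entries reusing the same container and user names as above). Note that &lt;code&gt;%&lt;/code&gt; must be escaped as &lt;code&gt;\%&lt;/code&gt; in a crontab, because cron otherwise treats a bare &lt;code&gt;%&lt;/code&gt; as a newline:&lt;/p&gt;

```
# Every hour, on the hour
0 * * * * docker exec postgres pg_dump -U myuser mydb > ~/backups/db-$(date +\%F-\%H).sql

# Every Sunday at 3 AM
0 3 * * 0 docker exec postgres pg_dump -U myuser mydb > ~/backups/weekly-$(date +\%F).sql
```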

&lt;p&gt;The advantage of cron is its flexibility. You can add post-processing steps in the same job, such as compressing the file or pushing it to cloud storage. And unlike a backup container running in a loop, cron makes it easy to see and manage schedules across your server.&lt;/p&gt;

&lt;p&gt;For environments where cron is not available, you can achieve the same effect inside a dedicated container running a cron daemon. This approach keeps all logic inside Docker and avoids relying on host-level configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Compression to Save Space
&lt;/h2&gt;

&lt;p&gt;As soon as you start keeping more than a handful of backups, file size becomes an issue. Logical dumps are plain text, and plain text compresses very well. The solution is to pipe the output through a compression tool like &lt;code&gt;gzip&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec &lt;/span&gt;postgres pg_dump &lt;span class="nt"&gt;-U&lt;/span&gt; myuser mydb | &lt;span class="nb"&gt;gzip&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; backup.sql.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This can reduce backup file sizes by 70-90%, depending on your data. For production systems with daily backups, compression is essential to keep storage costs manageable.&lt;/p&gt;
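
&lt;p&gt;SQL dumps are full of repeated keywords and column names, which is why they compress so well. You can see the effect locally without a database; this sketch just generates SQL-shaped text and compresses a copy:&lt;/p&gt;

```shell
# Generate ~800 KB of repetitive, SQL-like text and compress a copy
for i in $(seq 1 10000); do
  echo "INSERT INTO users (id, name, email) VALUES ($i, 'user$i', 'user$i@example.com');"
done > sample.sql
gzip -c sample.sql > sample.sql.gz

original=$(wc -c sample.sql | awk '{print $1}')
compressed=$(wc -c sample.sql.gz | awk '{print $1}')
echo "original: $original bytes, compressed: $compressed bytes"
```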

&lt;h2&gt;
  
  
  Backup Retention and Cleanup
&lt;/h2&gt;

&lt;p&gt;Without proper cleanup, your backup directory will eventually consume all available disk space. A good retention policy keeps recent backups readily available while pruning older ones to save space.&lt;/p&gt;

&lt;p&gt;Here's a simple retention strategy using &lt;code&gt;find&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Keep daily backups for 7 days, delete older ones&lt;/span&gt;
find ./backups &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"db-*.sql.gz"&lt;/span&gt; &lt;span class="nt"&gt;-mtime&lt;/span&gt; +7 &lt;span class="nt"&gt;-delete&lt;/span&gt;

&lt;span class="c"&gt;# More sophisticated: keep daily for 7 days, weekly for 4 weeks&lt;/span&gt;
find ./backups &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"db-*.sql.gz"&lt;/span&gt; &lt;span class="nt"&gt;-mtime&lt;/span&gt; +7 &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"*-01.sql.gz"&lt;/span&gt; &lt;span class="nt"&gt;-delete&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can integrate this directly into your cron job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Complete backup and cleanup pipeline  &lt;/span&gt;
0 2 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; docker &lt;span class="nb"&gt;exec &lt;/span&gt;postgres pg_dump &lt;span class="nt"&gt;-U&lt;/span&gt; myuser mydb | &lt;span class="nb"&gt;gzip&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ~/backups/db-&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +&lt;span class="se"&gt;\%&lt;/span&gt;F&lt;span class="si"&gt;)&lt;/span&gt;.sql.gz &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; find ~/backups &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"db-*.sql.gz"&lt;/span&gt; &lt;span class="nt"&gt;-mtime&lt;/span&gt; +7 &lt;span class="nt"&gt;-delete&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For production systems, consider a more nuanced retention policy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hourly backups for the last 24 hours&lt;/li&gt;
&lt;li&gt;Daily backups for the last 7 days&lt;/li&gt;
&lt;li&gt;Weekly backups for the last 4 weeks&lt;/li&gt;
&lt;li&gt;Monthly backups for the last 12 months&lt;/li&gt;
&lt;/ul&gt;
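
&lt;p&gt;Retention rules are worth rehearsing on throwaway files before pointing them at real backups. This sketch uses GNU &lt;code&gt;touch -d&lt;/code&gt; to fake file ages and confirms the pruning rule removes only the stale dump:&lt;/p&gt;

```shell
# Simulate a backups directory with one stale and one recent dump
mkdir -p /tmp/backup-demo
touch -d "10 days ago" /tmp/backup-demo/db-old.sql.gz
touch /tmp/backup-demo/db-new.sql.gz

# Same pruning rule as the cron job: delete dumps older than 7 days
find /tmp/backup-demo -name "db-*.sql.gz" -mtime +7 -delete

ls /tmp/backup-demo
```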

&lt;h2&gt;
  
  
  Restoring PostgreSQL Backups
&lt;/h2&gt;

&lt;p&gt;Creating backups is only half the story. You need to know how to restore them when disaster strikes. Here's how to restore from your compressed backups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Restore to existing database (will overwrite existing data)&lt;/span&gt;
&lt;span class="nb"&gt;gunzip&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; backup.sql.gz | docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; postgres psql &lt;span class="nt"&gt;-U&lt;/span&gt; myuser mydb

&lt;span class="c"&gt;# Restore to a new database for testing&lt;/span&gt;
docker &lt;span class="nb"&gt;exec &lt;/span&gt;postgres createdb &lt;span class="nt"&gt;-U&lt;/span&gt; myuser mydb_test
&lt;span class="nb"&gt;gunzip&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; backup.sql.gz | docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; postgres psql &lt;span class="nt"&gt;-U&lt;/span&gt; myuser mydb_test

&lt;span class="c"&gt;# Restore from uncompressed backup&lt;/span&gt;
docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; postgres psql &lt;span class="nt"&gt;-U&lt;/span&gt; myuser mydb &amp;lt; backup.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Always test your restore process on a non-production database first. A backup is only as good as your ability to restore from it.&lt;/p&gt;
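
&lt;p&gt;A cheap first check before any restore is verifying the archive itself: &lt;code&gt;gzip -t&lt;/code&gt; tests a compressed dump for corruption without extracting it, which is inexpensive enough to run after every backup. A minimal sketch:&lt;/p&gt;

```shell
# Create a small compressed "dump", then verify its integrity
echo "SELECT 1;" | gzip > /tmp/demo-backup.sql.gz
gzip -t /tmp/demo-backup.sql.gz && echo "archive OK"
```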

&lt;h2&gt;
  
  
  Putting It All Together
&lt;/h2&gt;

&lt;p&gt;A complete PostgreSQL backup solution combines automated scheduling, compression, and retention policies. Here's a production-ready example that ties everything together:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="c"&gt;# Complete backup script with error handling and cleanup&lt;/span&gt;
&lt;span class="nv"&gt;BACKUP_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/backups"&lt;/span&gt;
&lt;span class="nv"&gt;DB_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"mydb"&lt;/span&gt;
&lt;span class="nv"&gt;RETENTION_DAYS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;7
&lt;span class="nv"&gt;DATE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%F_%H-%M-%S&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;BACKUP_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BACKUP_DIR&lt;/span&gt;&lt;span class="s2"&gt;/db-&lt;/span&gt;&lt;span class="nv"&gt;$DATE&lt;/span&gt;&lt;span class="s2"&gt;.sql.gz"&lt;/span&gt;

&lt;span class="c"&gt;# Create backup with compression&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;docker &lt;span class="nb"&gt;exec &lt;/span&gt;postgres pg_dump &lt;span class="nt"&gt;-U&lt;/span&gt; myuser &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DB_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;gzip&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BACKUP_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Backup successful: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="nv"&gt;$BACKUP_FILE&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    &lt;span class="c"&gt;# Clean up old backups&lt;/span&gt;
    find &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BACKUP_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"db-*.sql.gz"&lt;/span&gt; &lt;span class="nt"&gt;-mtime&lt;/span&gt; +&lt;span class="nv"&gt;$RETENTION_DAYS&lt;/span&gt; &lt;span class="nt"&gt;-delete&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Cleaned up backups older than &lt;/span&gt;&lt;span class="nv"&gt;$RETENTION_DAYS&lt;/span&gt;&lt;span class="s2"&gt; days"&lt;/span&gt;
&lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Backup failed!"&lt;/span&gt;
    &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this as a script, make it executable, and add it to your crontab. This gives you a reliable backup system that runs automatically and manages storage space.&lt;/p&gt;

&lt;p&gt;The approaches in this guide scale from development environments to production systems. Start with the sidecar container for simplicity, then move to cron-based scheduling as your needs grow. The key is consistency: automated backups that run regularly are far more valuable than perfect backups that never happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simplify with Serversinc
&lt;/h2&gt;

&lt;p&gt;Sound like a lot of work? Serversinc handles all the complexity for you. Our platform combines container orchestration, cron APIs for scheduling backup commands, volume management, and automated notifications, so you can set up reliable PostgreSQL backups without the operational overhead.&lt;/p&gt;

&lt;p&gt;Try Serversinc free at &lt;a href="https://serversinc.io" rel="noopener noreferrer"&gt;serversinc.io&lt;/a&gt; and get your database backups automated in minutes.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>docker</category>
      <category>webdev</category>
      <category>containers</category>
    </item>
    <item>
      <title>Deploy PostgreSQL on a VPS using Docker</title>
      <dc:creator>Max Diamond</dc:creator>
      <pubDate>Mon, 01 Sep 2025 07:02:10 +0000</pubDate>
      <link>https://dev.to/dmdboi/deploy-postgresql-on-a-vps-using-docker-22ej</link>
      <guid>https://dev.to/dmdboi/deploy-postgresql-on-a-vps-using-docker-22ej</guid>
      <description>&lt;p&gt;PostgreSQL is one of the most popular open-source relational databases, trusted for its stability, performance, and reliability. Whether you’re building web apps, APIs, or data-driven platforms, PostgreSQL is often the go-to choice for developers.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll go through how to deploy a PostgreSQL Docker container on a VPS. For this, we'll be using &lt;a href="https://serversinc.io" rel="noopener noreferrer"&gt;Serversinc&lt;/a&gt;, a server management platform that makes deploying apps and services a breeze.&lt;/p&gt;

&lt;p&gt;You’ll also learn two recommended methods for connecting to PostgreSQL on Serversinc:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connecting an application on the same server directly to Postgres&lt;/li&gt;
&lt;li&gt;Accessing Postgres from your local machine or another remote server using an SSH tunnel&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Create a new application for Postgres
&lt;/h2&gt;

&lt;p&gt;To begin, you’ll create a new application inside the Serversinc dashboard that will run the official PostgreSQL Docker image.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;    In the Serversinc dashboard, navigate to My Applications.&lt;/li&gt;
&lt;li&gt;    Click Create new application.&lt;/li&gt;
&lt;li&gt;    Select the server where you’d like to deploy PostgreSQL from the dropdown&lt;/li&gt;
&lt;li&gt;    Give the application a name, for example: Postgres.&lt;/li&gt;
&lt;li&gt;    Leave the Type as Docker Image.&lt;/li&gt;
&lt;li&gt;    Port: You don’t need to specify a port unless you plan on exposing PostgreSQL externally (which isn’t recommended for most setups). For internal usage, leave this blank.&lt;/li&gt;
&lt;li&gt;    Leave Domain blank.&lt;/li&gt;
&lt;li&gt;    In Image &amp;amp; Tag, enter &lt;code&gt;postgres:latest&lt;/code&gt; (You can also pin to a specific version, such as &lt;code&gt;postgres:15&lt;/code&gt;, if you want consistency across deployments.)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fserversinc.io%2Fstorage%2FmCPalV549XjDzleQnGmnorZsFrSdFM8Ib2TnUtuN.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fserversinc.io%2Fstorage%2FmCPalV549XjDzleQnGmnorZsFrSdFM8Ib2TnUtuN.png" width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="9"&gt;
&lt;li&gt;    Click Create application.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After creating the app, you’ll be redirected to the Application overview page. This page is your central management hub for the Postgres container, where you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;    View deployment history&lt;/li&gt;
&lt;li&gt;    Check environment variables&lt;/li&gt;
&lt;li&gt;    Manage attached volumes&lt;/li&gt;
&lt;li&gt;    Inspect the currently running container&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where you’ll return any time you want to update, redeploy, or check the status of your PostgreSQL instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Configure environment variables
&lt;/h2&gt;

&lt;p&gt;PostgreSQL relies on a few environment variables when the container starts. These tell it which user to create, what password to assign, and what the default database should be.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the Environment tab of your new Postgres application.&lt;/li&gt;
&lt;li&gt;Add the required environment variables. For example:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POSTGRES_USER=myuser
POSTGRES_PASSWORD=securepassword
POSTGRES_DB=mydb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These values will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new database user (&lt;code&gt;POSTGRES_USER&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Assign it a password (&lt;code&gt;POSTGRES_PASSWORD&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Create a default database (&lt;code&gt;POSTGRES_DB&lt;/code&gt;) owned by that user&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can add additional environment variables if needed (such as locale or encoding options), but for most cases, these three are enough to get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Add a volume for persistence
&lt;/h2&gt;

&lt;p&gt;By default, Docker containers are ephemeral, meaning all data is stored inside the container. If you redeploy, update, or remove the container, the database would be lost. To prevent this, you should mount a volume so that PostgreSQL writes data to the server’s disk.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the Volumes tab of your Postgres application.&lt;/li&gt;
&lt;li&gt;Create a new volume and mount it to Postgres’s data directory:
&lt;code&gt;postgres-data:/var/lib/postgresql/data&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Save the volume configuration&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fserversinc.io%2Fstorage%2FGrbdWwm2j8zWhP0DGktIoXtkZ1AqO93ZLW8kJWcC.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fserversinc.io%2Fstorage%2FGrbdWwm2j8zWhP0DGktIoXtkZ1AqO93ZLW8kJWcC.png" width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ensure the &lt;code&gt;postgres-data&lt;/code&gt; directory exists first; you can run commands on your server from the Dashboard to create it beforehand.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now your database files will persist across redeployments and container restarts, ensuring your data stays safe.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Deploy PostgreSQL
&lt;/h2&gt;

&lt;p&gt;With everything configured, you’re ready to launch your PostgreSQL container.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the Deployments tab in your Postgres application.&lt;/li&gt;
&lt;li&gt;Click Deploy.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Serversinc will now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pull the official &lt;strong&gt;PostgreSQL Docker image&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Apply your environment variables (user, password, database)&lt;/li&gt;
&lt;li&gt;Mount the persistent volume for data storage&lt;/li&gt;
&lt;li&gt;Start the container on your selected server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Within a few moments, PostgreSQL will be up and running, ready for connections. You can confirm the deployment status and logs directly from the &lt;strong&gt;Application overview&lt;/strong&gt; tab.&lt;/p&gt;

&lt;p&gt;That’s it — you now have a fully functioning PostgreSQL database running on your server, managed through Serversinc.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Connecting to Postgres
&lt;/h2&gt;

&lt;p&gt;Once your PostgreSQL container is running, you can connect to it in a few ways depending on your setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 1: Connect from an application on the same server
&lt;/h3&gt;

&lt;p&gt;If your app is running on the &lt;strong&gt;same server&lt;/strong&gt; as PostgreSQL, you can connect directly using the container’s internal network:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Host: localhost or the container’s internal hostname&lt;/li&gt;
&lt;li&gt;Port: 5432 (default PostgreSQL port)&lt;/li&gt;
&lt;li&gt;Username: The value you set in &lt;code&gt;POSTGRES_USER&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Password: The value you set in &lt;code&gt;POSTGRES_PASSWORD&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Database: The value you set in &lt;code&gt;POSTGRES_DB&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, in Node.js using pg:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Pool&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pg&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Pool&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;localhost&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5432&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;myuser&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;securepassword&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mydb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT NOW()&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;end&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This method keeps your database private and avoids exposing PostgreSQL to the internet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 2: Connect via SSH tunnel
&lt;/h3&gt;

&lt;p&gt;If you need to connect from your local machine or another remote server, an SSH tunnel lets you securely access PostgreSQL without opening ports publicly:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ssh -L 5432:localhost:5432 user@your-server-ip&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;user@your-server-ip&lt;/code&gt; → Replace with your server login&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;5432:localhost:5432&lt;/code&gt; → Maps your local port to the server’s PostgreSQL port&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the tunnel is active, you can connect using &lt;code&gt;localhost:5432&lt;/code&gt; as if you were on the server itself.&lt;/p&gt;
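
&lt;p&gt;For example, with the tunnel open, &lt;code&gt;psql&lt;/code&gt; (assuming it is installed on your local machine) connects through it using the credentials set in the environment variables earlier:&lt;/p&gt;

```shell
psql -h localhost -p 5432 -U myuser -d mydb
```

&lt;p&gt;GUI clients such as pgAdmin or TablePlus work the same way: point them at &lt;code&gt;localhost:5432&lt;/code&gt; while the tunnel is running.&lt;/p&gt;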

&lt;p&gt;This method ensures that your database remains secure behind your server’s firewall while still allowing remote access.&lt;/p&gt;

&lt;p&gt;Not already using SSH Keys? Read our guide on &lt;a href="https://serversinc.io/blog/how-to-generate-ssh-keys-and-connect-to-your-vps-securely-2025" rel="noopener noreferrer"&gt;how to Generate SSH Keys and Connect to Your VPS Securely&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of running PostgreSQL on Serversinc
&lt;/h2&gt;

&lt;p&gt;Running PostgreSQL on Serversinc combines the power of a self-hosted database with simple management and monitoring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Easy Deployment&lt;/strong&gt;: Launch a Postgres container in just a few clicks, with environment variables and persistent storage pre-configured.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full Control&lt;/strong&gt;: Retain complete control over your server, database versions, and configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrated Monitoring&lt;/strong&gt;: Check logs, view deployment history, and receive alerts if your Postgres container stops — all from the dashboard.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;Deploying PostgreSQL on Serversinc is fast, secure, and fully under your control. In just a few clicks, you can launch a containerized Postgres instance, configure environment variables, attach a persistent volume, and connect your applications on the same server or remotely via SSH.&lt;/p&gt;

&lt;p&gt;With Serversinc’s dashboard, you can monitor your database, view logs, and receive alerts if anything goes wrong, giving you peace of mind and full visibility.&lt;/p&gt;

&lt;p&gt;For small projects up to production-grade applications, running PostgreSQL on Serversinc makes database management simple, reliable, and efficient.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>postgres</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to install NGINX on Ubuntu 22.04</title>
      <dc:creator>Max Diamond</dc:creator>
      <pubDate>Fri, 22 Sep 2023 16:19:39 +0000</pubDate>
      <link>https://dev.to/dmdboi/how-to-install-nginx-on-ubuntu-2204-1p28</link>
      <guid>https://dev.to/dmdboi/how-to-install-nginx-on-ubuntu-2204-1p28</guid>
      <description>&lt;h2&gt;
  
  
  What is NGINX?
&lt;/h2&gt;

&lt;p&gt;NGINX, pronounced "engine-x," is an open-source, high-performance web server, reverse proxy, and load balancer that has redefined how web applications are delivered to users worldwide. In the Node.js world, NGINX is the go-to web server for connecting your apps to the internet. So let's dive into how to set it up on Ubuntu 22.04, and how to configure your app with server blocks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing NGINX on Ubuntu 22.04
&lt;/h2&gt;

&lt;p&gt;First, we'll need to install NGINX on your server. If you're using a server provided by DigitalOcean, it may already be installed and running.&lt;/p&gt;

&lt;p&gt;To install NGINX, start by running the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once installed, you can head to your server's IP address to see the default NGINX page. &lt;/p&gt;
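&lt;p&gt;If you'd rather verify from the terminal than the browser, the following commands (assuming systemd, as on stock Ubuntu 22.04) confirm the service is up:&lt;/p&gt;

```shell
# Check that the nginx service is active
sudo systemctl status nginx

# Fetch the default page headers from the server itself;
# a working install responds with an nginx Server header
curl -I http://localhost
```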

&lt;h2&gt;
  
  
  Configuring a Server Block
&lt;/h2&gt;

&lt;p&gt;Assuming you have an application running, likely with PM2, we'll move on to configuring a server block to enable access from the internet.&lt;/p&gt;

&lt;p&gt;First, we'll create a new file at &lt;code&gt;/etc/nginx/sites-available/yourapp.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Within this file, we'll add the following config. Be sure to change the &lt;a href="http://localhost:3333" rel="noopener noreferrer"&gt;http://localhost:3333&lt;/a&gt; to point at the port your app is running on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;  # Port to listen on (typically 80 for HTTP)
    server_name yourapp.com www.yourapp.com;  # Your domain name(s)

    location / {
        proxy_pass http://localhost:3333;  # Your Express app's address
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    # Optionally, configure access and error logs
    access_log /var/log/nginx/yourapp_access.log;
    error_log /var/log/nginx/yourapp_error.log;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and exit (in nano, press &lt;code&gt;CTRL + X&lt;/code&gt;, then &lt;code&gt;Y&lt;/code&gt; to confirm).&lt;/p&gt;
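&lt;p&gt;On Ubuntu, NGINX only loads configuration from &lt;code&gt;sites-enabled&lt;/code&gt;, so the new file won't take effect until it's symlinked there. Using the placeholder filename from above:&lt;/p&gt;

```shell
# Enable the site by linking it into sites-enabled
sudo ln -s /etc/nginx/sites-available/yourapp.com /etc/nginx/sites-enabled/

# Optionally remove the default site so it doesn't answer requests
# meant for your server block
sudo rm /etc/nginx/sites-enabled/default
```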

&lt;h2&gt;
  
  
  Reload NGINX
&lt;/h2&gt;

&lt;p&gt;Before we go ahead and restart NGINX, we'll test that NGINX is happy with our changes using &lt;code&gt;sudo nginx -t&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Finally, we'll apply the changes with &lt;code&gt;sudo systemctl restart nginx&lt;/code&gt; (or &lt;code&gt;sudo systemctl reload nginx&lt;/code&gt;, which picks up the new config without dropping active connections).&lt;/p&gt;
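&lt;p&gt;The two steps chain naturally, so the restart only runs if the config test passes:&lt;/p&gt;

```shell
# Validate the configuration, then restart only on success
sudo nginx -t && sudo systemctl restart nginx
```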

&lt;h2&gt;
  
  
  Accessing your app
&lt;/h2&gt;

&lt;p&gt;As long as your app is running and NGINX is configured properly, you should be able to access it via your server's IP address or a domain pointing at your server.&lt;/p&gt;

</description>
      <category>ubuntu</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
