<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: JohnDotOwl</title>
    <description>The latest articles on DEV Community by JohnDotOwl (@johndotowl).</description>
    <link>https://dev.to/johndotowl</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F584776%2F8b363975-e987-4d22-8b8d-3cdfed3c9feb.png</url>
      <title>DEV Community: JohnDotOwl</title>
      <link>https://dev.to/johndotowl</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/johndotowl"/>
    <language>en</language>
    <item>
      <title>PostgreSQL 17 Installation on Ubuntu 24.04</title>
      <dc:creator>JohnDotOwl</dc:creator>
      <pubDate>Thu, 10 Oct 2024 07:58:53 +0000</pubDate>
      <link>https://dev.to/johndotowl/postgresql-17-installation-on-ubuntu-2404-5bfi</link>
      <guid>https://dev.to/johndotowl/postgresql-17-installation-on-ubuntu-2404-5bfi</guid>
      <description>&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;br&gt;
PostgreSQL 17 is the latest major release of the popular open source relational database. It comes with many new features and improvements such as enhanced monitoring capabilities, improved performance, logical replication enhancements, additional server configurations, and security advancements.&lt;/p&gt;

&lt;p&gt;In this tutorial, we will cover how to install PostgreSQL 17 on Ubuntu 24.04. We will also look at some basic configuration to allow remote connections, enable password authentication, and get started with creating users, databases, and more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
Ubuntu 24.04&lt;br&gt;
Root privileges or sudo access&lt;br&gt;
Use &lt;code&gt;sudo su&lt;/code&gt; to switch to root from the default &lt;code&gt;ubuntu&lt;/code&gt; user&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1 - Add PostgreSQL Repository
&lt;/h3&gt;

&lt;p&gt;First, update the package index and install required packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install gnupg2 wget nano
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the PostgreSQL 17 repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" &amp;gt; /etc/apt/sources.list.d/pgdg.list'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
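The command above substitutes your release codename via &lt;code&gt;lsb_release -cs&lt;/code&gt;; on Ubuntu 24.04 (codename noble) the resulting line in /etc/apt/sources.list.d/pgdg.list is:

```
deb http://apt.postgresql.org/pub/repos/apt noble-pgdg main
```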



&lt;p&gt;Import the repository signing key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/postgresql.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the package list:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2 - Install PostgreSQL 17
&lt;/h3&gt;

&lt;p&gt;Install PostgreSQL 17:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install postgresql-17
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start and enable PostgreSQL service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start postgresql
sudo systemctl enable postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the version to confirm PostgreSQL 17 is installed:&lt;br&gt;
&lt;code&gt;psql --version&lt;/code&gt;&lt;br&gt;
You should see output like: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;psql (PostgreSQL) 17.0 (Ubuntu 17.0-1.pgdg24.04+1)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Step 3 - Configure PostgreSQL 17
&lt;/h3&gt;

&lt;p&gt;Edit postgresql.conf to allow remote connections by changing &lt;code&gt;listen_addresses&lt;/code&gt; to &lt;code&gt;'*'&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/postgresql/17/main/postgresql.conf
listen_addresses = '*'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Configure PostgreSQL to use md5 password authentication by editing pg_hba.conf. This is important if you wish to connect remotely, e.g. via &lt;a href="https://pgadmin.org/" rel="noopener noreferrer"&gt;pgAdmin&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sed -i '/^host/s/ident/md5/' /etc/postgresql/17/main/pg_hba.conf
sudo sed -i '/^local/s/peer/trust/' /etc/postgresql/17/main/pg_hba.conf
echo "host all all 0.0.0.0/0 md5" | sudo tee -a /etc/postgresql/17/main/pg_hba.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart PostgreSQL for changes to take effect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Allow PostgreSQL port through the firewall:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw allow 5432/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4 - Connect to PostgreSQL
&lt;/h3&gt;

&lt;p&gt;Connect as the postgres user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo -u postgres psql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set a password for postgres user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER USER postgres PASSWORD 'VeryStronGPassWord@1137';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
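With a password set, you can also create a regular role and a database from the same psql prompt. The names and password below are placeholders; substitute your own:

```sql
-- Placeholder names and password; run inside psql as the postgres superuser
CREATE USER myuser WITH PASSWORD 'ChangeMe@123';
CREATE DATABASE mydb OWNER myuser;
GRANT ALL PRIVILEGES ON DATABASE mydb TO myuser;
```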



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
We have successfully installed PostgreSQL 17 on Ubuntu, performed basic configuration such as enabling remote connections and password authentication, and set a password for the &lt;code&gt;postgres&lt;/code&gt; user. PostgreSQL is now ready for development or production workloads.&lt;/p&gt;

&lt;p&gt;PostgreSQL 17 has significant improvements and is highly recommended as an upgrade.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vacuum memory optimization: Up to 20x less memory consumption&lt;/li&gt;
&lt;li&gt;Write throughput: Up to 2x better for high concurrency workloads&lt;/li&gt;
&lt;li&gt;Query performance: Improved for IN clauses using B-tree indexes&lt;/li&gt;
&lt;li&gt;JSON support: Added JSON_TABLE, JSON constructors, and query functions&lt;/li&gt;
&lt;li&gt;COPY command: Up to 2x faster when exporting large rows&lt;/li&gt;
&lt;li&gt;Logical replication: No need to drop slots during major version upgrades&lt;/li&gt;
&lt;li&gt;Backups: New incremental backup support in pg_basebackup&lt;/li&gt;
&lt;li&gt;Monitoring: Added progress reporting for index vacuuming&lt;/li&gt;
&lt;li&gt;I/O improvements: New streaming I/O interface for faster sequential scans&lt;/li&gt;
&lt;li&gt;MERGE enhancements: Added RETURNING clause and ability to update views&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Building and Running a Rust Application with Docker on Mac (Apple Silicon)</title>
      <dc:creator>JohnDotOwl</dc:creator>
      <pubDate>Wed, 13 Mar 2024 15:43:23 +0000</pubDate>
      <link>https://dev.to/johndotowl/building-and-running-a-rust-application-with-docker-on-mac-apple-silicon-1p88</link>
      <guid>https://dev.to/johndotowl/building-and-running-a-rust-application-with-docker-on-mac-apple-silicon-1p88</guid>
      <description>&lt;p&gt;This guide will walk you through the process of building and running a Rust application using Docker. The provided Dockerfile is designed to build a Rust application for the ARM64 architecture and run it in a lightweight Debian container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dockerfile
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# syntax=docker/dockerfile:1

################################################################################
# Create a stage for building the application.

ARG RUST_VERSION=1.69.0
ARG APP_NAME=my_app_name
FROM --platform=linux/arm64 rust:${RUST_VERSION}-slim-bullseye AS build
ARG APP_NAME
WORKDIR /app
RUN apt-get update &amp;amp;&amp;amp; apt-get upgrade -y &amp;amp;&amp;amp; \
    apt-get install -y pkg-config openssl libssl-dev ca-certificates &amp;amp;&amp;amp; \
    apt-get clean
# Build the application.
# Leverage a cache mount to /usr/local/cargo/registry/
# for downloaded dependencies and a cache mount to /app/target/ for 
# compiled dependencies which will speed up subsequent builds.
# Leverage a bind mount to the src directory to avoid having to copy the
# source code into the container. Once built, copy the executable to an
# output directory before the cache mounted /app/target is unmounted.
RUN --mount=type=bind,source=src,target=src \
    --mount=type=bind,source=Cargo.toml,target=Cargo.toml \
    --mount=type=bind,source=Cargo.lock,target=Cargo.lock \
    --mount=type=cache,target=/app/target/ \
    --mount=type=cache,target=/usr/local/cargo/registry/ \
    &amp;lt;&amp;lt;EOF
set -e
rustup target add aarch64-unknown-linux-gnu
cargo build --locked --release --target aarch64-unknown-linux-gnu
cp ./target/aarch64-unknown-linux-gnu/release/$APP_NAME /bin/server
EOF

################################################################################
# Create a new stage for running the application that contains the minimal
# runtime dependencies for the application. This often uses a different base
# image from the build stage where the necessary files are copied from the build
# stage.
#
# The example below uses the debian bullseye image as the foundation for running the app.
# By specifying the "bullseye-slim" tag, it will also use whatever happens to be the
# most recent version of that tag when you build your Dockerfile. If
# reproducibility is important, consider using a digest
# (e.g., debian@sha256:ac707220fbd7b67fc19b112cee8170b41a9e97f703f588b2cdbbcdcecdd8af57).
FROM --platform=linux/arm64 debian:bullseye-slim AS final

# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#user
ARG UID=10001
RUN adduser \
    --disabled-password \
    --gecos "" \
    --home "/nonexistent" \
    --shell "/sbin/nologin" \
    --no-create-home \
    --uid "${UID}" \
    appuser
USER appuser

# Copy the executable from the "build" stage.
COPY --from=build /bin/server /bin/

# Expose the port that the application listens on.
EXPOSE 1000

# What the container should run when it is started.
CMD ["/bin/server"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Understanding the Dockerfile
&lt;/h3&gt;

&lt;p&gt;The Dockerfile consists of two stages: a build stage and a final stage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build Stage
&lt;/h3&gt;

&lt;p&gt;The build stage is responsible for compiling the Rust application. It uses the official Rust Docker image as a base and installs the necessary dependencies. The Dockerfile then leverages several Docker features to optimize the build process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bind Mounts&lt;/strong&gt;: The source code is mounted into the container using a bind mount, avoiding the need to copy the code into the container for each build.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cache Mounts&lt;/strong&gt;: The Cargo registry and compiled dependencies are cached using cache mounts, speeding up subsequent builds by reusing downloaded and compiled artifacts.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The build stage compiles the Rust application with &lt;code&gt;cargo build --locked --release --target aarch64-unknown-linux-gnu&lt;/code&gt; and copies the resulting binary to &lt;code&gt;/bin/server&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Stage
&lt;/h3&gt;

&lt;p&gt;The final stage creates a minimal runtime environment for the application using the &lt;code&gt;debian:bullseye-slim&lt;/code&gt; image as a base. It performs the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create a Non-Privileged User&lt;/strong&gt;: A non-privileged user named appuser is created to run the application, following best practices for container security.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Copy the Binary&lt;/strong&gt;: The compiled binary from the build stage is copied into the final image at /bin/server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Expose Port:&lt;/strong&gt; The container exposes port 1000, which is the port the application listens on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Default Command&lt;/strong&gt;: The container is configured to run the /bin/server binary when started.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Building the Docker Image
&lt;/h3&gt;

&lt;p&gt;To build the Docker image, run the following command in the directory containing the Dockerfile:&lt;br&gt;
&lt;code&gt;docker build --platform linux/arm64 -t my_app_name .&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Running the Docker Container
&lt;/h3&gt;

&lt;p&gt;To run the Docker container, use the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -d --name my_app_name -p 1000:1000 --restart on-failure --memory 1g --memory-swap 1g my_app_name&lt;/code&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>docker</category>
      <category>webdev</category>
      <category>devops</category>
    </item>
    <item>
      <title>Laravel 10 FrankenPHP with PHP 8.3 with Ubuntu 22.04</title>
      <dc:creator>JohnDotOwl</dc:creator>
      <pubDate>Wed, 24 Jan 2024 13:43:04 +0000</pubDate>
      <link>https://dev.to/johndotowl/laravel-10-frankenphp-with-php-83-with-ubuntu-2204-2a14</link>
      <guid>https://dev.to/johndotowl/laravel-10-frankenphp-with-php-83-with-ubuntu-2204-2a14</guid>
      <description>&lt;p&gt;Operation System : Ubuntu 22.04 ARM &lt;br&gt;
Server Provider : AWS - t4g.micro (2vCPU with 1GB Ram)&lt;/p&gt;

&lt;p&gt;Before getting started with FrankenPHP, you need to prepare your operating system with the libraries and PHP version required for Laravel.&lt;/p&gt;

&lt;p&gt;Update and Upgrade your Operating System&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt upgrade
sudo apt install zip unzip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install PHP 8.3&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add Ondrej's PPA
sudo add-apt-repository ppa:ondrej/php 
# Press enter when prompted.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run another update &amp;amp; upgrade to pull in the latest PHP 8.3 packages from Ondrej's PPA&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt upgrade
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Install new PHP 8.3 packages
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install php8.3 php8.3-cli php8.3-xml php8.3-{bz2,curl,mbstring,intl}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Navigate to the work directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /var/www
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Composer&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
sudo php -r "if (hash_file('sha384', 'composer-setup.php') === 'e21205b207c3ff031906575712edab6f13eb0b361f2085f1f1237b7126d785e826a450292b6cfd1d64d92e6563bbde02') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
sudo php composer-setup.php
sudo php -r "unlink('composer-setup.php');"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most likely, you will want to put &lt;code&gt;composer.phar&lt;/code&gt; into a directory on your PATH so you can call &lt;code&gt;composer&lt;/code&gt; from any directory (a global install), for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv composer.phar /usr/local/bin/composer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;Create a new Laravel project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo composer create-project laravel/laravel project_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;oops i'm still editing&lt;/p&gt;

</description>
      <category>php</category>
      <category>laravel</category>
      <category>nginx</category>
      <category>webdev</category>
    </item>
    <item>
      <title>StreamingLLM - 4 Million Tokens, 22x Faster</title>
      <dc:creator>JohnDotOwl</dc:creator>
      <pubDate>Tue, 10 Oct 2023 02:14:50 +0000</pubDate>
      <link>https://dev.to/johndotowl/streamingllm-4-million-token-22x-faster-422i</link>
      <guid>https://dev.to/johndotowl/streamingllm-4-million-token-22x-faster-422i</guid>
      <description>&lt;p&gt;Recent advances in natural language processing have led to the development of large language models (LLMs) like GPT-3 that can generate remarkably human-like text. However, a major limitation of LLMs is that their performance degrades rapidly as more context is provided. Attempting to feed LLMs unlimited data results in the model slowing down and eventually crashing.&lt;/p&gt;

&lt;p&gt;A new technique called Streaming LLM aims to solve this problem by allowing unlimited context to be provided to LLMs without sacrificing speed or exhausting memory.&lt;/p&gt;

&lt;h3&gt;
  
  
  How LLMs Currently Work
&lt;/h3&gt;

&lt;p&gt;LLMs like GPT-3 are based on the transformer architecture, in which attention cost grows quadratically with the number of tokens processed. Feeding more context to the model therefore rapidly increases compute time and memory usage.&lt;/p&gt;

&lt;p&gt;Current approaches to dealing with long context sequences include attention windowing, where older tokens are discarded, or keeping only the most recent tokens. However, both these methods result in the model losing the broader context.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjd4qq6td0d1n9lc0ozbg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjd4qq6td0d1n9lc0ozbg.gif" alt="Image description" width="56" height="27"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Insights Behind Streaming LLM
&lt;/h3&gt;

&lt;p&gt;The creators of Streaming LLM made an interesting observation about how attention is distributed in LLMs. They found that most of the attention is paid to the first few tokens, with diminishing attention paid to later tokens.&lt;/p&gt;

&lt;p&gt;So even with a large context sequence, the first few tokens and the most recent tokens contain the majority of relevant information. The tokens in the middle receive negligible attention.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Streaming LLM Works
&lt;/h3&gt;

&lt;p&gt;Streaming LLM takes advantage of this attention phenomenon. Instead of feeding all tokens to the model, it provides:&lt;/p&gt;

&lt;p&gt;The first few important tokens (the "attention sinks")&lt;br&gt;
A rolling cache of the most recent tokens&lt;/p&gt;

&lt;p&gt;As new tokens are added, expired tokens from the middle are dropped without significantly impacting performance.&lt;/p&gt;

&lt;p&gt;This approach allows virtually unlimited context to be provided to the LLM while maintaining efficiency and avoiding memory issues.&lt;/p&gt;
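The eviction policy described above can be sketched in a few lines of shell. This is a toy illustration of the bookkeeping only, not the actual KV-cache implementation; the counts (4 sink tokens, a window of 6) are arbitrary example values:

```shell
# Toy sketch: keep the first `sinks` tokens plus the `window` most recent ones
sinks=4
window=6
cache=()
for t in $(seq 1 12); do
  cache+=("$t")
  if (( ${#cache[@]} > sinks + window )); then
    # drop tokens from the middle: the sinks stay, the rest is the recent window
    cache=("${cache[@]:0:$sinks}" "${cache[@]: -$window}")
  fi
done
echo "${cache[@]}"   # tokens 5 and 6 have been evicted from the middle
```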

&lt;h3&gt;
  
  
  Does This Solve LLMs' Context Limitations?
&lt;/h3&gt;

&lt;p&gt;For certain use cases like long-form content generation, the Streaming LLM approach works very well. The context limit is essentially removed.&lt;/p&gt;

&lt;p&gt;However, for cases like summarizing academic papers, detailed context will still be lost. So there are still limitations to how much context LLMs can effectively utilize.&lt;/p&gt;

&lt;p&gt;Nonetheless, Streaming LLM is an exciting first step towards enabling LLMs to leverage far more data context than previously possible. The potential to enhance LLMs' understanding of long conversations and documents is immense.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Future Possibilities
&lt;/h3&gt;

&lt;p&gt;While Streaming LLM has its limitations, it opens up new avenues for feeding more data to LLMs. With further research, more advanced techniques could be developed to allow LLMs to thoroughly comprehend vastly more information.&lt;/p&gt;

&lt;p&gt;The future of LLMs is bright. With innovations like Streaming LLM, models stand to become capable of far more complex and nuanced language understanding and generation.&lt;/p&gt;

&lt;p&gt;Github - &lt;a href="https://github.com/mit-han-lab/streaming-llm" rel="noopener noreferrer"&gt;https://github.com/mit-han-lab/streaming-llm&lt;/a&gt;&lt;br&gt;
Research Paper - &lt;a href="https://arxiv.org/abs/2309.17453" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2309.17453&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>chatgpt</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Optimizing PostgreSQL Configuration with PgTune</title>
      <dc:creator>JohnDotOwl</dc:creator>
      <pubDate>Mon, 02 Oct 2023 11:16:28 +0000</pubDate>
      <link>https://dev.to/johndotowl/optimizing-postgresql-configuration-with-pgtune-217k</link>
      <guid>https://dev.to/johndotowl/optimizing-postgresql-configuration-with-pgtune-217k</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;PostgreSQL is a powerful open source relational database management system. However, like any complex software, PostgreSQL has numerous configuration parameters that can be tweaked and tuned for optimal performance. Choosing the right configuration settings can sometimes be difficult, but luckily there are tools available to help. One such tool is pgTune, which provides recommendations for PostgreSQL configuration based on your hardware specs and intended database workload. In this post, we’ll look at how to use pgTune to optimize your PostgreSQL installation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1aihgjqrt55d086xnau.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1aihgjqrt55d086xnau.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Overview of PgTune
&lt;/h3&gt;

&lt;p&gt;PgTune is a web-based tool created by PostgreSQL experts to simplify the process of configuring PostgreSQL settings. The pgTune website provides an easy-to-use form where you enter details about your hardware and database workload. PgTune then analyzes this information and provides a configuration file with optimized parameters.&lt;/p&gt;

&lt;p&gt;Some of the key features of PgTune include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input fields for CPU, RAM, disks, database size, connections, etc.&lt;/li&gt;
&lt;li&gt;Support for PostgreSQL versions 9.0 and above.&lt;/li&gt;
&lt;li&gt;Settings optimized for OLTP or DW/OLAP workloads.&lt;/li&gt;
&lt;li&gt;Additional tweaks for SSD storage, cloud environments, and more.&lt;/li&gt;
&lt;li&gt;Output as a postgresql.conf file or comparison to your current config.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Using PgTune
&lt;/h3&gt;

&lt;p&gt;Using pgTune to optimize your PostgreSQL config involves three simple steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Gather details about your hardware specs and database workload. This includes information like CPU cores, RAM size, disk types/sizes, database size, max connections, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter the details into the pgTune web form. PgTune has sections for server hardware, storage, workload characterization, and other parameters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download the generated postgresql.conf file. The output will be a complete config file with pgTune's recommended settings based on what you entered.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/etc/postgresql/16/main/postgresql.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once you have the optimized postgresql.conf file, you can replace your existing config with it to implement pgTune's recommendations. Make sure to restart PostgreSQL after replacing the config file.&lt;/p&gt;

&lt;h3&gt;
  
  
  PgTune Settings Overview
&lt;/h3&gt;

&lt;p&gt;Some of the key PostgreSQL settings that pgTune will optimize include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shared buffers - Sets memory for cache&lt;/li&gt;
&lt;li&gt;Work memory - For sorting operations&lt;/li&gt;
&lt;li&gt;Maintenance work mem - For admin tasks like VACUUM&lt;/li&gt;
&lt;li&gt;Checkpoint segments - For crash recovery&lt;/li&gt;
&lt;li&gt;Effective cache size - Estimated memory for indexes/data&lt;/li&gt;
&lt;li&gt;Synchronous commit - For write performance vs reliability&lt;/li&gt;
&lt;/ul&gt;
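For a concrete sense of what pgTune produces, a generated postgresql.conf fragment might look like the following. The numbers are illustrative only; the actual values depend entirely on the hardware and workload details you enter:

```
# Example values only - generated figures vary with your inputs
max_connections = 100
shared_buffers = 1GB
effective_cache_size = 3GB
maintenance_work_mem = 256MB
work_mem = 5242kB
random_page_cost = 1.1
```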

&lt;h3&gt;
  
  
  Modifying postgresql.conf
&lt;/h3&gt;

&lt;p&gt;Make a backup copy of the existing postgresql.conf&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp /etc/postgresql/16/main/postgresql.conf /etc/postgresql/16/main/postgresql.conf.bak
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Restart PostgreSQL after editing the file. If the service fails to start, switch back to the original conf using the following commands:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp /etc/postgresql/16/main/postgresql.conf.bak /etc/postgresql/16/main/postgresql.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;PgTune is a great open source tool for simplifying one of the most complex parts of PostgreSQL - configuration tuning. By automatically generating an optimized postgresql.conf based on your system specs and database workload, pgTune makes it easy to get the most out of PostgreSQL performance. The tool is constantly updated by expert PostgreSQL administrators and is highly recommended for any serious Postgres user. Give pgTune a try and see if it can take your database performance to the next level!&lt;/p&gt;

</description>
      <category>database</category>
      <category>postgres</category>
      <category>sql</category>
      <category>backend</category>
    </item>
    <item>
      <title>Host Docker Image Builds with GitHub</title>
      <dc:creator>JohnDotOwl</dc:creator>
      <pubDate>Mon, 25 Sep 2023 12:37:58 +0000</pubDate>
      <link>https://dev.to/johndotowl/host-docker-image-builds-with-github-3jeo</link>
      <guid>https://dev.to/johndotowl/host-docker-image-builds-with-github-3jeo</guid>
      <description>&lt;p&gt;Docker has become an essential tool for developing and deploying applications. Pairing Docker with GitHub workflows allows you to easily build Docker images and push them to registries like GitHub Container Registry (GHCR) on every code change. In this post, I'll show you how to set up a GitHub workflow to build and publish a Docker image.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Sample Project
&lt;/h3&gt;

&lt;p&gt;For this example, let's say we have a simple Node.js application called project-name that we want to Dockerize. The source code is hosted in a GitHub repo.&lt;/p&gt;

&lt;p&gt;Our goal is to build a Docker image and push it to GHCR every time code is pushed to GitHub. This will allow us to always have an up-to-date Docker image built from the latest source.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the GitHub Workflow
&lt;/h3&gt;

&lt;p&gt;GitHub workflows are defined in YAML files stored in the .github/workflows directory of your repo. Let's create a file called docker-image.yml with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Docker Image CI for GHCR

on: push

jobs:
  build_and_publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: build and push the image
        run: |
          echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io --username ${{ github.actor }} --password-stdin
          docker build . --tag ghcr.io/username/project-name:latest
          docker push ghcr.io/username/project-name:latest

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This defines a workflow that will run on every push to GitHub. It has one job called &lt;code&gt;build_and_publish&lt;/code&gt; that will build the Docker image and push it to GHCR.&lt;/p&gt;

&lt;p&gt;The steps use the GitHub Actions checkout action to clone the repo code. Then we build the image, tag it with the GHCR repository path, and push it.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; secret allows us to authenticate with GHCR; note that the workflow's token needs the &lt;code&gt;packages: write&lt;/code&gt; permission to push images.&lt;/p&gt;
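As a variant, you can additionally tag each image with the commit SHA so every build stays addressable; &lt;code&gt;github.sha&lt;/code&gt; is a built-in context value. The step below is a sketch using the same placeholder names as above:

```yaml
      - name: build and push a SHA-tagged image
        run: |
          docker build . --tag ghcr.io/username/project-name:${{ github.sha }}
          docker push ghcr.io/username/project-name:${{ github.sha }}
```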

&lt;h3&gt;
  
  
  Triggering the Workflow
&lt;/h3&gt;

&lt;p&gt;With the workflow defined, we can now trigger it by pushing code to GitHub.&lt;/p&gt;

&lt;p&gt;For example, if we make a small change and push it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git commit -m "Update README"
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will trigger the workflow, build the image, and push it to your GHCR repo.&lt;/p&gt;

&lt;p&gt;We now have automated Docker image builds on every code change! The same pattern can be used to build and publish images to Docker Hub or other registries.&lt;/p&gt;

&lt;p&gt;GitHub workflows are powerful for automating all kinds of development tasks. For Docker specifically, they provide an easy way to implement continuous integration and deployment pipelines.&lt;/p&gt;

&lt;p&gt;Abstract&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This post shows one of the most minimal GitHub workflows for automated Docker image builds on code pushes. The straightforward YAML provides a great example of no-frills Docker CI/CD.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>github</category>
      <category>docker</category>
      <category>containers</category>
      <category>beginners</category>
    </item>
    <item>
      <title>PostgreSQL 16 Installation on Ubuntu 22.04</title>
      <dc:creator>JohnDotOwl</dc:creator>
      <pubDate>Sun, 24 Sep 2023 11:43:55 +0000</pubDate>
      <link>https://dev.to/johndotowl/postgresql-16-installation-on-ubuntu-2204-51ia</link>
      <guid>https://dev.to/johndotowl/postgresql-16-installation-on-ubuntu-2204-51ia</guid>
      <description>&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;br&gt;
PostgreSQL 16 is the latest major release of the popular open source relational database. It comes with many new features and improvements such as enhanced monitoring capabilities, improved performance, logical replication enhancements, additional server configurations, and security advancements.&lt;/p&gt;

&lt;p&gt;In this tutorial, we will cover how to install PostgreSQL 16 on Ubuntu 22.04 We will also look at some basic configuration to allow remote connections, enable password authentication, and get started with creating users, databases etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
Ubuntu 22.04&lt;br&gt;
Root privileges or sudo access&lt;br&gt;
Use &lt;code&gt;sudo su&lt;/code&gt; to switch to the root user instead of the default ubuntu user.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1 - Add PostgreSQL Repository
&lt;/h3&gt;

&lt;p&gt;First, update the package index and install required packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install gnupg2 wget nano
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the PostgreSQL 16 repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" &amp;gt; /etc/apt/sources.list.d/pgdg.list'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Import the repository signing key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/postgresql.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the package list:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2 - Install PostgreSQL 16
&lt;/h3&gt;

&lt;p&gt;Install PostgreSQL 16 and contrib modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install postgresql-16 postgresql-contrib-16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start and enable PostgreSQL service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start postgresql
sudo systemctl enable postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the version and ensure it's PostgreSQL 16:&lt;br&gt;
&lt;code&gt;psql --version&lt;/code&gt;&lt;br&gt;
You should get output similar to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;psql (PostgreSQL) 16.0 (Ubuntu 16.0-1.pgdg22.04+1)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Step 3 - Configure PostgreSQL 16
&lt;/h3&gt;

&lt;p&gt;Edit &lt;code&gt;postgresql.conf&lt;/code&gt; to allow remote connections by changing &lt;code&gt;listen_addresses&lt;/code&gt; to &lt;code&gt;'*'&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/postgresql/16/main/postgresql.conf
listen_addresses = '*'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Configure PostgreSQL to use md5 password authentication by editing &lt;code&gt;pg_hba.conf&lt;/code&gt;. This is important if you wish to connect remotely, e.g. via &lt;a href="https://pgadmin.org/" rel="noopener noreferrer"&gt;pgAdmin&lt;/a&gt;. Note that switching local connections to &lt;code&gt;trust&lt;/code&gt; disables password checks for local logins, which is convenient for development but not recommended in production:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sed -i '/^host/s/ident/md5/' /etc/postgresql/16/main/pg_hba.conf
sudo sed -i '/^local/s/peer/trust/' /etc/postgresql/16/main/pg_hba.conf
echo "host all all 0.0.0.0/0 md5" | sudo tee -a /etc/postgresql/16/main/pg_hba.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart PostgreSQL for changes to take effect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Allow the PostgreSQL port (5432) through the firewall:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw allow 5432/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
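&lt;p&gt;To confirm the changes took effect, check that PostgreSQL is now listening on all interfaces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ss -tlnp | grep 5432
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the postgres process bound to &lt;code&gt;0.0.0.0:5432&lt;/code&gt; rather than only &lt;code&gt;127.0.0.1:5432&lt;/code&gt;.&lt;/p&gt;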



&lt;h3&gt;
  
  
  Step 4 - Connect to PostgreSQL
&lt;/h3&gt;

&lt;p&gt;Connect as the postgres user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo -u postgres psql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set a password for the postgres user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER USER postgres PASSWORD 'VeryStronGPassWord@1137';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
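&lt;p&gt;While connected as the postgres user, you can also create an application user and database (the names and password below are examples; substitute your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE USER appuser WITH PASSWORD 'AnotherStrongPassword@42';
CREATE DATABASE appdb OWNER appuser;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;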



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
We have successfully installed PostgreSQL 16 on Ubuntu, performed some basic configuration such as enabling remote connections and password authentication, and set a password for the postgres user. PostgreSQL is now ready to be used for development or production workloads.&lt;/p&gt;

</description>
      <category>database</category>
      <category>postgres</category>
      <category>sql</category>
    </item>
    <item>
      <title>Why Your H1 and Title Should Be Different for SEO</title>
      <dc:creator>JohnDotOwl</dc:creator>
      <pubDate>Thu, 14 Sep 2023 13:08:39 +0000</pubDate>
      <link>https://dev.to/johndotowl/why-your-h1-and-title-should-be-different-for-seo-349m</link>
      <guid>https://dev.to/johndotowl/why-your-h1-and-title-should-be-different-for-seo-349m</guid>
      <description>&lt;h2&gt;
  
  
  Why Your H1 and Title Should Be Different for SEO
&lt;/h2&gt;

&lt;p&gt;Search engine optimization (SEO) is crucial for driving relevant organic traffic to your website. Two key on-page elements that influence SEO are your title tag and H1 heading.&lt;/p&gt;

&lt;p&gt;While your title and H1 should complement each other, they serve different purposes and should not be identical. Here's why:&lt;/p&gt;

&lt;h2&gt;
  
  
  The Title Tag Focuses on Keywords
&lt;/h2&gt;

&lt;p&gt;Your title tag appears in search engine results pages (SERPs) as the clickable link. It should contain your most important keywords and compel users to click. Having your target keyword appear early in the title tag signals to search engines what the page is about.&lt;/p&gt;

&lt;p&gt;The title also needs to make sense for users. Closely mirroring the H1 in the title tag could result in odd, lengthy titles.&lt;/p&gt;

&lt;h2&gt;
  
  
  The H1 Summarizes the Content
&lt;/h2&gt;

&lt;p&gt;Your H1 appears on the page itself as the main heading. It should summarize or introduce the content that follows. The H1 gives structure to your content for both users and search bots.&lt;/p&gt;

&lt;p&gt;Repeating the title tag as the H1 would result in repetitive, potentially awkward headings. It also wastes an opportunity to include related keywords.&lt;/p&gt;

&lt;h2&gt;
  
  
  Complement Without Copying
&lt;/h2&gt;

&lt;p&gt;The title and H1 should complement each other without being identical. For example, the title could focus on a location-based keyword while the H1 introduces the content topic.&lt;/p&gt;

&lt;p&gt;Having variation also gives search engines more signals about your page's relevance. It helps avoid keyword stuffing.&lt;/p&gt;

&lt;p&gt;In summary, keep your title tag and H1 heading focused on your target keywords, but make them distinct. This balance will help optimize the page for both search engines and users.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 1
&lt;/h3&gt;

&lt;p&gt;Title: Affordable Pet Grooming in Los Angeles&lt;br&gt;
H1: Professional Pet Grooming Services&lt;/p&gt;

&lt;p&gt;The title focuses on the location and service keywords while the H1 introduces the content topic in more detail.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 2
&lt;/h3&gt;

&lt;p&gt;Title: Best Mountain Bike Trails in Vermont&lt;br&gt;
H1: Top Mountain Biking Trails to Try in Vermont&lt;/p&gt;

&lt;p&gt;Again, the title targets the location and keywords, while the H1 sets up the content as a list of recommendations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 3
&lt;/h3&gt;

&lt;p&gt;Title: How to Potty Train Your Puppy in 7 Days&lt;br&gt;
H1: A Step-by-Step Guide to Potty Train Any Puppy in Just 7 Days&lt;/p&gt;

&lt;p&gt;The title highlights the keyword phrase "potty train your puppy" while the H1 elaborates on the content as a guide.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 4
&lt;/h3&gt;

&lt;p&gt;Title: Best Plumbers in Chicago - Top 10 Options&lt;br&gt;
H1: Finding a Reliable Plumber in Chicago&lt;/p&gt;
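&lt;p&gt;In the page markup, the pairing from this example looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;head&amp;gt;
  &amp;lt;title&amp;gt;Best Plumbers in Chicago - Top 10 Options&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
  &amp;lt;h1&amp;gt;Finding a Reliable Plumber in Chicago&amp;lt;/h1&amp;gt;
&amp;lt;/body&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;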

&lt;h2&gt;
  
  
  What is Google's point of view?
&lt;/h2&gt;

&lt;p&gt;It is a bad idea to duplicate your title tag content in your first-level header. If your page's title and h1 tags match, the page may appear over-optimized to search engines.&lt;br&gt;
Also, using the same content in titles and headers means a lost opportunity to incorporate other relevant keywords for your page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developers.google.com/search/docs/appearance/title-link" rel="noopener noreferrer"&gt;https://developers.google.com/search/docs/appearance/title-link&lt;/a&gt;&lt;/p&gt;

</description>
      <category>seo</category>
    </item>
    <item>
      <title>Redirect to 404 - NextJs 13 App Directory</title>
      <dc:creator>JohnDotOwl</dc:creator>
      <pubDate>Mon, 04 Sep 2023 13:39:25 +0000</pubDate>
      <link>https://dev.to/johndotowl/redirect-to-404-nextjs-13-app-directory-3hc2</link>
      <guid>https://dev.to/johndotowl/redirect-to-404-nextjs-13-app-directory-3hc2</guid>
      <description>&lt;p&gt;The next/navigation package introduced in Next.js 13 App Directory makes it easy to handle 404 pages and redirect users when a page is not found.&lt;/p&gt;

&lt;p&gt;One way to do this is with the notFound() function. Here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { notFound } from 'next/navigation';

export default async function Component({ params }) {
  const { slug } = params;

  const res = await fetch(`https://api.example.com/item/${slug}`);

  if (res.status === 404) {
    notFound();
  }

  const item = await res.json();

  // rest of component
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This fetches data for an item based on a slug. If the API returns a 404 status, we call notFound().&lt;/p&gt;

&lt;p&gt;This renders the custom 404 page defined in &lt;code&gt;src/app/not-found.tsx&lt;/code&gt;. As soon as &lt;code&gt;notFound()&lt;/code&gt; is called, Next.js stops rendering the component and shows the 404 UI, providing a smooth experience.&lt;/p&gt;

&lt;p&gt;The key things to note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;notFound()&lt;/code&gt; prevents the rest of the component from rendering if called&lt;/li&gt;
&lt;li&gt;It shows the 404 page immediately, with no loading state&lt;/li&gt;
&lt;li&gt;The custom 404 page is defined in &lt;code&gt;src/app/not-found.tsx&lt;/code&gt; as usual&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some other examples of when you may want to use &lt;code&gt;notFound()&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User visits a deleted page&lt;/li&gt;
&lt;li&gt;Invalid slug parameter&lt;/li&gt;
&lt;li&gt;Authentication failure&lt;/li&gt;
&lt;/ul&gt;
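&lt;p&gt;For reference, a minimal custom 404 page in the App Directory might look like this (a bare-bones sketch you can style however you like):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// src/app/not-found.tsx
import Link from 'next/link';

export default function NotFound() {
  return (
    &amp;lt;div&amp;gt;
      &amp;lt;h2&amp;gt;Page Not Found&amp;lt;/h2&amp;gt;
      &amp;lt;p&amp;gt;Could not find the requested resource.&amp;lt;/p&amp;gt;
      &amp;lt;Link href="/"&amp;gt;Return Home&amp;lt;/Link&amp;gt;
    &amp;lt;/div&amp;gt;
  );
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;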

&lt;p&gt;The &lt;code&gt;notFound()&lt;/code&gt; function makes handling 404s clean and idiomatic in Next.js. Give it a try in your Next 13 app!&lt;/p&gt;

&lt;p&gt;Let me know in the comments if you would like me to expand on any part of this post. I'm happy to include more details on how to customize the 404 page, integrate with a CMS, or cover any other use cases!&lt;/p&gt;

</description>
      <category>nextjs</category>
    </item>
    <item>
      <title>Dynamic Metadata NextJS 13 App Directory</title>
      <dc:creator>JohnDotOwl</dc:creator>
      <pubDate>Sat, 19 Aug 2023 05:40:45 +0000</pubDate>
      <link>https://dev.to/johndotowl/dynamic-metadata-nextjs-13-app-directory-1iek</link>
      <guid>https://dev.to/johndotowl/dynamic-metadata-nextjs-13-app-directory-1iek</guid>
      <description>&lt;p&gt;When building a website with Next.js 13, you'll likely need to display dynamic metadata like product names. There are two approaches for handling metadata in Next.js app directory: static and dynamic.&lt;/p&gt;

&lt;p&gt;Static metadata looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const metadata: Metadata = {
  title: '...',
  description: '...',
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, dynamic metadata is more common for sites displaying changing content. After researching solutions, I realized the only way to show dynamic metadata was to call an API on each page. But fetching metadata from an API on every page load felt inefficient.&lt;/p&gt;

&lt;p&gt;That's when I discovered request memoization - deduplicating similar API requests to avoid wasting resources. Here's how it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a shared cache for metadata API responses&lt;/li&gt;
&lt;li&gt;Before fetching, check if response already exists in cache&lt;/li&gt;
&lt;li&gt;If it does, return cached data instead of calling API again&lt;/li&gt;
&lt;li&gt;If not, make the API call and cache the response&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By memoizing metadata API requests, we avoid duplicated network requests. The metadata is still dynamic, but now fetched only once per unique content.&lt;/p&gt;
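&lt;p&gt;The caching steps above can be sketched in a few lines (an illustrative in-memory cache with a stand-in loader, not the actual Next.js internals):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let loaderCalls = 0;

// Stands in for a real network request
async function loadItem(url: string): Promise&amp;lt;string&amp;gt; {
  loaderCalls++;
  return 'data for ' + url;
}

const cache = new Map&amp;lt;string, Promise&amp;lt;string&amp;gt;&amp;gt;();

// Cache the promise itself so concurrent callers are deduplicated too
function memoFetch(url: string): Promise&amp;lt;string&amp;gt; {
  if (!cache.has(url)) {
    cache.set(url, loadItem(url));
  }
  return cache.get(url)!;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Calling &lt;code&gt;memoFetch('/item/1')&lt;/code&gt; twice results in a single loader call, while &lt;code&gt;memoFetch('/item/2')&lt;/code&gt; triggers a fresh one.&lt;/p&gt;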

&lt;p&gt;This optimization allows displaying dynamic, SEO-friendly metadata in Next.js 13 without taxing our API. Request memoization is a simple but powerful strategy for improving performance in React and Next.js apps.&lt;/p&gt;

&lt;p&gt;If you need a code reference, check it out below!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import Link from 'next/link';
import { Metadata, ResolvingMetadata } from 'next';
interface Params {
  slug: string;
}

export async function generateMetadata({
  params,
}: {
  params: Params;
}): Promise&amp;lt;Metadata&amp;gt; {
  const { slug } = params;
  // read route params
  const url = 'https://api.example.com/v1/component/' + slug;

  // fetch data
  const component_item = await fetch(url).then((res) =&amp;gt; res.json());

  return {
    title: component_item.name,
    description: component_item.description,
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Request Memoization
&lt;/h3&gt;

&lt;p&gt;React extends the fetch API to automatically memoize requests that have the same URL and options. This means you can call a fetch function for the same data in multiple places in a React component tree while only executing it once.&lt;/p&gt;

&lt;p&gt;For example, if you need to use the same data across a route (e.g. in a Layout, Page, and multiple components), you do not have to fetch data at the top of the tree then forward props between components. Instead, you can fetch data in the components that need it without worrying about the performance implications of making multiple requests across the network for the same data.&lt;/p&gt;
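&lt;p&gt;In practice, this means &lt;code&gt;generateMetadata&lt;/code&gt; and the page component can share one data-loading function and still hit the network only once per request (a sketch reusing the same example API URL as the code above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function getComponentItem(slug: string) {
  // Same URL and options, so React deduplicates the fetch across callers
  const res = await fetch('https://api.example.com/v1/component/' + slug);
  return res.json();
}

export async function generateMetadata({ params }: { params: { slug: string } }) {
  const item = await getComponentItem(params.slug);
  return { title: item.name, description: item.description };
}

export default async function Page({ params }: { params: { slug: string } }) {
  const item = await getComponentItem(params.slug);
  return &amp;lt;h1&amp;gt;{item.name}&amp;lt;/h1&amp;gt;;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;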

&lt;p&gt;&lt;strong&gt;Resources for Dynamic Metadata&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://javascript.plainenglish.io/mastering-metadata-in-next-js-a-comprehensive-guide-to-seo-excellence-ab9c2cf0dc35" rel="noopener noreferrer"&gt;https://javascript.plainenglish.io/mastering-metadata-in-next-js-a-comprehensive-guide-to-seo-excellence-ab9c2cf0dc35&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://nextjs.org/docs/app/building-your-application/caching#request-memoization" rel="noopener noreferrer"&gt;https://nextjs.org/docs/app/building-your-application/caching#request-memoization&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.slingacademy.com/article/next-js-how-to-set-page-title-and-meta-description/" rel="noopener noreferrer"&gt;https://www.slingacademy.com/article/next-js-how-to-set-page-title-and-meta-description/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.builder.io/blog/routing-seo-metadata-nextjs-13" rel="noopener noreferrer"&gt;https://www.builder.io/blog/routing-seo-metadata-nextjs-13&lt;/a&gt;&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>programming</category>
      <category>react</category>
    </item>
  </channel>
</rss>
