DEV Community

Simon Bittok

Implementing JWT Authentication in Rust using Axum

This is part 3 of this series.

Previous: Part 2: Implementing Logging

Source Code

The GitHub repository is here.

Quick Recap

In Part 1, we set up our Rust project with Axum, configured environment-based settings, and created a basic error handling system.
In Part 2, we set up our tracing system, which logs the HTTP request lifecycle using spans. Now we'll set up our data stores: PostgreSQL and Redis.

Dependencies

Add the following dependencies to the project.

cargo add sqlx -F "runtime-tokio-rustls,postgres,macros,uuid,chrono"
cargo add uuid -F "serde,v4"
cargo add chrono -F serde
cargo add redis -F tokio-comp
cargo add argon2
cargo add jsonwebtoken -F rust_crypto

Then install sqlx-cli with cargo install --locked sqlx-cli (or cargo binstall sqlx-cli if you use binstall).

Database Infrastructure

We will use Docker to spin up these containers quickly. To get started, create a compose.yaml file in the root directory of the project and configure it as follows.

services:
  postgres:
    image: postgres:alpine
    container_name: auth-postgres
    restart: unless-stopped
    env_file:
      - .env
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:alpine
    container_name: auth-redis
    restart: unless-stopped
    env_file:
      - .env
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"

volumes:
  postgres_data:
  redis_data:


Create a .env file, again at the root, and add the following variables.

POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=postgres

DATABASE_URL=postgresql://postgres:postgres@localhost:5432/postgres

REDIS_URL=redis://localhost:6379


Then, at the root of the project, run docker compose up -d. This will pull the images from Docker Hub and start your containers. To tear them down, run docker compose down -v (the -v flag also removes the named volumes).

SQLx and Migrations

Run source .env. This will set the DATABASE_URL environment variable needed by sqlx-cli. Then check that the variable has been set with echo $DATABASE_URL; it should print postgresql://postgres:postgres@localhost:5432/postgres.
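As a quick sanity check, the load-and-verify steps can be run together. Note that plain source only creates shell variables; wrapping it in set -a/set +a also exports them to child processes, and the .env entries must use the VAR=value form with no spaces around the equals sign for sourcing to work.

```shell
# Export every variable defined while sourcing .env
set -a
source .env
set +a

# Verify the variable is set
echo "$DATABASE_URL"
```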

To create a reversible pair of up/down migrations, run this at the root of your project.

sqlx migrate add -r init

This will create a new migrations folder containing two SQL files.
In the up SQL file, add the following.

-- Add up migration script here
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE "users" (
    id SERIAL PRIMARY KEY,
    pid UUID NOT NULL UNIQUE DEFAULT uuid_generate_v4(),
    email VARCHAR(255) NOT NULL UNIQUE,
    name VARCHAR(255) NOT NULL,
    password VARCHAR(255) NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_user_email ON users(email);
CREATE INDEX idx_user_pid ON users(pid);


CREATE OR REPLACE FUNCTION update_timestamp()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER update_user_updated_at_trigger
BEFORE UPDATE ON users
FOR EACH ROW
EXECUTE FUNCTION update_timestamp();

and in the down file add this.

-- Add down migration script here

-- Triggers
DROP TRIGGER IF EXISTS update_user_updated_at_trigger ON users;

-- Indices
DROP INDEX IF EXISTS idx_user_pid;
DROP INDEX IF EXISTS idx_user_email;

-- Tables
DROP TABLE IF EXISTS users;

-- Functions
DROP FUNCTION IF EXISTS update_timestamp;

-- Extensions
DROP EXTENSION IF EXISTS "uuid-ossp";


The up migration creates the users table (along with the extension, indices, function, and trigger) when we run the migration script, and the down migration drops everything again in reverse order.

Now run the migration.

sqlx migrate run

To check that your migration has been applied, run the following command.

 docker exec -it auth-postgres psql -U postgres

This will open the psql shell. At the postgres=# prompt, run \dt. You should see two tables: _sqlx_migrations and your users table.

To exit, run \q.

Storage Configuration

In the configuration files, add the database and Redis configs.

# Other config
database:
  uri: postgresql://postgres:postgres@localhost:5432/postgres
  username: postgres
  password: postgres
  host: localhost
  port: 5432
  database: postgres
  ssl: false

redis:
  uri: redis://localhost:6379


Create a db.rs file inside the config module, then add the following configuration.

use redis::{Client, aio::MultiplexedConnection};
use serde::{Deserialize, Serialize};
use sqlx::{
    ConnectOptions, PgPool,
    postgres::{PgConnectOptions, PgSslMode},
};
use tracing::log::LevelFilter;

use crate::Result;

#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct DatabaseConfig {
    uri: String,
    username: String,
    host: String,
    password: String,
    database: String,
    port: u16,
    ssl: bool,
}

impl DatabaseConfig {
    pub async fn pool(&self) -> PgPool {
        let ssl_mode = if self.ssl {
            PgSslMode::Require
        } else {
            PgSslMode::Prefer
        };

        let mut options = PgConnectOptions::new()
            .host(&self.host)
            .username(&self.username)
            .password(&self.password)
            .port(self.port)
            .ssl_mode(ssl_mode)
            .database(&self.database);

        options = options.log_statements(LevelFilter::Debug);

        PgPool::connect_lazy_with(options)
    }

    pub fn url(&self) -> &str {
        &self.uri
    }
}

#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct RedisConfig {
    uri: String,
}

impl RedisConfig {
    pub fn client(&self) -> Result<Client> {
        Client::open(self.uri()).map_err(Into::into)
    }

    pub async fn multiplexed_connection(&self) -> Result<MultiplexedConnection> {
        self.client()?
            .get_multiplexed_async_connection()
            .await
            .map_err(Into::into)
    }

    pub fn uri(&self) -> &str {
        &self.uri
    }
}


In our Config struct, add the database and redis fields.

#[derive(Debug, Deserialize, Clone)]
pub struct Config {
    database: DatabaseConfig,
    redis: RedisConfig,
}

impl Config {
   // Rest of the code.

    pub fn redis(&self) -> &RedisConfig {
        &self.redis
    }

    pub fn database(&self) -> &DatabaseConfig {
        &self.database
    }

}


Expand the Error enum inside the error module.

#[derive(Debug, thiserror::Error)]
pub enum Error {
    // Other errors
    #[error(transparent)]
    Sqlx(#[from] sqlx::Error),
    #[error(transparent)]
    Migrate(#[from] sqlx::migrate::MigrateError),
    #[error(transparent)]
    Redis(#[from] redis::RedisError),
}

Authentication Configuration

Before we start signing up users, we need the infrastructure to support authentication. Remember, we will be using JWTs signed and verified with asymmetric RSA key pairs. Let's generate the keys into a security directory inside the config directory.

Generating Keys

At the root of the project run this command.

mkdir -p config/security/keys

This will create the two new nested directories, security and keys, inside config.

Now, to generate the access key pair (and afterwards the refresh one), we will use the openssl command available on Linux and macOS. Run these commands.

openssl genrsa -out config/security/keys/access_key.pem 4096
openssl rsa -in config/security/keys/access_key.pem -pubout -out config/security/keys/access_key_pub.pem

The first command generates a private key and stores it in the access_key.pem file. The second extracts the public key from the private key and writes it to the access_key_pub.pem file.

Now, to generate the refresh keys, repeat the same commands but replace all occurrences of access with refresh.
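Spelled out, the refresh pair is generated the same way:

```shell
# Ensure the key directory exists (created earlier with mkdir -p)
mkdir -p config/security/keys

# Generate a 4096-bit RSA private key for refresh tokens
openssl genrsa -out config/security/keys/refresh_key.pem 4096

# Extract the matching public key from the private key
openssl rsa -in config/security/keys/refresh_key.pem -pubout -out config/security/keys/refresh_key_pub.pem
```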

In your .gitignore file, add the pattern *.pem. This keeps your secret key files out of the repository.

AuthConfig

In the configuration yaml file add the following contents.

auth:
  access:
    # Paths to the keys, relative to the Cargo.toml file.
    private_key: config/security/keys/access_key.pem
    public_key: config/security/keys/access_key_pub.pem
    exp: 900 # seconds (15 minutes)
  refresh:
    private_key: config/security/keys/refresh_key.pem
    public_key: config/security/keys/refresh_key_pub.pem
    exp: 2419200 # seconds (4 weeks)

Inside the config module, create an auth.rs file and add the following contents.

use std::path::PathBuf;

use jsonwebtoken::{DecodingKey, EncodingKey};
use serde::Deserialize;

use crate::Result;

#[derive(Debug, Deserialize, Clone)]
pub struct RsaJwtConfig {
    private_key: PathBuf,
    public_key: PathBuf,
    exp: i64,
}

impl RsaJwtConfig {
    pub fn encoding_key(&self) -> Result<EncodingKey> {
        let contents = std::fs::read_to_string(&self.private_key)?;

        EncodingKey::from_rsa_pem(contents.as_bytes()).map_err(Into::into)
    }

    pub fn decoding_key(&self) -> Result<DecodingKey> {
        let contents = std::fs::read_to_string(&self.public_key)?;

        DecodingKey::from_rsa_pem(contents.as_bytes()).map_err(Into::into)
    }

    pub fn exp(&self) -> i64 {
        self.exp
    }
}

#[derive(Debug, Deserialize, Clone)]
pub struct AuthConfig {
    access: RsaJwtConfig,
    refresh: RsaJwtConfig,
}

impl AuthConfig {
    pub fn access(&self) -> &RsaJwtConfig {
        &self.access
    }

    pub fn refresh(&self) -> &RsaJwtConfig {
        &self.refresh
    }
}


Then, like we have already done above, add a variant for jsonwebtoken::errors::Error to the Error enum, and add an auth field (with a getter method) to the Config struct.

AppContext

In a web application with JWT authentication, we need several shared resources that are:

  1. Expensive to create - Database connection pools, Redis connections, and cryptographic keys take time to initialize.

  2. Used across many requests - Every authenticated request needs access to JWT keys, database, and Redis.

  3. Thread-safe and clonable - Web servers handle requests concurrently, so these resources must be safely shared.

The AppContext struct addresses these needs by bundling the resources together.

Create a context.rs module inside the src directory and add the following contents.

use jsonwebtoken::{DecodingKey, EncodingKey};
use redis::aio::MultiplexedConnection;
use sqlx::PgPool;

use crate::{
    config::{Config, RsaJwtConfig},
    error::Report,
};

#[derive(Clone)]
pub struct AppContext {
    pub config: Config,
    pub auth: AuthContext,
    pub db: PgPool,
    pub redis: MultiplexedConnection,
}

impl TryFrom<&Config> for AppContext {
    type Error = Report;

    fn try_from(config: &Config) -> Result<Self, Self::Error> {
        let db =
            tokio::runtime::Handle::current().block_on(async { config.database().pool().await });

        let auth = AuthContext {
            access: config.auth().access().try_into()?,
            refresh: config.auth().refresh().try_into()?,
        };
        let redis = tokio::runtime::Handle::current()
            .block_on(async { config.redis().multiplexed_connection().await })?;

        Ok(Self {
            config: config.clone(),
            db,
            auth,
            redis,
        })
    }
}

#[derive(Clone)]
pub struct AuthContext {
    pub access: JwtContext,
    pub refresh: JwtContext,
}

#[derive(Clone)]
pub struct JwtContext {
    pub encoding_key: EncodingKey,
    pub decoding_key: DecodingKey,
    pub exp: u64,
}

impl TryFrom<&RsaJwtConfig> for JwtContext {
    type Error = Report;

    fn try_from(config: &RsaJwtConfig) -> Result<Self, Self::Error> {
        let encoding_key = config.encoding_key()?;
        let decoding_key = config.decoding_key()?;

        let exp = config.exp() as u64;

        Ok(Self {
            encoding_key,
            decoding_key,
            exp,
        })
    }
}


Understanding the AppContext

Why the Clone trait? Modern Rust web frameworks (like Axum or Actix) pass state to handlers by cloning it. These types use Arc internally, making clones cheap: each clone just increments a reference count rather than duplicating connections.
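A minimal std-only sketch of why those clones are cheap. The Pool struct below is a hypothetical stand-in for types like PgPool, which wrap their state in an Arc:

```rust
use std::sync::Arc;

// Hypothetical stand-in for an expensive shared resource.
struct Pool {
    url: String,
}

fn main() {
    let shared = Arc::new(Pool {
        url: "postgresql://localhost:5432".to_string(),
    });

    // "Cloning" for a handler only bumps the reference count;
    // no second Pool (or database connection) is created.
    let for_handler = Arc::clone(&shared);

    assert_eq!(Arc::strong_count(&shared), 2);
    assert_eq!(for_handler.url, shared.url);
    println!("clones share one Pool: {}", for_handler.url);
}
```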

You might wonder why we use Handle::current().block_on() from Tokio. That is because TryFrom is a synchronous trait, but initialising the database and Redis connections is asynchronous. This conversion occurs once during application startup, before the async runtime begins serving requests, so the one-time blocking cost is acceptable: it happens at startup, not per request.

Conclusion

In the next chapter, I will show you how to sign up, sign in, and sign out users using stateless JWTs.

This Series

Part 1: Project Setup & Configuration
Part 2: Implementing Logging
Part 3: Database Setup with SQLx and PostgreSQL (you are here)
Next: Part 4: Creating & Authenticating Users (coming soon)
