Introduction
Moving on from where we stopped, we'll focus on setting up our database (PostgreSQL) and Redis connections here. We'll leverage the awesome Rust ecosystem and our previous setup to make this seamless. Redis is needed to help us forcefully destroy some tokens and to efficiently store our session cookies later on.
Source code
The source code for this series is hosted on GitHub via:
rust-auth
A full-stack secure and performant authentication system using rust's Actix web and JavaScript's SvelteKit.
This application resulted from this series of articles and it's currently live here (I have disabled the backend from running live).
Run locally
You can run the application locally by first cloning it:
~/$ git clone https://github.com/Sirneij/rust-auth.git
After that, change directory into the backend and frontend subdirectories in different terminals. Then follow the instructions in each subdirectory to run them.
Implementation
You can get an overview of the code for this article on GitHub.
Step 1: Create a users submodule in the routes module
This step is not really that relevant here, but to keep us focused, I decided to include it anyway. Let's prepare our application for the task ahead. In the src/routes folder, create a subfolder and name it users. Make the new folder a module:
~/rust-auth/backend$ mkdir src/routes/users && touch src/routes/users/mod.rs src/routes/users/register.rs
Open up src/routes/users/mod.rs:
// src/routes/users/mod.rs
mod register;
Then link the newly created module to its parent module:
// src/routes/mod.rs
...
mod users;
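Since register.rs is empty for now, here is a purely hypothetical placeholder you could drop into it in the meantime; the real registration handler is built in a later article, so feel free to leave the file empty instead:

// src/routes/users/register.rs (hypothetical placeholder, replaced later in the series)
#[actix_web::post("/users/register/")]
pub async fn register_user() -> actix_web::HttpResponse {
    // Respond with 501 Not Implemented until the actual registration logic lands.
    actix_web::HttpResponse::NotImplemented().finish()
}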
Let's leave that there and focus on this article's main business — linking our application up with a database and redis.
Step 2: Install database connection dependencies and set them up
As discussed in the previous article, we'll be using SQLx to asynchronously interact with the database. It's not an ORM, so we'll be writing raw SQL which, though perhaps tedious, lets us learn good old SQL along the way. If you need an ORM, you can check out diesel. Let's install SQLx:
~/rust-auth/backend$ cargo add sqlx --features runtime-actix-rustls,postgres,uuid,chrono,migrate
We are activating the runtime-actix-rustls (since we use actix-web), postgres (our database of choice), uuid (for IDs), chrono (to support the Rust chrono crate used for dates and times), and migrate (to allow automatic migrations) features. What do we want to migrate? We'll be migrating some SQL tables. To start, let's create a migrations folder at the root of our application. That's the default location where SQLx expects it; however, you can change this. Whenever you run a migration, this folder is checked to determine whether or not the database schema has been altered.
~/rust-auth/backend$ mkdir migrations
Now, using the SQLx CLI, we can generate .sql files for our tables in the migrations folder:
~/rust-auth/backend$ sqlx migrate add -r users_table
You should now see two .sql files generated. The -r flag creates reversible migrations with corresponding "up" and "down" scripts. If the above command doesn't work (you may need to install the SQLx CLI first with cargo install sqlx-cli), don't worry, we'll make it work.
Let's add some credentials to the settings/base.yaml file for our DB and redis connections:
# settings/base.yaml
...
database:
  username: "<your_db_username>"
  password: "<your_db_password>"
  port: <your_db_port>
  host: "<your_db_host>"
  database_name: "<your_db_name>"
  require_ssl: false
redis:
  uri: "<your_redis_uri>"
  pool_max_open: 16
  pool_max_idle: 8
  pool_timeout_seconds: 1
  pool_expire_seconds: 60
These are some basic settings you need to supply for your local setup of PostgreSQL and Redis. Proceeding to src/settings.rs:
// src/settings.rs
use sqlx::ConnectOptions;

/// Global settings for exposing all preconfigured variables
#[derive(serde::Deserialize, Clone)]
pub struct Settings {
    pub application: ApplicationSettings,
    pub debug: bool,
    pub database: DatabaseSettings,
    pub redis: RedisSettings,
}
...
/// Redis settings for the entire app
#[derive(serde::Deserialize, Clone, Debug)]
pub struct RedisSettings {
    pub uri: String,
    pub pool_max_open: u64,
    pub pool_max_idle: u64,
    pub pool_timeout_seconds: u64,
    pub pool_expire_seconds: u64,
}

/// Database settings for the entire app
#[derive(serde::Deserialize, Clone)]
pub struct DatabaseSettings {
    pub username: String,
    pub password: String,
    pub port: u16,
    pub host: String,
    pub database_name: String,
    pub require_ssl: bool,
}

impl DatabaseSettings {
    pub fn connect_to_db(&self) -> sqlx::postgres::PgConnectOptions {
        let ssl_mode = if self.require_ssl {
            sqlx::postgres::PgSslMode::Require
        } else {
            sqlx::postgres::PgSslMode::Prefer
        };
        let mut options = sqlx::postgres::PgConnectOptions::new()
            .host(&self.host)
            .username(&self.username)
            .password(&self.password)
            .port(self.port)
            .ssl_mode(ssl_mode)
            .database(&self.database_name);
        options.log_statements(tracing::log::LevelFilter::Trace);
        options
    }
}
We added RedisSettings and DatabaseSettings to our settings file and reflected the change in the global Settings struct. We also implemented a method, connect_to_db, for DatabaseSettings so that we can easily connect to our database using the credentials provided.
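As a quick aside, here is a minimal sketch of how connect_to_db could be used to verify those credentials outside the application, for instance from a throwaway examples/db_check.rs file. The file name and the check itself are purely illustrative assumptions; nothing in the project depends on them.

// examples/db_check.rs (hypothetical helper, not part of the project)
use sqlx::Connection;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    dotenv::dotenv().ok();
    let settings = backend::settings::get_settings().expect("Failed to read settings.");
    // Build connection options from our settings and open a single connection.
    let options = settings.database.connect_to_db();
    let mut connection = sqlx::postgres::PgConnection::connect_with(&options).await?;
    // A successful ping means the credentials in settings/base.yaml are usable.
    connection.ping().await?;
    println!("Database connection looks good.");
    Ok(())
}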
It's time to integrate these settings with our application. Open up src/startup.rs:
// src/startup.rs
...
pub async fn build(
    settings: crate::settings::Settings,
+   test_pool: Option<sqlx::postgres::PgPool>,
) -> Result<Self, std::io::Error> {
+   let connection_pool = if let Some(pool) = test_pool {
+       pool
+   } else {
+       get_connection_pool(&settings.database).await
+   };
+   sqlx::migrate!()
+       .run(&connection_pool)
+       .await
+       .expect("Failed to migrate the database.");
    let address = format!(
        "{}:{}",
        settings.application.host, settings.application.port
    );
    let listener = std::net::TcpListener::bind(&address)?;
    let port = listener.local_addr().unwrap().port();
-   let server = run(listener).await?;
+   let server = run(listener, connection_pool, settings).await?;
    Ok(Self { port, server })
}
...
+pub async fn get_connection_pool(
+    settings: &crate::settings::DatabaseSettings,
+) -> sqlx::postgres::PgPool {
+    sqlx::postgres::PgPoolOptions::new()
+        .acquire_timeout(std::time::Duration::from_secs(2))
+        .connect_lazy_with(settings.connect_to_db())
+}

async fn run(
    listener: std::net::TcpListener,
+   db_pool: sqlx::postgres::PgPool,
+   settings: crate::settings::Settings,
) -> Result<actix_web::dev::Server, std::io::Error> {
+   // Database connection pool application state
+   let pool = actix_web::web::Data::new(db_pool);
+   // Redis connection pool
+   let cfg = deadpool_redis::Config::from_url(settings.clone().redis.uri);
+   let redis_pool = cfg
+       .create_pool(Some(deadpool_redis::Runtime::Tokio1))
+       .expect("Cannot create deadpool redis.");
+   let redis_pool_data = actix_web::web::Data::new(redis_pool);
    let server = actix_web::HttpServer::new(move || {
-       actix_web::App::new().service(crate::routes::health_check)
+       actix_web::App::new()
+           .service(crate::routes::health_check)
+           // Add database pool to application state
+           .app_data(pool.clone())
+           // Add redis pool to application state
+           .app_data(redis_pool_data.clone())
    })
    .listen(listener)?
    .run();
    Ok(server)
}
We created a new function, get_connection_pool, that lazily connects our application to the DB and returns the connection pool for the app's use. Our run function was extended to take more parameters, such as the pool returned by the previously explained function. Since many endpoints will need access to the DB (and redis, to be created) pool, we need to make it available app-wide. To do this, actix-web provides an extractor, actix_web::web::Data<T>, to help share the application state "with all routes and resources within the same scope". We then used this API to create pool and redis_pool_data, which were attached to the application via App::app_data() (a sketch of how a handler extracts them comes right after the deadpool-redis installation below). For the build method, we also extended it to accept an optional argument, test_pool, which will be used when tests are being run. We also enabled automatic migration of the DB using the sqlx::migrate! macro. If you created your migrations folder somewhere other than the root directory, you must pass its path to this macro, for example sqlx::migrate!("./db/migrations"). Before we install deadpool-redis, let's update our src/main.rs one last time:
// src/main.rs
#[tokio::main]
async fn main() -> std::io::Result<()> {
    dotenv::dotenv().ok();
    let settings = backend::settings::get_settings().expect("Failed to read settings.");
    let subscriber = backend::telemetry::get_subscriber(settings.clone().debug);
    backend::telemetry::init_subscriber(subscriber);
-   let application = backend::startup::Application::build(settings).await?;
+   let application = backend::startup::Application::build(settings, None).await?;
    tracing::event!(target: "backend", tracing::Level::INFO, "Listening on http://127.0.0.1:{}/", application.port());
    application.run_until_stopped().await?;
    Ok(())
}
Since this is the real app, we set test_pool to None. Now, install deadpool-redis:
~/rust-auth/backend$ cargo add deadpool-redis
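With both pools registered as application state, any handler can later extract them through actix_web::web::Data<T>. Here is a minimal, hypothetical sketch; health_check_db and its route are illustrative only and not part of this article's code:

// A hypothetical handler demonstrating extraction of the shared pools.
#[actix_web::get("/health-check/db")]
pub async fn health_check_db(
    pool: actix_web::web::Data<sqlx::postgres::PgPool>,
    redis_pool: actix_web::web::Data<deadpool_redis::Pool>,
) -> actix_web::HttpResponse {
    // Run a trivial query against PostgreSQL via the shared pool.
    if sqlx::query("SELECT 1").execute(pool.get_ref()).await.is_err() {
        return actix_web::HttpResponse::InternalServerError().finish();
    }
    // Grab a redis connection from deadpool and PING the server.
    let mut conn = match redis_pool.get().await {
        Ok(conn) => conn,
        Err(_) => return actix_web::HttpResponse::InternalServerError().finish(),
    };
    let ping: Result<String, _> = deadpool_redis::redis::cmd("PING")
        .query_async(&mut conn)
        .await;
    match ping {
        Ok(_) => actix_web::HttpResponse::Ok().finish(),
        Err(_) => actix_web::HttpResponse::InternalServerError().finish(),
    }
}

Such a handler would still need to be registered with .service() inside run, just like health_check.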
Step 3: Create some DB tables for users
If this:
~/rust-auth/backend$ sqlx migrate add -r users_table
failed before, you can rerun it now. It's time to write some SQL. Open up migrations/*_users_table.up.sql:
-- migrations/*_users_table.up.sql
-- Add up migration script here
-- User table
CREATE TABLE IF NOT EXISTS users(
    id UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
    email TEXT NOT NULL UNIQUE,
    password TEXT NOT NULL,
    first_name TEXT NOT NULL,
    last_name TEXT NOT NULL,
    is_active BOOLEAN DEFAULT FALSE,
    is_staff BOOLEAN DEFAULT FALSE,
    is_superuser BOOLEAN DEFAULT FALSE,
    thumbnail TEXT NULL,
    date_joined TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX IF NOT EXISTS users_id_email_is_active_indx ON users (id, email, is_active);

-- Create a domain for phone data type
CREATE DOMAIN phone AS TEXT CHECK(
    octet_length(VALUE) BETWEEN 1
    /*+*/
    + 8 AND 1
    /*+*/
    + 15 + 3
    AND VALUE ~ '^\+\d+$'
);

-- User details table (One-to-one relationship)
CREATE TABLE user_profile (
    id UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL UNIQUE,
    phone_number phone NULL,
    birth_date DATE NULL,
    github_link TEXT NULL,
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);

CREATE INDEX IF NOT EXISTS users_detail_id_user_id ON user_profile (id, user_id);
As someone with some Django experience, I tend to like its default User model, and the users table above models it, except that username is replaced by email. I also like using UUID for my primary key.
From the SQL code, we have two simple tables: users and user_profile. user_profile has a one-to-one relationship with the users table, since a user can only have one profile. We also created a custom data type, phone, using SQL's DOMAIN. This allows us to place constraints on any text that will be stored as a phone number (the E.164 standard was used). Database indexes were also created.
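To get a small taste of how these tables will be queried from Rust, here is a hedged sketch; the helper user_exists is hypothetical, and the real queries arrive with the registration endpoint in a later article:

// Hypothetical helper: check whether an active user with the given email exists.
pub async fn user_exists(
    pool: &sqlx::postgres::PgPool,
    email: &str,
) -> Result<bool, sqlx::Error> {
    // Fetch at most one row; `Some` means the email belongs to an active user.
    let row: Option<(sqlx::types::Uuid,)> =
        sqlx::query_as("SELECT id FROM users WHERE email = $1 AND is_active = TRUE")
            .bind(email)
            .fetch_optional(pool)
            .await?;
    Ok(row.is_some())
}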
That's it for this article. See you soon.
Outro
Enjoyed this article? I'm a Software Engineer and Technical Writer actively seeking new opportunities, particularly in areas related to web security, finance, healthcare, and education. If you think my expertise aligns with your team's needs, let's chat! You can find me on LinkedIn and Twitter.
If you found this article valuable, consider sharing it with your network to help spread the knowledge!