Introduction
In part 0, we laid a solid foundation for our proposed system: its structure, database schema, and other details were laid bare. This article builds on that foundation.
NOTE: The program isn't feature-complete yet! Contributions are welcome.
Source code
The source code for this series is hosted on GitHub via:
Sirneij / cryptoflow
A Q&A web application to demonstrate how to build a secure and scalable client-server application with axum and SvelteKit.

CryptoFlow is a full-stack web application built with Axum and SvelteKit. It's a Q&A system tailored towards the world of cryptocurrency!
I also have the application live. You can interact with it here. Please note that the backend was deployed on Render which:
Spins down a Free web service that goes 15 minutes without receiving inbound traffic. Render spins the service back up whenever it next receives a request to process. Spinning up a service takes up to a minute, which causes a noticeable delay for incoming requests until the service is back up and running. For example, a browser page load will hang temporarily.
Its building process is explained in this series of articles.
Implementation
Step 1: Cookies and user management
As stated in part 0, our system's authentication is cookie (session)-based. We now need to set that up. Right from the onset, we installed axum-extra (with the cookie-private and cookie features enabled) and tower-http (with the cors feature enabled). These will allow us to achieve our aims. Let's kick off by configuring our app to allow cookies and to let an origin (our frontend app) directly access the application. To do this, we'll add a key attribute to the app's state and make some configurations:
// backend/src/startup.rs
...
use axum::extract::FromRef;
...
#[derive(Clone)]
pub struct AppState {
    ...
    key: axum_extra::extract::cookie::Key,
}

impl FromRef<AppState> for axum_extra::extract::cookie::Key {
    fn from_ref(state: &AppState) -> Self {
        state.key.clone()
    }
}
...
async fn run(
    listener: tokio::net::TcpListener,
    store: crate::store::Store,
    settings: crate::settings::Settings,
) {
    let cors = tower_http::cors::CorsLayer::new()
        .allow_credentials(true)
        .allow_methods(vec![
            axum::http::Method::OPTIONS,
            axum::http::Method::GET,
            axum::http::Method::POST,
            axum::http::Method::PUT,
            axum::http::Method::DELETE,
        ])
        .allow_headers(vec![
            axum::http::header::ORIGIN,
            axum::http::header::AUTHORIZATION,
            axum::http::header::ACCEPT,
        ])
        .allow_origin(
            settings
                .frontend_url
                .parse::<axum::http::HeaderValue>()
                .unwrap(),
        );

    let app_state = AppState {
        ...
        key: axum_extra::extract::cookie::Key::from(
            std::env::var("COOKIE_SECRET")
                .expect("Failed to get COOKIE_SECRET.")
                .as_bytes(),
        ),
    };

    // build our application with a route
    let app = axum::Router::new()
        ...
        .layer(cors);
    ...
}
...
We allowed the basic HTTP methods, certain headers (particularly authorization), and our frontend application via the tower-http CorsLayer. We want our cookies to be very private, so we'll be using axum_extra::extract::cookie::PrivateCookieJar, which requires a key to encrypt and sign the cookies. The key can be generated via axum_extra::extract::cookie::Key::generate(), but I chose to derive it from a 512-bit (64-byte) cryptographically random string which I saved in .env; such a string can be generated with, for example, openssl rand -hex 64. An example of this is:
// .env
COOKIE_SECRET=3bbefd8d24c89aefd3ad0b8b95afd2ea996e47b89d93d4090b481a091b4e73e5543305f2e831d0b47737d9807a1b5b5773dba3bbb63623bd42de84389fbfa3d1
To inform PrivateCookieJar how to access the key from our AppState, we made this impl:
...
impl FromRef<AppState> for axum_extra::extract::cookie::Key {
    fn from_ref(state: &AppState) -> Self {
        state.key.clone()
    }
}
...
Now to user data management. For modularity, we'll have a users submodule in the routes module. The submodule will house all user-related handlers and will expose a Router instance specific to user management alone. To achieve this, create two files, src/routes/users/mod.rs and src/routes/users/login.rs. The former is a special filename used for module organization (I assume you already know this) while the latter houses the handler for user login. Before we write their code, we need to write some utilities that will make it compile and function. In the src/utils folder, create a password.rs file and make it look like this:
use argon2::{
    password_hash::{rand_core::OsRng, PasswordHash, PasswordHasher, PasswordVerifier, SaltString},
    Argon2,
};

#[tracing::instrument(name = "Hashing user password", skip(password))]
pub async fn hash_password(password: &[u8]) -> String {
    let salt = SaltString::generate(&mut OsRng);
    Argon2::default()
        .hash_password(password, &salt)
        .expect("Unable to hash password.")
        .to_string()
}

#[tracing::instrument(name = "Verifying user password", skip(password, hash))]
pub fn verify_password(hash: &str, password: &[u8]) -> Result<(), argon2::password_hash::Error> {
    let parsed_hash = PasswordHash::new(hash)?;
    Argon2::default().verify_password(password, &parsed_hash)
}
There are two functions there:

- hash_password: This hashes a plaintext password using the Argon2 hashing algorithm. We used the default settings of the argon2 crate, which uses v19 of Argon2id: memory cost is 19 * 1024 (19 MiB), number of iterations is 2, and the degree of parallelism is 1. This is as recommended by the OWASP Password Storage Cheat Sheet. The stringified result of this is what will be saved in the password attribute of the users relation.
- verify_password: A returning user will normally provide the plaintext password used during registration alongside the user's email. This password needs to be compared with the saved hash to ensure that it's correct. This function performs that check.
Next in the list of utilities is error handling. This part was heavily influenced by this example provided by the axum team.
NOTE: The code is just part of the implementation. Check out this file for the complete code.
// src/utils/errors.rs
use crate::utils::CustomAppJson;
use argon2::password_hash::Error as ArgonError;
use axum::{
    extract::rejection::JsonRejection,
    http::StatusCode,
    response::{IntoResponse, Response},
};
use serde::Serialize;

pub enum ErrorContext {
    UnauthorizedAccess,
    InternalServerError,
    BadRequest,
    NotFound,
}

pub enum CustomAppError {
    JsonRejection(JsonRejection),
    DatabaseQueryError(sqlx::Error),
    PasswordHashError(ArgonError),
    RedisError(bb8_redis::redis::RedisError),
    UUIDError(uuid::Error),
    Unauthorized(String),
    InternalError(String),
    BadRequest(String),
    NotFound(String),
    ReqwestError(reqwest::Error),
}

impl IntoResponse for CustomAppError {
    fn into_response(self) -> Response {
        // How we want error responses to be serialized
        #[derive(Serialize)]
        struct ErrorResponse {
            message: String,
            status_code: u16,
        }

        let (status, message) = match self {
            CustomAppError::JsonRejection(rejection) => {
                // This error is caused by bad user input so don't log it
                tracing::error!("Bad user input: {:?}", rejection);
                (rejection.status(), rejection.body_text())
            }
            CustomAppError::DatabaseQueryError(error) => {
                match &error {
                    sqlx::Error::RowNotFound => {
                        tracing::error!("Resource not found: {}", error);
                        (
                            StatusCode::NOT_FOUND,
                            "Resource not found or you are not allowed to perform this operation"
                                .to_string(),
                        )
                    }
                    ...
                }
            }
            CustomAppError::PasswordHashError(error) => match error {
                ArgonError::Password => {
                    tracing::info!("Password mismatch error");
                    (
                        StatusCode::BAD_REQUEST,
                        "Email and Password combination does not match.".to_string(),
                    )
                }
                ...
            },
            ...
        };

        (
            status,
            CustomAppJson(ErrorResponse {
                message,
                status_code: status.as_u16(),
            }),
        )
            .into_response()
    }
}

impl From<JsonRejection> for CustomAppError {
    fn from(rejection: JsonRejection) -> Self {
        Self::JsonRejection(rejection)
    }
}
...
impl From<(String, ErrorContext)> for CustomAppError {
    fn from((message, context): (String, ErrorContext)) -> Self {
        match context {
            ErrorContext::UnauthorizedAccess => CustomAppError::Unauthorized(message),
            ErrorContext::InternalServerError => CustomAppError::InternalError(message),
            ErrorContext::BadRequest => CustomAppError::BadRequest(message),
            ErrorContext::NotFound => CustomAppError::NotFound(message),
        }
    }
}
It's some straightforward code that allows many of the expected errors (from SQLx, bb8_redis, argon2, uuid, and others) to be gracefully handled.
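The (String, ErrorContext) conversion is what lets handlers turn any failure into a domain error with a single From call. Here is a stripped-down, self-contained replica of just that part (the axum/sqlx/redis variants are omitted, and find_user plus the email it accepts are purely illustrative):

```rust
#[derive(Debug, PartialEq)]
enum ErrorContext {
    UnauthorizedAccess,
    InternalServerError,
    BadRequest,
    NotFound,
}

#[derive(Debug, PartialEq)]
enum CustomAppError {
    Unauthorized(String),
    InternalError(String),
    BadRequest(String),
    NotFound(String),
}

impl From<(String, ErrorContext)> for CustomAppError {
    fn from((message, context): (String, ErrorContext)) -> Self {
        match context {
            ErrorContext::UnauthorizedAccess => CustomAppError::Unauthorized(message),
            ErrorContext::InternalServerError => CustomAppError::InternalError(message),
            ErrorContext::BadRequest => CustomAppError::BadRequest(message),
            ErrorContext::NotFound => CustomAppError::NotFound(message),
        }
    }
}

// A fallible step maps its failure into a domain error via `From`,
// the same pattern the login handler will use with `map_err`.
fn find_user(email: &str) -> Result<&'static str, CustomAppError> {
    if email == "admin@example.com" {
        Ok("admin")
    } else {
        Err(CustomAppError::from((
            "Invalid email or password".to_string(),
            ErrorContext::BadRequest,
        )))
    }
}

fn main() {
    assert_eq!(find_user("admin@example.com"), Ok("admin"));
    assert_eq!(
        find_user("nobody@example.com"),
        Err(CustomAppError::BadRequest(
            "Invalid email or password".to_string()
        ))
    );
    println!("conversions behave as expected");
}
```

Because the real CustomAppError also implements IntoResponse, returning one of these from a handler automatically becomes a well-formed JSON error response.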
For data extraction from requests' bodies, I also made a simple JSON extractor (for now) in src/utils/responses.rs:
use axum::{
    extract::FromRequest,
    http::StatusCode,
    response::{IntoResponse, Response},
};

use crate::utils::CustomAppError;
use serde::Serialize;

#[derive(FromRequest)]
#[from_request(via(axum::Json), rejection(CustomAppError))]
pub struct CustomAppJson<T>(pub T);

impl<T> IntoResponse for CustomAppJson<T>
where
    axum::Json<T>: IntoResponse,
{
    fn into_response(self) -> Response {
        axum::Json(self.0).into_response()
    }
}
...
It does what axum::Json would do, with one extra: any rejection is converted into our CustomAppError, so malformed bodies come back in our standard error format.
The next utilities will create and retrieve (for now) users from the database. To write them, recall the Store struct we implemented in part 0. We will extend it by adding methods that facilitate those operations:
// src/store/users.rs
use sqlx::Row;

impl crate::store::Store {
    #[tracing::instrument(name = "get_user_by_id", fields(user_id = id.to_string()))]
    pub async fn get_user_by_id(&self, id: uuid::Uuid) -> Result<crate::models::User, sqlx::Error> {
        sqlx::query_as::<_, crate::models::User>(
            r#"
            SELECT
                id, email, password, first_name, last_name,
                is_active, is_staff, is_superuser, thumbnail, date_joined
            FROM users
            WHERE id = $1 AND is_active = true
            "#,
        )
        .bind(id)
        .fetch_one(&self.connection)
        .await
    }

    #[tracing::instrument(name = "get_user_by_email", fields(user_email = email))]
    pub async fn get_user_by_email(&self, email: &str) -> Result<crate::models::User, sqlx::Error> {
        sqlx::query_as::<_, crate::models::User>(
            r#"
            SELECT
                id, email, password, first_name, last_name,
                is_active, is_staff, is_superuser, thumbnail, date_joined
            FROM users
            WHERE email = $1 AND is_active = true
            "#,
        )
        .bind(email)
        .fetch_one(&self.connection)
        .await
    }

    #[tracing::instrument(name = "create_user", skip(password), fields(user_first_name = first_name, user_last_name = last_name, user_email = email))]
    pub async fn create_user(
        &self,
        first_name: &str,
        last_name: &str,
        email: &str,
        password: &str,
    ) -> Result<crate::models::UserVisible, sqlx::Error> {
        sqlx::query_as::<_, crate::models::UserVisible>(
            r#"
            INSERT INTO users (first_name, last_name, email, password)
            VALUES ($1, $2, $3, $4)
            RETURNING
                id, email, first_name, last_name, is_active, is_staff, is_superuser, thumbnail, date_joined
            "#,
        )
        .bind(first_name)
        .bind(last_name)
        .bind(email)
        .bind(password)
        .fetch_one(&self.connection)
        .await
    }

    #[tracing::instrument(name = "activate_user", fields(user_id = id.to_string()))]
    pub async fn activate_user(&self, id: &uuid::Uuid) -> Result<(), sqlx::Error> {
        sqlx::query(
            r#"
            UPDATE users
            SET is_active = true
            WHERE id = $1
            "#,
        )
        .bind(id)
        .execute(&self.connection)
        .await?;
        Ok(())
    }

    #[tracing::instrument(name = "create_super_user_in_db", skip(settings), fields(user_email = settings.superuser.email, user_first_name = settings.superuser.first_name, user_last_name = settings.superuser.last_name))]
    pub async fn create_super_user_in_db(&self, settings: &crate::settings::Settings) {
        let new_super_user = crate::models::NewUser {
            email: settings.superuser.email.clone(),
            password: crate::utils::hash_password(settings.superuser.password.as_bytes()).await,
            first_name: settings.superuser.first_name.clone(),
            last_name: settings.superuser.last_name.clone(),
        };

        match sqlx::query(
            "INSERT INTO users
                (email, password, first_name, last_name, is_active, is_staff, is_superuser)
            VALUES ($1, $2, $3, $4, true, true, true)
            ON CONFLICT (email)
            DO UPDATE
            SET
                first_name = EXCLUDED.first_name,
                last_name = EXCLUDED.last_name
            RETURNING id",
        )
        .bind(new_super_user.email)
        .bind(&new_super_user.password)
        .bind(new_super_user.first_name)
        .bind(new_super_user.last_name)
        .map(|row: sqlx::postgres::PgRow| -> uuid::Uuid { row.get("id") })
        .fetch_one(&self.connection)
        .await
        {
            Ok(id) => {
                tracing::info!("Super user created successfully {:#?}.", id);
                id
            }
            Err(e) => {
                tracing::error!("Failed to insert user into DB: {:#?}.", e);
                uuid::Uuid::new_v4()
            }
        };
    }
}
We impl the Store struct so we can use its connection attribute to talk to the database directly via the SQLx crate. The methods peculiar to user management are get_user_by_id, get_user_by_email, activate_user, and create_super_user_in_db for now. They all do what their names imply. The last one is an administrative method that creates a user with "superpowers"; it will be used in the build method later on. All of these methods reference data models (typically structs) that we've not defined yet. Let's define them in src/models/users.rs:
// src/models/users.rs
#[derive(serde::Serialize, Debug, sqlx::FromRow)]
pub struct User {
    pub id: uuid::Uuid,
    pub email: String,
    pub password: String,
    pub first_name: String,
    pub last_name: String,
    pub is_active: Option<bool>,
    pub is_staff: Option<bool>,
    pub is_superuser: Option<bool>,
    pub thumbnail: Option<String>,
    pub date_joined: time::OffsetDateTime,
}

#[derive(serde::Serialize, Debug, sqlx::FromRow)]
pub struct UserVisible {
    pub id: uuid::Uuid,
    pub email: String,
    pub first_name: String,
    pub last_name: String,
    pub is_active: Option<bool>,
    pub is_staff: Option<bool>,
    pub is_superuser: Option<bool>,
    pub thumbnail: Option<String>,
    pub date_joined: time::OffsetDateTime,
}

#[derive(serde::Serialize)]
pub struct LoggedInUser {
    pub id: uuid::Uuid,
    pub email: String,
    pub password: String,
    pub is_staff: bool,
    pub is_superuser: bool,
}

#[derive(serde::Deserialize, Debug)]
pub struct NewUser {
    pub email: String,
    pub password: String,
    pub first_name: String,
    pub last_name: String,
}

#[derive(serde::Deserialize, Debug)]
pub struct LoginUser {
    pub email: String,
    pub password: String,
}

#[derive(serde::Deserialize, Debug)]
pub struct ActivateUser {
    pub id: uuid::Uuid,
    pub token: String,
}
They are just structs that derive serde's Serialize and/or Deserialize. Those deriving Deserialize are used for incoming requests, while those deriving Serialize will be used in responses. Most also derive Debug so they can be logged. Two implement sqlx::FromRow, which is required to pass them to sqlx::query_as. For this to work, ensure that the columns returned by the SQL statement have the same names as the struct's fields.
Having paved the way, let's write the login handler:
// src/routes/users/login.rs
use crate::models::LoginUser;
use crate::startup::AppState;
use crate::utils::verify_password;
use crate::utils::SuccessResponse;
use crate::utils::{CustomAppError, CustomAppJson, ErrorContext};
use axum::{extract::State, http::StatusCode, response::IntoResponse};
use axum_extra::extract::cookie::{Cookie, PrivateCookieJar, SameSite};
use time::Duration;

#[axum::debug_handler]
#[tracing::instrument(name = "login_user", skip(cookies, state, login))]
pub async fn login_user(
    cookies: PrivateCookieJar,
    State(state): State<AppState>,
    CustomAppJson(login): CustomAppJson<LoginUser>,
) -> Result<(PrivateCookieJar, impl IntoResponse), CustomAppError> {
    // Get user from db by email
    let user = state
        .db_store
        .get_user_by_email(&login.email)
        .await
        .map_err(|_| {
            CustomAppError::from((
                "Invalid email or password".to_string(),
                ErrorContext::BadRequest,
            ))
        })?;

    // Verify password on a blocking thread (argon2 is CPU-intensive).
    // Clone what the closure needs so `user` remains usable afterwards.
    let stored_hash = user.password.clone();
    let candidate = login.password.clone();
    tokio::task::spawn_blocking(move || verify_password(&stored_hash, candidate.as_bytes()))
        .await
        .map_err(|_| {
            CustomAppError::from((
                "Server error occurred".to_string(),
                ErrorContext::InternalServerError,
            ))
        })?
        .map_err(|_| {
            CustomAppError::from((
                "Invalid email or password".to_string(),
                ErrorContext::BadRequest,
            ))
        })?;

    // Generate a truly random session id for the user
    let session_id = uuid::Uuid::new_v4().to_string();

    // Save session id in redis
    let mut redis_con = state.redis_store.get().await.map_err(|_| {
        CustomAppError::from((
            "Failed to connect to session store".to_string(),
            ErrorContext::InternalServerError,
        ))
    })?;
    let settings = crate::settings::get_settings().map_err(|_| {
        CustomAppError::from((
            "Failed to read settings".to_string(),
            ErrorContext::InternalServerError,
        ))
    })?;
    let cookie_expiration = settings.secret.cookie_expiration;
    bb8_redis::redis::cmd("SET")
        .arg(session_id.clone())
        .arg(user.id.to_string())
        .arg("EX")
        .arg(cookie_expiration * 60) // cookie_expiration is in minutes; EX expects seconds
        .query_async::<_, String>(&mut *redis_con)
        .await
        .map_err(|_| {
            CustomAppError::from((
                "Failed to save session".to_string(),
                ErrorContext::InternalServerError,
            ))
        })?;

    // Create cookie
    let cookie = Cookie::build(("sessionid", session_id))
        .secure(true)
        .same_site(SameSite::Strict)
        .http_only(true)
        .path("/")
        .max_age(Duration::minutes(cookie_expiration));

    Ok((
        cookies.add(cookie),
        SuccessResponse {
            message: "The authentication process was successful.".to_string(),
            status_code: StatusCode::OK.as_u16(),
        }
        .into_response(),
    ))
}
Though long due to error handling, the concept is simple. The handler takes the PrivateCookieJar extractor (to extract and help propagate user cookies), the State extractor (to give the handler access to the AppState data), and the CustomAppJson extractor (to extract the request body). The body-consuming extractor must be positioned last; if not, the code will not compile! The handler returns a Result of the tuple (PrivateCookieJar, impl IntoResponse) or CustomAppError. For PrivateCookieJar to work, its "value must be returned from the handler as part of the response for the changes to be propagated". In the function body, we first try to retrieve the requesting user from the database via the email address. Failure to get the user leads to an error being returned; this is where our error-handling efforts start to shine. Otherwise, we proceed to verify the user's password. Verifying hashed passwords is CPU-intensive and can block the async runtime, so we moved the verification onto a blocking tokio thread via tokio::task::spawn_blocking. Next, every user should have a truly random and unique session identifier, and uuid came to the rescue! Since we need to store this session somewhere for subsequent validations for as long as the session lives, we chose to store it in redis. Another option is to store it in the PostgreSQL database, but that would be slower; a simple key/value store like redis is perfect! The standard Rust redis crate doesn't support pooling, which is important for a system that will serve a lot of traffic, so I opted for bb8-redis, which provides an async, tokio-based redis connection pool. We will set it up later, but for now, we are already using it to store the session_id as the key and the user_id as the value. After that, we built a cookie which encrypts the generated session_id. The cookie is made secure and imposes a strict same-site attribute. Of course, we made the cookie HttpOnly to prevent client-side scripts from accessing its embedded data. We gave it a global path and, using a configurable expiration period in minutes, set the cookie's max_age. As previously stated, the cookie jar must be returned as part of the HTTP response.
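We haven't written the session-validation side yet (that comes later in the series), but the flow the redis entry enables can be sketched with a plain HashMap standing in for the session store. Everything here (the type alias, login and validate names) is illustrative only:

```rust
use std::collections::HashMap;

// Stand-in for redis: session_id -> user_id, the same mapping `SET` created above.
type SessionStore = HashMap<String, String>;

// On login, mint a session id and record who it belongs to.
// In the real handler the id is `uuid::Uuid::new_v4()`; a deterministic
// string keeps this sketch dependency-free.
fn login(store: &mut SessionStore, user_id: &str) -> String {
    let session_id = format!("session-for-{user_id}");
    store.insert(session_id.clone(), user_id.to_string());
    session_id
}

// On later requests, the session id decrypted from the cookie is looked up:
// a hit yields the user id, a miss means the session expired or never existed.
fn validate(store: &SessionStore, session_id: &str) -> Option<String> {
    store.get(session_id).cloned()
}

fn main() {
    let mut store = SessionStore::new();
    let sid = login(&mut store, "42");
    assert_eq!(validate(&store, &sid), Some("42".to_string()));
    assert_eq!(validate(&store, "forged-or-expired-id"), None);
    println!("session lookup ok");
}
```

Redis adds what the HashMap lacks: the EX argument expires entries automatically, so a session dies server-side even if the client keeps the cookie.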
Next up, let's route this handler:
// src/routes/users/mod.rs
use axum::{routing::post, Router};

mod login;

pub fn users_routes() -> Router<crate::startup::AppState> {
    Router::new().route("/login", post(login::login_user))
}
users_routes will build all routes related to user management. Since the routes use the AppState, the returned router must specify it in its type. Ensure you make users_routes available by exporting it in the main routes/mod.rs.
To conclude this long journey, let's set up the bb8-redis connection and include users_routes in the main router instance!
// src/startup.rs
...
#[derive(Clone)]
pub struct AppState {
    ...
    pub redis_store: bb8_redis::bb8::Pool<bb8_redis::RedisConnectionManager>,
}
...
impl Application {
    ...
    pub async fn build(
        settings: crate::settings::Settings,
        test_pool: Option<sqlx::postgres::PgPool>,
    ) -> Result<Self, std::io::Error> {
        ...
        sqlx::migrate!()
            .run(&store.clone().connection)
            .await
            .expect("Failed to migrate");

        // Create superuser if not exists
        store.create_super_user_in_db(&settings).await;
        ...
    }
}

async fn run(
    listener: tokio::net::TcpListener,
    store: crate::store::Store,
    settings: crate::settings::Settings,
) {
    let redis_url = std::env::var("REDIS_URL").expect("Failed to get REDIS_URL.");
    let manager =
        bb8_redis::RedisConnectionManager::new(redis_url).expect("Failed to create redis manager");
    let redis_pool = bb8_redis::bb8::Pool::builder()
        .max_size(15)
        .build(manager)
        .await
        .expect("Failed to create redis pool.");
    ...
    let app_state = AppState {
        ...
        redis_store: redis_pool,
    };

    // build our application with a route
    let app = axum::Router::new()
        ...
        .nest("/api/users", routes::users_routes())
        ...;
    ...
}
...
Since we want to automatically create the user with "superpowers", we called the utility method for doing that in the build method. In the run function, we retrieved our machine's redis instance URL from the .env file and created a new bb8 RedisConnectionManager from it. From the connection manager, we built a pool of 15 connections and added it to the application state. A nice improvement would be to make the pool size configurable. Lastly, we added our users_routes to the main router using the nest method, which helps keep our routing composable!
Let's stop here for this part. We'll continue in the next article!
Outro
Enjoyed this article? I'm a Software Engineer and Technical Writer actively seeking new opportunities, particularly in areas related to web security, finance, health care, and education. If you think my expertise aligns with your team's needs, let's chat! You can find me on LinkedIn and Twitter.
If you found this article valuable, consider sharing it with your network to help spread the knowledge!