MaxScale sits between your application and your MariaDB servers and acts as an intelligent proxy. It can route queries to different backends, balance load across replicas, handle failover and pool connections — all without changing a line of application code.
It is developed by MariaDB Corporation and is tightly integrated with MariaDB's replication and clustering features. But understanding what it can actually do for you requires looking at the specific problems it solves, not just the feature list.
Here are five real use cases where MaxScale earns its place in a production setup.
What MaxScale is and how it works
MaxScale is a database proxy daemon. It listens on a TCP port, accepts connections from clients, and forwards queries to one or more backend MariaDB servers based on routing rules you define.
The routing logic is handled by modules called routers. Each router implements a different strategy. You define which router to use in the MaxScale configuration file (maxscale.cnf), along with a list of backend servers and filters that can inspect or modify queries in transit.
The most commonly used routers are:

| Router | What it does |
|---|---|
| readwritesplit | Sends writes to the primary, reads to replicas |
| readconnroute | Routes each connection to a single backend; used for load balancing |
| schemarouter | Routes queries based on schema name; useful for multi-tenant setups |
| binlogrouter | Acts as a replication relay; useful for fan-out replication |
| kafkacdc | Streams row-level changes to Kafka topics |
Most production setups use readwritesplit as the primary router. The others address more specific needs, explored in the use cases below.
Use case 1: Read/write splitting
This is the most common reason people set up MaxScale. A typical MariaDB replication setup has one primary that handles writes and one or more replicas that handle reads. The problem is that your application has to know which server to talk to for each type of query.
MaxScale with the readwritesplit router removes that from the application entirely. You connect to a single MaxScale endpoint, and it figures out where each query goes.
Write statements (INSERT, UPDATE, DELETE, DDL such as CREATE) and explicit transactions opened with BEGIN go to the primary. SELECT queries go to replicas. MaxScale tracks the state of replication and avoids sending reads to replicas that are lagging too far behind.
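The routing decision can be pictured as a per-statement classification. The sketch below is a deliberately simplified illustration of that idea — MaxScale uses a full SQL parser, not prefix matching, and the prefix list here is not exhaustive:

```python
# Simplified sketch of the per-statement decision readwritesplit makes.
# MaxScale parses the SQL properly; prefix matching is illustration only.
WRITE_PREFIXES = ("INSERT", "UPDATE", "DELETE", "CREATE", "ALTER",
                  "DROP", "BEGIN", "START TRANSACTION")

def route_target(statement: str) -> str:
    sql = statement.lstrip().upper()
    if sql.startswith(WRITE_PREFIXES):
        return "primary"
    if sql.startswith("SELECT"):
        return "replica"
    # Anything ambiguous is sent to the primary to stay safe.
    return "primary"

print(route_target("SELECT * FROM users"))          # replica
print(route_target("UPDATE users SET name = 'a'"))  # primary
```

The key property is that the application never makes this decision — it sends every statement to one endpoint.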
A minimal configuration looks like this:
```ini
[Read-Write-Service]
type=service
router=readwritesplit
servers=primary,replica1,replica2
user=maxscale
password=maxscale_password
max_slave_replication_lag=30

[Read-Write-Listener]
type=listener
service=Read-Write-Service
protocol=MariaDBClient
port=3306
```
The max_slave_replication_lag setting ensures that if a replica falls too far behind, MaxScale stops routing reads to it and sends them to the primary instead. This prevents stale reads in applications where data freshness matters.
One edge case worth knowing: by default, after a write, subsequent reads in the same session may still go to a replica. If your application reads immediately after writing and expects to see its own write, you need to enable causal_reads:
```ini
causal_reads=true
```
This adds a small overhead but guarantees read-your-writes consistency within a session.
Use case 2: Load balancing across read replicas
When you have multiple read replicas, distributing read traffic evenly matters. Without a proxy, you either hardcode server addresses in your application or use DNS round-robin, both of which are crude tools.
MaxScale's readwritesplit router does weighted load balancing across replicas by default, using the current connection count on each replica to decide where to send the next query. Replicas with fewer active connections get more traffic.
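Combined with the lag cutoff from the previous section, the selection logic amounts to: discard replicas that are too far behind, then pick the least-loaded survivor. A sketch, with invented connection counts and lag values:

```python
# Sketch of least-connections replica selection with the lag cutoff that
# max_slave_replication_lag applies. All numbers here are invented.
replicas = [
    {"name": "replica1", "connections": 12, "lag_seconds": 2},
    {"name": "replica2", "connections": 7,  "lag_seconds": 5},
    {"name": "replica3", "connections": 3,  "lag_seconds": 45},  # too far behind
]

MAX_LAG = 30  # mirrors max_slave_replication_lag=30

def pick_replica(replicas, max_lag=MAX_LAG):
    eligible = [r for r in replicas if r["lag_seconds"] <= max_lag]
    if not eligible:
        return "primary"  # no usable replica: fall back to the primary
    return min(eligible, key=lambda r: r["connections"])["name"]

print(pick_replica(replicas))  # replica2: fewest connections among eligible
```

Note that replica3 loses despite having the fewest connections — freshness is checked before load.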
You can also assign explicit weights to replicas if they have different hardware capacities:
```ini
[replica1]
type=server
address=10.0.0.11
port=3306
priority=1

[replica2]
type=server
address=10.0.0.12
port=3306
priority=2
```
A higher priority value makes the replica preferred when routing new connections — though note that the weighting semantics of priority have changed across MaxScale versions, so check the documentation for your release.
For simpler setups — like a connection pooler in front of a single replica group — readconnroute with router_options=slave routes each new connection to a different replica in round-robin fashion. This works well for batch jobs or reporting queries that maintain long-lived connections.
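A minimal read-only service using readconnroute might look like this (the service names and port 3307 are illustrative, not required values):

```ini
[Read-Only-Service]
type=service
router=readconnroute
router_options=slave
servers=replica1,replica2
user=maxscale
password=maxscale_password

[Read-Only-Listener]
type=listener
service=Read-Only-Service
protocol=MariaDBClient
port=3307
```

Clients that connect to port 3307 each get pinned to one replica for the lifetime of their connection.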
Use case 3: Automatic failover and high availability
Without a proxy, a primary server failure requires either manual intervention or a custom script to promote a replica and update your application's connection strings. That window of downtime can stretch from seconds to minutes depending on how your setup is managed.
MaxScale includes a monitor module called mariadbmon that watches your servers, detects failures and can automatically promote a replica to primary without manual steps.
```ini
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=primary,replica1,replica2
user=maxscale
password=maxscale_password
monitor_interval=2000ms
auto_failover=true
auto_rejoin=true
```
With auto_failover=true, if the primary stops responding, MaxScale promotes the replica with the most up-to-date binlog position to primary and reconfigures routing automatically. Existing connections to the old primary are dropped, and new connections go to the new primary.
auto_rejoin=true handles the case where the old primary comes back online. Instead of creating a split-brain situation, MaxScale resets it and rejoins it as a replica under the new primary.
The monitor_interval controls how frequently MaxScale checks server health. At 2 seconds, failover is typically initiated within 4–6 seconds of the primary going down.
There are some caveats. Automatic failover works best when MaxScale has a clear picture of replication topology. If your replica is too far behind at the time of failure, MaxScale will wait (or refuse to promote it) to avoid data loss. You can control this with failover_timeout and replication_lag_error_threshold.
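The promotion logic described above can be sketched as: filter out candidates beyond the lag threshold, then promote the one with the most advanced replication position. This is a simplified model — real GTIDs are not plain integers, and mariadbmon considers more factors — but it captures the "refuse rather than lose data" behavior:

```python
# Sketch of promotion-candidate selection. Positions are simplified
# sequence numbers standing in for GTID positions; values are invented.
replicas = [
    {"name": "replica1", "gtid_seq": 4811, "lag_seconds": 1},
    {"name": "replica2", "gtid_seq": 4809, "lag_seconds": 3},
    {"name": "replica3", "gtid_seq": 3200, "lag_seconds": 120},
]

def promotion_candidate(replicas, lag_threshold=60):
    eligible = [r for r in replicas if r["lag_seconds"] <= lag_threshold]
    if not eligible:
        return None  # refuse to promote rather than risk data loss
    return max(eligible, key=lambda r: r["gtid_seq"])["name"]

print(promotion_candidate(replicas))  # replica1: most advanced eligible position
```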
Use case 4: Query routing by schema or user
As databases grow, teams sometimes split workloads across multiple MariaDB instances. An analytics team might have their own replica with custom indexes. A multi-tenant application might store each tenant's data in a separate schema on a different server.
MaxScale's schemarouter handles this by routing queries based on which database or schema they target.
[Schema-Router-Service]
type=service
router=schemarouter
servers=shard1,shard2,shard3
user=maxscale
password=maxscale_password
When a client runs USE analytics_db, MaxScale checks which backend server has that schema and routes subsequent queries there. Cross-schema queries on different backends are not supported, so you need to design your schema boundaries carefully.
For user-based routing, MaxScale filters can redirect specific users to specific backends. An example: route all connections from a reporting user to a dedicated read replica, bypassing the default routing logic entirely.
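Both ideas — schema-based and user-based routing — boil down to lookups consulted before the default routing logic. A sketch with invented backend and schema names (MaxScale implements the user-based part with filter modules, not application code):

```python
# Sketch: pick a backend by schema, with per-user overrides taking
# precedence. All names here are invented for illustration.
schema_map = {"tenant_a": "shard1", "tenant_b": "shard2", "analytics_db": "shard3"}
user_overrides = {"reporting": "reports_replica"}

def route(user, schema):
    if user in user_overrides:
        return user_overrides[user]
    return schema_map.get(schema)  # None means the schema is unknown

print(route("app", "tenant_b"))        # shard2
print(route("reporting", "tenant_b"))  # reports_replica: user override wins
```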
| Routing strategy | Router module | Best for |
|---|---|---|
| Writes to primary, reads to replicas | readwritesplit | Standard replication setups |
| Round-robin across replicas | readconnroute | Batch jobs, long-lived connections |
| Route by schema name | schemarouter | Multi-tenant or sharded setups |
| Route by user or query regex | readwritesplit + filter | Analytics isolation, resource governance |
Use case 5: Connection pooling
Every new database connection carries overhead: a thread is allocated on the server, memory is reserved and authentication runs. At high connection rates — hundreds of app servers each opening a pool of 10–20 connections — this adds up.
MaxScale can pool connections between itself and the backend servers, while presenting a normal connection interface to clients. The client opens a connection to MaxScale, MaxScale reuses an existing backend connection from its pool, and releases it back when the client disconnects.
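The mechanics of that reuse are simple to sketch: a borrow takes an idle backend connection if one exists, and only opens a new physical connection otherwise. Real MaxScale pooling additionally tracks sessions, authentication and connection state resets; this toy model shows only the reuse accounting:

```python
# Minimal sketch of backend connection reuse. The dicts stand in for
# real connections; only the open/reuse accounting is modeled.
class BackendPool:
    def __init__(self):
        self.idle = []    # connections ready for reuse
        self.opened = 0   # physical backend connections ever created

    def acquire(self):
        if self.idle:
            return self.idle.pop()  # reuse instead of reconnecting
        self.opened += 1
        return {"id": self.opened}

    def release(self, conn):
        self.idle.append(conn)

pool = BackendPool()
c1 = pool.acquire()
pool.release(c1)          # client disconnects; backend connection kept
c2 = pool.acquire()       # next client reuses the same physical connection
print(pool.opened)        # 1: two client borrows, one backend connection
```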
This is configured per service:
```ini
[Read-Write-Service]
type=service
router=readwritesplit
servers=primary,replica1
user=maxscale
password=maxscale_password
connection_keepalive=300s
max_connections=1000
```
MaxScale's administrative CLI, maxctrl, can be used to inspect connection and pool state at runtime, without restarting the service.
The benefit is most visible in environments where connection count is the bottleneck — typically applications with many short-lived requests, or microservice architectures where each service instance opens its own connection pool.
One thing to be aware of: MaxScale's connection pooling does not implement the same level of multiplexing as tools like ProxySQL or PgBouncer's transaction-mode pooling. If you need extremely aggressive connection reuse, those tools are worth evaluating. MaxScale's strength is routing intelligence, not raw connection density reduction.
A note on backups
MaxScale improves availability and routing, but it does not protect against data loss — for that you need a well-configured MariaDB backup strategy. Databasus is a dedicated MariaDB backup tool used by individual developers and larger teams alike; it handles scheduling, cloud storage integration and retention automatically.
Closing thoughts
MaxScale is a mature tool that solves real operational problems. Read/write splitting alone — even without failover or pooling — is valuable enough to justify deploying it in most replication setups. It removes backend topology details from application code and handles replication lag gracefully.
The failover capabilities are where it gets more interesting. Automatic promotion on primary failure, with automatic rejoin when the old primary recovers, covers the majority of common failure scenarios without custom scripts.
The main tradeoff is operational complexity. MaxScale is another process to run, monitor and configure. If your setup is a single primary with no replicas and no high availability requirement, it is not worth the overhead. But once you have replicas and care about read scalability or availability, it starts pulling its weight quickly.
