Scalable Wellness Data: Use the CQRS Pattern to Build Faster Health Dashboards

In the world of health and wellness applications, data is king. Users constantly log workouts, meals, and sleep, but they also expect to see beautiful, near-instant dashboards that track their progress over time.

Managing these two needs—frequent writes and complex analytical reads—often leads to performance bottlenecks in a traditional database. That tension calls for a more specialized approach if your app is to stay responsive as it scales.

If you are looking for a deep dive into building these systems, you can find the technical walkthrough in our guide to understanding your results.

The Problem: When One Database Isn't Enough

A typical wellness app handles two very different types of traffic. First, there are the simple "writes," such as logging a 10km run. Second, there are the "reads," like calculating a user's average running pace segmented by the day of the week over the last year.

When a single database tries to do both, it inevitably hits a limit. A model optimized for quick data entry (normalized) is often inefficient for complex analysis that requires joins and aggregations.
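
To make that concrete, here is a rough sketch of the kind of read query that strains a normalized transactional schema. The table and column names (workouts, pace_seconds_per_km) are hypothetical and purely for illustration; real dashboards often join several such tables on top of this.

```typescript
import { Pool } from "pg";

// Illustrative connection; adjust for your environment.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Average running pace per day of week over the last year.
// On a large, normalized dataset this aggregation competes directly
// with the steady stream of INSERTs coming from workout logging.
async function averagePaceByWeekday(userId: string) {
  const { rows } = await pool.query(
    `SELECT EXTRACT(DOW FROM started_at) AS day_of_week,
            AVG(pace_seconds_per_km)     AS avg_pace
       FROM workouts
      WHERE user_id = $1
        AND activity = 'run'
        AND started_at >= NOW() - INTERVAL '1 year'
      GROUP BY day_of_week
      ORDER BY day_of_week`,
    [userId]
  );
  return rows;
}
```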

As your user base grows, these high-intensity analytical queries can grind your application to a halt. This is why many engineers turn to the Command Query Responsibility Segregation (CQRS) pattern.

The Solution: Splitting Responsibilities

CQRS is an architectural pattern that separates the models for writing and reading data. Instead of one model handling every request, the system is split into two distinct sides:

  • The Command Side: Handles state changes like writes, updates, and deletes.
  • The Query Side: Optimized purely for retrieving and displaying data.

By using a transactional database like PostgreSQL for logging and a columnar database like ClickHouse for analytics, you can ensure both fast logging and lightning-fast dashboards.
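
Here is a minimal TypeScript sketch of that split. The LogWorkoutCommand and MonthlyTrendsQuery names are illustrative, not part of any framework—the point is simply that the two sides never share a model or a database connection.

```typescript
// Command side: state changes only, backed by the transactional store.
interface LogWorkoutCommand {
  userId: string;
  activity: "run" | "ride" | "swim";
  distanceKm: number;
  durationSec: number;
  startedAt: Date;
}

// Query side: read-only, shaped for the dashboard that consumes it.
interface MonthlyTrendsQuery {
  userId: string;
  year: number;
}

interface MonthlyTrendRow {
  month: number;
  totalDistanceKm: number;
  avgPaceSecPerKm: number;
}

// Each side gets its own handler abstraction.
interface CommandHandler<C> {
  execute(command: C): Promise<void>;
}

interface QueryHandler<Q, R> {
  execute(query: Q): Promise<R>;
}

// e.g. class LogWorkoutHandler implements CommandHandler<LogWorkoutCommand> { ... }
// e.g. class MonthlyTrendsHandler implements QueryHandler<MonthlyTrendsQuery, MonthlyTrendRow[]> { ... }
```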

Command vs. Query: A Quick Comparison

To help visualize how these two sides differ in a health tech environment, refer to the table below:

| Feature | Command Side (Write) | Query Side (Read) |
| --- | --- | --- |
| Primary Goal | Data Integrity & Consistency | Speed & Analytical Performance |
| Database Example | PostgreSQL | ClickHouse |
| Data Structure | Normalized (Clean tables) | Denormalized (Flat, fast tables) |
| Optimized For | LogWorkout, AddMeal | MonthlyTrends, YearlyAverages |

Connecting the System with Kafka

To keep these two sides in sync, we use an event bus like Kafka. When a user logs a workout, the record is saved to the PostgreSQL database, and an event is immediately published to Kafka.
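
A rough sketch of that command path, assuming the pg and kafkajs client libraries and a hypothetical workout-logged topic:

```typescript
import { Pool } from "pg";
import { Kafka } from "kafkajs";

// Connection details are illustrative; adjust for your environment.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const kafka = new Kafka({ clientId: "wellness-api", brokers: ["localhost:9092"] });
const producer = kafka.producer();

interface WorkoutLogged {
  userId: string;
  activity: string;
  distanceKm: number;
  durationSec: number;
  startedAt: string; // ISO timestamp
}

// Command side: persist the workout, then publish an event for the read side.
// Assumes producer.connect() has been awaited once at application startup.
async function logWorkout(cmd: WorkoutLogged): Promise<void> {
  const { rows } = await pool.query(
    `INSERT INTO workouts (user_id, activity, distance_km, duration_sec, started_at)
     VALUES ($1, $2, $3, $4, $5) RETURNING id`,
    [cmd.userId, cmd.activity, cmd.distanceKm, cmd.durationSec, cmd.startedAt]
  );

  await producer.send({
    topic: "workout-logged", // hypothetical topic name
    messages: [{ key: cmd.userId, value: JSON.stringify({ workoutId: rows[0].id, ...cmd }) }],
  });
}
```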

The "Read" side then listens to these events and updates its own tables. This architecture is associated with eventual consistency, meaning there may be a sub-second delay before a logged workout appears on a dashboard.

This minor trade-off allows each part of the system to scale independently, ensuring that a surge in users checking their stats won't crash the part of the app responsible for recording their health data.

Key Takeaways for Scaling Analytics

  1. Specialization Wins: Use PostgreSQL for transactional integrity and ClickHouse for high-speed data crunching.
  2. Decouple with Events: Use Kafka to bridge the gap between your write and read models without slowing down the user experience.
  3. Optimize for the User: Denormalizing data on the read side allows for complex aggregations that run in milliseconds, not minutes (see the sketch after this list).
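
To illustrate that last point, a dashboard query against the denormalized read table can skip joins entirely. Again, workout_facts and its columns are hypothetical, and the snippet assumes the official @clickhouse/client package.

```typescript
import { createClient } from "@clickhouse/client";

const clickhouse = createClient({ url: "http://localhost:8123" });

// Query side: the data is already flat, so monthly trends are a single scan.
async function monthlyTrends(userId: string, year: number) {
  const result = await clickhouse.query({
    query: `
      SELECT toMonth(started_at)              AS month,
             sum(distance_km)                 AS total_distance_km,
             avg(duration_sec / distance_km)  AS avg_pace_sec_per_km
        FROM workout_facts
       WHERE user_id = {userId:String}
         AND toYear(started_at) = {year:UInt16}
       GROUP BY month
       ORDER BY month
    `,
    query_params: { userId, year },
    format: "JSONEachRow",
  });
  return result.json();
}
```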

For a complete step-by-step tutorial on implementing this stack with Node.js and Docker, check out WellAlly’s full guide.
