Kazuya

AWS re:Invent 2025 - Detecting falls in aged care with a minimum lovable product (DEV337)

🦄 Making great presentations more accessible.
This project enhances multilingual accessibility and discoverability while preserving the original content. Detailed transcriptions and keyframes capture the nuances and technical insights that convey the full value of each session.

Note: A comprehensive list of re:Invent 2025 transcribed articles is available in this Spreadsheet!

Overview

📖 AWS re:Invent 2025 - Detecting falls in aged care with a minimum lovable product (DEV337)

In this video, the presenter shares how they built a fall detection system for aged care facilities using AWS IoT Core, inspired by their grandmother's fall incident. The session covers the evolution from MVP to MLP, starting with a Grafana dashboard connected to Athena and CloudWatch, then scaling to multi-tenant architecture. Key topics include enriching event payloads with metadata, implementing data partitioning in S3 using Firehose dynamic partitions with non-Hive format for better scalability, and building real-time escalation using SNS, SQS, Lambda, DynamoDB streams, EventBridge Pipes, and Step Functions. The presenter emphasizes using CloudWatch Metric Filters for custom metrics, understanding batch versus real-time processing needs, and the importance of event enrichment for debugging. The solution uses millimeter wave radar devices to detect falls through XY and Z coordinates, providing faster response times than traditional aged care technology like pressure mats and bed alarms.


This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

Thumbnail 0

From Personal Tragedy to Innovation: Addressing Falls in Aged Care with Millimeter Wave Radar

Hello. Thank you for choosing to attend my session. I'm going to start with a little story. About ten years ago, my grandmother was living in independent living, and she had a fall in the middle of the night. No one got to her until six to eight hours afterwards. She broke her shoulder and was never the same again. She had a button on the wall to ask for help, but obviously if you've fallen, you can't press the button and ask for help. That's part of the inspiration for my startup, which I'm going to talk about today.

Thumbnail 60

We're going to talk about the problem we're solving, which I've already given you an overview of. We're going to go through requirements and persona gathering to understand who cares about this sort of product. We're going to talk about minimum viable products and minimum lovable products—are they the same thing? Well, let's find out. We'll also cover enriching events. It's all about data and events. You need to know where these events come from and which systems they've passed through. We'll discuss data partitioning, cover a bit of real-time escalation, and finish with some key takeaways.

Thumbnail 80

The typical aged care room basically has some technology, but it's pretty antiquated. You've got pressure floor mats for identifying when people are walking in certain areas of the room or falling out of bed. You've got bed alarms—if someone sits up in bed or falls out of bed, you need someone there pretty quickly to look after them. There are also infrared switches that can be put on doors to know if someone's gone into a bathroom or elsewhere. The problem with all this technology is that it's old and can break down, and these are very expensive things to put into an aged care facility.

Thumbnail 130

Thumbnail 150

So how can technology help? I've drawn a diagram of an aged care room. They're all similar in layout: you've got a bed and a bathroom. Most falls happen at night, and falls are a leading cause of injury for people over sixty. The technology we're going to put in is a millimeter wave radar device. It looks like a little smoke alarm and goes in the ceiling. It emits radar that bounces off bodies and objects in the room. We can identify the XY coordinates of where a person is, how many people are in the room, and the Z coordinate, the center of mass, so you know if they're upright or if they've fallen. There's machine learning on board that determines whether someone has actually fallen and can send an event to get help.

Thumbnail 190

Thumbnail 220

Understanding your personas is important. You have two personas in an aged care facility. There's the facility manager, who cares about reporting and prioritizing care at the right place at the right time. They also need to record falls for compliance purposes in most countries. A nurse needs to know when someone has fallen and needs help, and again, prioritization of care. Both roles have requirements, but most of the facility manager's needs can be met by batch processing, while the nurse's needs have to be near real-time: as soon as someone falls, they need help, and you need to get to them. The current technology doesn't allow for that, but ours does.

Thumbnail 230

Thumbnail 260

Building the MVP: AWS IoT Core Architecture and Multi-Tenant Data Partitioning

AWS IoT Core is a key piece of this infrastructure. The device talks to IoT Core to send events. What is IoT Core? It's basically an MQTT and MQTT WebSocket compliant endpoint that's managed. You don't have to worry about managing your own MQTT endpoint—it provides all that for you. An IoT device publishes an event to this endpoint. There are rules that you can configure on how to deal with events as they come in, and then you have actions which give you ways to process these events.

Thumbnail 290

An event payload looks like this. You have a device ID, you have a location in the room—the X, Y, and Z coordinates in relation to the device. You get an event name, what the event was, the time it happened, and the type of event. A minimum viable product is when you spend the minimum amount of time coding something.
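As a rough sketch, a payload like the one described might look like this in Python. The field names here are assumptions for illustration, not the vendor's actual schema, and the `publish_event` helper shows where AWS IoT Core's data endpoint would come in (it needs real AWS credentials and is deliberately not called here):

```python
import json

# Hypothetical fall-detection event, modeled on the fields described
# in the talk: device ID, XYZ location, event name/type, and time.
event = {
    "deviceId": "radar-0042",
    "location": {"x": 1.8, "y": 2.4, "z": 0.3},  # metres, relative to the device
    "eventName": "fall_detected",
    "eventType": "fall",
    "timestamp": "2025-12-01T02:14:07Z",
}

def publish_event(payload: dict, topic: str = "devices/radar-0042/events") -> None:
    """Publish the event to AWS IoT Core. Sketch only: requires AWS
    credentials and a configured IoT data endpoint."""
    import boto3  # imported inside so the sketch stays runnable offline
    iot = boto3.client("iot-data")
    iot.publish(topic=topic, qos=1, payload=json.dumps(payload))

# The payload serializes cleanly to JSON for the MQTT message body.
message = json.dumps(event)
```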

Thumbnail 330

I used a Grafana dashboard, so I needed to get on board with a vendor who provides this device to make sure I could be a partner. To do that, I need to show I understand their products and I can query the data. This Grafana dashboard is literally just wired up to Athena and CloudWatch at the back end. CloudWatch for the graphs, and there's some reporting from Athena. It's very easy to wire up Grafana to AWS services, and it's a very easy way to show a dashboard.

Thumbnail 360

Thumbnail 370

I also did a little demo video featuring my son. He gets up out of bed and falls, and then an alert comes through. All of that was done via the Grafana dashboard and a Lambda function at the back end. I like architecture diagrams, so this is the original architecture diagram of what I built. You can see that the customer device talks to an IoT endpoint and goes through a transform Lambda function to send an alert. The alerts also go to CloudWatch Logs as a JSON payload, and here's something not as well known: you can define custom metric filters on those logs to derive CloudWatch metrics, which don't cost very much and are very useful.

Thumbnail 410

Thumbnail 430

That way I can get those graphs of people being in bed, and custom things like people in or out of the room. The events are also persisted for querying from Athena, going via Firehose to S3. The Grafana dashboard consumes all of this, and there's a little database that says who should get alerts. That's basically my MVP. The MVP allowed us to understand the data and query it intelligently, even answering business questions. I can show it to customers and ask what sorts of things they want to see, which helps the dialogue and discussion with potential customers.
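The metric-filter idea can be sketched as the arguments you'd pass to the CloudWatch Logs `put_metric_filter` API. The log group name, namespace, and JSON field names are assumptions for illustration:

```python
# Sketch of a CloudWatch Logs metric filter that turns JSON alert logs
# into a custom metric. Log group, namespace, and field names are
# assumed, not taken from the talk.
put_metric_filter_kwargs = {
    "logGroupName": "/aged-care/alerts",      # assumed log group
    "filterName": "presence-events",
    # Matches JSON log lines whose eventType field equals "presence".
    "filterPattern": '{ $.eventType = "presence" }',
    "metricTransformations": [
        {
            "metricName": "PresenceEvents",
            "metricNamespace": "AgedCare",    # assumed namespace
            "metricValue": "1",               # count one per matching line
        }
    ],
}

def create_filter(kwargs: dict) -> None:
    """Apply the filter. Sketch only: needs AWS credentials."""
    import boto3
    boto3.client("logs").put_metric_filter(**kwargs)
```

Graphing the resulting `AgedCare/PresenceEvents` metric is then just a normal CloudWatch (or Grafana) panel.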

Thumbnail 440

Thumbnail 450

Thumbnail 460

Thumbnail 470

However, there are some issues. In this case, the data is only set up for a single tenant. I can't scale this, and I can't scale Grafana dashboards. So I need to solve those problems. One of those problems was adding tenant context to the payload. Because the device doesn't understand the concept of tenants (it only understands itself as a device), I need to record the tenant somewhere. So I added a Lambda function to enrich the payload. The data goes in, it gets enriched in a Lambda function, and I'll show you an example of the enrichment in the next slide.
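A minimal sketch of that enrichment step might look like this. The `meta` shape and field names are assumptions; the real Lambda would receive the raw event from an IoT rule and republish the enriched copy:

```python
from datetime import datetime, timezone

def enrich(payload: dict, tenant_id: str) -> dict:
    """Add tenant context and trace metadata to a raw device event.

    A sketch of the enrichment Lambda described in the talk; the
    "meta" structure here is an assumed shape, not the actual one.
    """
    enriched = dict(payload)                  # don't mutate the input
    enriched["tenantId"] = tenant_id
    enriched.setdefault("meta", []).append(
        {
            "enrichedBy": "enrich-lambda",    # where this hop happened
            "enrichedAt": datetime.now(timezone.utc).isoformat(),
        }
    )
    return enriched

raw = {"deviceId": "radar-0042", "eventType": "fall"}
out = enrich(raw, tenant_id="sunrise-care")
```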

Thumbnail 500

Thumbnail 510

Thumbnail 550

That republishes to a different MQTT topic path which has the tenant at the beginning of the path, so I can easily query and see all the events coming through for the right tenant. Then a rule can grab that, and I've got a separate Firehose for each tenant, keeping all tenant data separate. I've also got an SNS topic as part of that to be able to do real-time processing for the events I care about. This is an example of querying the event stream as it comes in. topic(4) refers to the fourth segment of the topic path: counting between the slashes, segment 1 is the prefix, the first plus wildcard (segment 2) is the tenant ID, "devices" is segment 3, and the device ID after that is segment 4. That gets put into my payload as the device ID, and you can see I've got a Firehose delivery stream there keyed on topic(2), which is the tenant ID. So I've got one Firehose for every tenant, keeping the data separate.
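The topic(n) bookkeeping can be mirrored in plain Python. The topic layout below is a hypothetical one, since the talk doesn't show the exact path, but the 1-indexed counting matches the topic(n) function in AWS IoT rule SQL:

```python
def topic_segment(topic: str, n: int) -> str:
    """Return segment n of an MQTT topic path, 1-indexed, mirroring
    the topic(n) function available in AWS IoT rule SQL."""
    return topic.split("/")[n - 1]

# Hypothetical enriched topic: prefix / tenant ID / "devices" / device ID.
topic = "enriched/sunrise-care/devices/radar-0042"
tenant_id = topic_segment(topic, 2)   # the "+" wildcard slot in the rule
device_id = topic_segment(topic, 4)   # the second wildcard slot
```

In the IoT rule itself, the equivalent would be selecting from a wildcard topic filter like `enriched/+/devices/+` and referencing `topic(2)` and `topic(4)` in the SQL.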

Thumbnail 570

Thumbnail 590

Here's an example of an enriched event. You add a meta key and append to it as the data passes through your pipeline, so it records the time, where it got enriched, some trace IDs, and so on. When you're partitioning this data in S3, there are two ways to do it. There's the Hive style, where you label the partition, so year=2025; that's the traditional, standard way of partitioning. There's also non-Hive, where you use partition projection; CloudTrail and ALB logs are examples, and a lot of AWS services use that second method. The basic difference between the two is that with Hive style, you have to manually add the partition when you roll into a new day or new month, and you need to make sure queries scan that new partition.

Thumbnail 650

With partition projection, you can actually configure Athena with information about what your partitions are, whether they're dates or whatever, and it can work that out for you. You don't have to scan it, and you don't have to add partitions—it's all added for you. The second method is the best way to do it at scale, so you don't have to worry about missing partitions and missing data.
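The two layouts can be contrasted with a small sketch. The bucket prefix and column names are assumptions; the point is only the shape of the S3 keys:

```python
from datetime import date

def hive_prefix(event_date: date, device_id: str) -> str:
    """Hive-style partitioning: explicit key=value pairs in the path."""
    return (f"alerts/year={event_date.year}/month={event_date.month:02d}/"
            f"day={event_date.day:02d}/device={device_id}/")

def plain_prefix(event_date: date, device_id: str) -> str:
    """Non-Hive layout (the style CloudTrail and ALB logs use): bare
    values, resolved at query time via Athena partition projection."""
    return (f"alerts/{event_date.year}/{event_date.month:02d}/"
            f"{event_date.day:02d}/{device_id}/")

d = date(2025, 12, 1)
```

With the first layout you register each new partition; with the second, Athena computes the possible partition values from configuration, so nothing needs adding day by day.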

Thumbnail 660

Thumbnail 680

Firehose can do dynamic partitions. I just pull the year, month, day, and device ID out of my payload, so any JSON payload that goes through Firehose, you can pull out that information and use that as part of your partitioning. I tell it the path where I want to store my data. After doing that, I've got all the presence alerts for someone walking around, you've got a date, I've got a device, and then I've got data. So I can query by device, by date, very easily.
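Firehose dynamic partitioning is configured with a jq expression that pulls keys out of each JSON record, plus an S3 prefix template that references those keys. A hedged sketch of the two pieces, with assumed field names and layout:

```python
# jq query for Firehose's metadata-extraction processor: pulls the
# partition keys out of each JSON record. Field names are assumed.
metadata_extraction_query = "{device_id: .deviceId, event_type: .eventType}"

# S3 prefix template: !{partitionKeyFromQuery:...} references the keys
# extracted above, and !{timestamp:...} renders the delivery time.
s3_prefix = (
    "alerts/!{partitionKeyFromQuery:event_type}/"
    "!{timestamp:yyyy/MM/dd}/"
    "!{partitionKeyFromQuery:device_id}/"
)
```

The result is the layout described in the talk: event type, then date, then device, so you can query by device and by date very easily.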

Thumbnail 700

To be able to query that non-Hive partition format, you just put a configuration into Athena in the create table statement and you tell it what the format of the partition is—it's days, valid dates, all that sort of thing—and then it knows the rest. You don't have to do any more configuration.
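As a rough sketch, the partition projection configuration lives in the table's TBLPROPERTIES. The bucket name, columns, and date range below are assumptions for illustration:

```python
# Hedged sketch of an Athena CREATE TABLE using partition projection
# over a non-Hive date layout. Bucket, columns, and range are assumed.
create_table_sql = """
CREATE EXTERNAL TABLE alerts (
  deviceId string,
  eventType string,
  ts string
)
PARTITIONED BY (day string)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://example-alerts-bucket/alerts/'
TBLPROPERTIES (
  'projection.enabled' = 'true',
  'projection.day.type' = 'date',
  'projection.day.format' = 'yyyy/MM/dd',
  'projection.day.range' = '2024/01/01,NOW',
  'projection.day.interval' = '1',
  'projection.day.interval.unit' = 'DAYS',
  'storage.location.template' = 's3://example-alerts-bucket/alerts/${day}/'
)
"""
```

Athena then enumerates the valid `day` values itself, so no partitions are ever added by hand and none go missing.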

Thumbnail 720

Thumbnail 740

Real-Time Falls Escalation and the Journey from MVP to Minimum Lovable Product

The final part of the solution is falls escalation. That's a key point because you need to be able to escalate to different levels of people to make sure that someone gets to the person who's fallen. The falls alert will go to an SNS event topic, and you can actually have a queue that listens to a particular filter policy. In this case, my payload type is a fall, so I only care about fall alerts going to my queue. The SNS topic goes to a queue, then to a Lambda function, and then to a DynamoDB table.
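The filter policy itself is a small JSON document on the SQS subscription. Below, a sketch of such a policy plus a minimal re-implementation of the exact-match semantics, so the behaviour can be seen offline; the "type" attribute name is an assumption:

```python
# An SNS subscription filter policy letting only fall alerts through
# to the escalation queue. The attribute name is assumed.
filter_policy = {"type": ["fall"]}

def matches(policy: dict, attributes: dict) -> bool:
    """Minimal model of exact-string SNS filter matching: every policy
    key must be present in the message attributes with one of the
    allowed values. (Real SNS policies support more operators.)"""
    return all(attributes.get(key) in allowed for key, allowed in policy.items())
```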

Thumbnail 760

I have a listener on the DynamoDB table, which is an EventBridge Pipe. One of the good things about Pipes is that you used to need a Lambda function for this; now you can configure the pipe to listen to the stream directly. You can do some enrichment (I'm not doing any in this case), and then it kicks off a Step Functions flow, so it's a good way to wire these things up.
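A sketch of what that pipe configuration might look like as `create_pipe` arguments; the ARNs and names are placeholders, and the real setup would also need an IAM role with stream-read and state-machine-start permissions:

```python
# Sketch of an EventBridge Pipe from a DynamoDB stream to a Step
# Functions state machine. All ARNs and names below are placeholders.
create_pipe_kwargs = {
    "Name": "falls-escalation-pipe",
    "RoleArn": "arn:aws:iam::123456789012:role/pipe-role",
    "Source": ("arn:aws:dynamodb:us-east-1:123456789012:"
               "table/alerts/stream/2025-01-01T00:00:00.000"),
    "SourceParameters": {
        "DynamoDBStreamParameters": {"StartingPosition": "LATEST"}
    },
    "Target": "arn:aws:states:us-east-1:123456789012:stateMachine:escalation",
    "TargetParameters": {
        "StepFunctionStateMachineParameters": {"InvocationType": "FIRE_AND_FORGET"}
    },
}

def create_pipe(kwargs: dict) -> None:
    """Apply with boto3. Sketch only: needs AWS credentials."""
    import boto3
    boto3.client("pipes").create_pipe(**kwargs)
```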

Thumbnail 770

Thumbnail 810

The Step Functions flow looks like this. About six months ago, AWS updated the Step Functions console to provide a visual way of seeing your different steps and your Step Functions flow, and that's a nice visual way of seeing my path. The data goes through the first level, there's a delay, and then it goes to the second level, and then there's a delay and it goes to the third level. If someone acknowledges that alert, the Step Functions will finish. It checks to see if someone's acknowledged it first.

Thumbnail 820

Thumbnail 850

Originally it was a Grafana dashboard. Over the last few months I've been building out the website, and this is the settings screen, which shows the front-end user interface for the escalation flow. There's reporting via an activity timeline so you can see if someone's in bed or in a chair, and you get lots of dates and times of people moving around. This is very useful data, especially in an aged care facility, to understand patterns of movement in a room, and also potentially to send an alert if someone is doing something abnormal and should be checked on.

Thumbnail 880

There's a dashboard as well. When you're doing a product, I call that a Minimum Lovable Product. Minimum Viable Product is your quick and dirty Grafana dashboard. Minimum Lovable Product is the product that you want your users to use. Listen to their feedback and build it out based on what they want, not what you want.

Thumbnail 900

Thumbnail 910

Going low code is fast and gets buy-in. Building some quick dashboards—everyone loves dashboards, everyone loves data. Utilize CloudWatch Metric Filters to have custom filters for data that you care about. They're very underrated and very easy to do. Understand your batch and real-time needs. You could see that having real-time actually adds a bit of complexity, so you only need that for the things that you really need real-time for. Most of the time, batch is enough.

Thumbnail 920

Thumbnail 930

Thumbnail 940

Don't over-build for real-time when it's not needed. There are other ways to do that, but this is the way I've chosen to do it. Keep it simple and only build complexity in when you need to. Visualizing the events is super important, and enriching your event payloads is critical. Make sure that wherever your event goes in your system, you enrich it with a date and time and what you know was added to that event, because one day you'll wonder how this event got somewhere and you'll need to see the trace. That's very useful for debugging later.

Thumbnail 950

Thumbnail 980

An MVP proves it works, and an MLP makes it indispensable, or at least that's what I hope. I also have a podcast and I also have a co-founder who couldn't be here today. This is her part of the presentation. Come and listen—we talk about cloud and AI topics. Without Georgia helping me, this product would not exist. So thank you very much for listening. I hope you can take something away from this. If you've got any questions, I don't have time to take them here, but come and see me afterwards. Appreciate your time, thank you very much.


This article is entirely auto-generated using Amazon Bedrock.
