Motivation
Seeing how mental health and dementia challenges affect people around me has deepened my compassion. I’m passionate about software development and want to contribute in a small, practical way—and I hope this can encourage others who care to take action too.
As a hobby project, I’m planning to build helloTali: a simple app to help people reflect on mood and cognition over time, and to better support caregivers. I shared the early planning here:
https://dev.to/harrykhlo/applying-vertical-slice-clean-architecture-planning-hellotali-2khh
To ground the idea in existing research (and avoid reinventing the wheel), I prepared a ChatGPT-assisted literature review on AI-Powered Applications for Mental Health and Dementia Care. The review is a starting point for planning the first version of helloTali and guides my design decisions. The header image was generated in NotebookLM using this literature review.
Introduction
AI technologies are increasingly embedded in digital platforms to support mental health and dementia care. This report examines four categories of applications – Web Apps, Mobile Apps, Wearable Devices, and Personal Alarm/Smart Home Systems – outlining use cases, AI contributions (improved care, early detection, personalization, safety), examples of implementations, evidence of effectiveness, key technical/design features, and a comparison of benefits and challenges. We also discuss limitations and future directions.
Web Applications in Mental Health and Dementia Care
Web-based platforms can deliver accessible mental health interventions and remote dementia support via browsers on computers or tablets. Use cases include online cognitive assessments, therapy chatbots, telehealth portals, educational resources for caregivers, and moderated support communities. AI is used to screen and triage users, provide therapy exercises, monitor user inputs for risk, and personalize content.
AI’s Role: In mental health, AI web apps often employ natural language processing (NLP) and chatbots to simulate therapeutic conversations or guide users through cognitive-behavioral therapy (CBT) exercises. For example, the NHS in the UK has integrated an AI-driven intake chatbot called Limbic Access to streamline psychotherapy referrals. This conversational AI conducts initial mental health assessments online and has been shown to improve service efficiency – after implementation, clinics saw shorter wait times, lower dropout rates, more accurate treatment allocation, and slightly higher recovery rates for patients[1]. AI can also support diagnosis and early detection: web-based cognitive testing platforms enhanced by machine learning can detect subtle patterns in test performance. Studies show that AI-based computerized cognitive tests for dementia improve discrimination sensitivity by ~4% and specificity by ~3% compared to traditional pen-and-paper tests[2]. Similarly, AI analyzing speech and language via web apps (e.g. asking a user to describe a picture or speak) can pick up early cognitive impairment – combining acoustic and linguistic speech features has achieved up to 94% accuracy in distinguishing dementia from normal aging[2].
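To make the speech-screening idea concrete, here is a minimal sketch of the kind of acoustic and linguistic features such systems combine. The two features shown (vocabulary diversity and mean pause length) are illustrative stand-ins; published systems extract dozens of features and feed them to a trained classifier.

```python
# Sketch of combining linguistic + acoustic speech features.
# Feature choices and inputs are illustrative, not the features used
# by any specific published dementia-screening system.

def speech_features(transcript: str, pause_times: list[float]) -> dict:
    """Derive two toy features from a picture-description task."""
    words = transcript.lower().split()
    return {
        # linguistic: shrinking vocabulary diversity can accompany decline
        "type_token_ratio": len(set(words)) / len(words),
        # acoustic: longer, more frequent pauses are a reported marker
        "mean_pause_s": sum(pause_times) / len(pause_times),
    }

feats = speech_features("the boy is is taking the the cookie", [0.4, 1.2, 0.9])
print(feats)
```

A real pipeline would pass many such features into a trained classifier rather than inspecting them directly.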
Notable Examples:
- Online Therapy Chatbots: Web chatbots like Woebot and Wysa are accessible via browsers (and mobile) and provide CBT-informed conversations. Woebot, for instance, engages users through text-based dialogue and employs NLP to respond with appropriate therapeutic techniques[9]. These services aim to be available 24/7 to supplement human therapy. Early evidence suggests such chatbots can deliver elements of CBT well enough to modestly reduce depression and stress levels[10]. In one trial, users of an AI chatbot for mental well-being reported a 34% improvement in mood and 32% reduction in anxiety after about five months[10].
- Digital Screening Tools: Web-based mental health assessment portals use AI to evaluate self-reported symptoms. For example, the “Limbic” AI system (used in IAPT services) automatically assesses questionnaire inputs. Its deployment was associated with a slight increase in patient recovery rate (from 47.1% to 48.9%) and freed clinicians from some administrative tasks[1]. Another AI-driven diagnostic tool was able to reach 89% accuracy in diagnosing mental disorders while asking only 28 adaptive questions online, showing how AI can streamline assessments[1].
- Caregiver Support Platforms: Web apps tailored for dementia caregivers use AI to provide personalized guidance. A 2020 review found many prototype systems that supply information on daily care, schedule reminders, or decision support for caregivers[3]. For instance, web platforms are being developed to connect caregivers with resources (legal, financial, respite services) matched to their context using AI recommendation agents[13]. Although many are still in trial phases, they show potential to reduce caregiver burden by offering real-time advice and education[3].
Effectiveness and Validation: Evidence for web AI tools is growing but mixed. On the positive side, AI chatbots have been found to engage users and reduce waitlists by handling mild cases or preliminary interactions[1]. Some controlled studies have shown symptom improvements – e.g. an RCT of the Tess chatbot with college students found significantly lower depression and anxiety symptoms in the chatbot group compared to controls[1]. Moreover, AI triage systems like Limbic have improved clinic operations and patient outcomes modestly[1]. However, no AI chatbot is yet approved as a standalone medical treatment, and human oversight remains critical[7, 8]. Clinical validation is underway: for example, ongoing trials are evaluating whether AI-driven online CBT platforms can maintain long-term efficacy and safety compared to human therapy. In dementia care, AI-enhanced cognitive tests and screening (often delivered via web) have shown higher sensitivity in identifying early Alzheimer’s than standard tests[2], and machine learning applied to brain scan data has reached ~92% accuracy in classifying Alzheimer’s in research settings[2]. These are promising, but larger validation studies and regulatory approvals will be needed before such tools become routine in practice.
Technical and Design Features: Successful web applications emphasize usability and privacy. They often incorporate simple, senior-friendly interfaces (large text, clear prompts) since caregivers or older adults may be users[6]. AI chatbots on web platforms use NLP models to interpret user text and sometimes sentiment analysis to tailor responses; for example, Woebot’s backend selects from a library of clinician-written response scripts based on the user’s input and mood, using classification algorithms to decide the best therapeutic technique for the moment[9]. Privacy is paramount: reputable platforms encrypt user data and comply with health data regulations (e.g. HIPAA in the US). Woebot Health explicitly treats even non-clinical user data as Protected Health Information and does not sell or share data, deploying measures like secure cloud environments and annual external audits[9]. Another critical feature is risk monitoring – web-based AI mental health agents are trained to detect “red flag” language (e.g. references to self-harm or severe distress). For instance, Woebot’s NLP algorithm can recognize potentially concerning language; if a user types something indicating crisis, the system immediately offers resources and emergency contact information[9]. This kind of safety net is vital since purely automated web services must still ensure user safety. Additionally, web apps in professional settings integrate with clinical workflows (e.g., exporting a summary to a clinician or EHR system) to augment human care rather than operate in isolation[1].
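The risk-monitoring step can be sketched as a minimal keyword screen. Real products such as Woebot rely on trained NLP classifiers rather than keyword lists; the terms and routing labels below are purely illustrative.

```python
# Minimal sketch of "red flag" detection in a chat pipeline.
# The keyword list is far too crude for production use; it only
# illustrates the routing decision described in the text.

RED_FLAG_TERMS = {"hurt myself", "end my life", "suicide", "self-harm"}

def screen_message(text: str) -> dict:
    """Return a routing decision for one user message."""
    lowered = text.lower()
    if any(term in lowered for term in RED_FLAG_TERMS):
        # Escalate: show crisis resources instead of a normal reply.
        return {"risk": "high", "action": "show_crisis_resources"}
    return {"risk": "low", "action": "continue_conversation"}

print(screen_message("I want to end my life"))
print(screen_message("I had a stressful day at work"))
```

In a deployed system this screen would run before the therapeutic response is generated, so crisis resources always take priority.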
Mobile Applications in Mental Health and Dementia Care
Smartphone apps are among the most widespread AI-driven mental health tools. They leverage the phone’s ubiquitous presence and built-in sensors to deliver interventions and monitor wellbeing continuously. Use cases on mobile include mood-tracking apps with AI insights, conversational agents (“AI therapists”) via text or voice, digital phenotyping that analyzes phone sensor data for mental state changes, cognitive training games that adapt to user performance, reminders and daily living assistance for dementia patients, and caregiver apps linked to smart sensors.
AI’s Role: Mobile apps can combine active user input with passive data collection to provide personalized and early detection features. For example, AI algorithms in a smartphone app might analyze typing speed and pattern, speech tone from voice notes, sleep and activity data from phone sensors, etc., to detect early warning signs of depression or mania. This approach, known as digital phenotyping, uses machine learning to find patterns correlating with mood changes (e.g. reduced texting frequency and late-night phone usage might flag worsening depression). In practice, apps like Mindstrong and BiAffect have explored these techniques, using AI to correlate keyboard dynamics with cognitive and mood states; for instance, irregular typing rhythms can indicate the onset of a manic episode[1]. AI on mobile also enables in-the-moment interventions – chatbots or virtual coaches can engage users through chat or even augmented reality. One well-known mobile AI therapist is Wysa, an app featuring a chatbot “penguin” that uses a mix of rule-based and machine learning NLP to converse with users, teaching coping skills. Similarly, the Woebot mobile app delivers short daily conversations that simulate aspects of CBT. Research indicates these AI-driven apps can produce modest but positive outcomes: early studies found chatbot apps could deliver CBT elements effectively enough to reduce self-reported depression and stress, albeit to a lesser degree than traditional therapy[10]. They excel in accessibility and engagement: one survey reported 80% of people who tried using ChatGPT on a mobile device for mental health advice felt it was a “good alternative” to therapy, at least for guidance or psychoeducation[10]. In dementia care, mobile apps often serve caregivers and patients by providing memory aids, reminders, and tracking. AI can personalize these functions – e.g., learning a particular patient’s daily routine and tailoring prompts accordingly.
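As a rough illustration of the digital-phenotyping idea, the snippet below derives two simple typing-rhythm features from key-press timestamps. The feature names and timestamps are invented for the example; real apps collect such signals passively and feed many features into a trained model that compares against the user’s own baseline.

```python
# Sketch of a digital-phenotyping feature: typing rhythm.
# Key-press timestamps (seconds) are assumed inputs; the two summary
# features below are illustrative, not those of any specific app.
from statistics import mean, pstdev

def typing_features(key_times: list[float]) -> dict:
    """Summarize inter-key intervals from one typing session."""
    gaps = [b - a for a, b in zip(key_times, key_times[1:])]
    return {"mean_gap": mean(gaps), "gap_variability": pstdev(gaps)}

# A downstream model (not shown) would compare these features against
# the user's own historical baseline rather than a population norm.
steady = typing_features([0.0, 0.2, 0.4, 0.6, 0.8])
erratic = typing_features([0.0, 0.1, 0.9, 1.0, 2.5])
print(steady, erratic)
```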
Notable Examples:
- AI Chatbot Therapists: Woebot (for depression/anxiety) and Wysa are popular smartphone apps that use conversational AI to guide users through therapeutic conversations. Woebot’s chatbot has been clinically evaluated in some studies; for instance, one trial found users experiencing significant reductions in anxiety after 2 weeks with the app[1]. These apps focus on short, frequent interactions and use reinforcement learning or scripted decision trees to keep users engaged. Another example is Tess, a mental health chatbot tested in a college student population – in a randomized study, those who chatted with Tess showed a greater decrease in depression and anxiety scores compared to a control group[1]. Replika (an AI companion app) and Youper (an AI CBT coach) are other real-world instances of mobile AI for mental wellness. They illustrate how AI can be available on-demand to talk users through difficult moments or exercises.
- Mood & Symptom Trackers: Apps like Earkick prompt users to log mood via text, voice, or even short video, then use AI to analyze the input. In the Earkick app (mentioned in Time magazine), a user can record a 20-second voice note about their day; the app’s AI analyzes it for anxiety indicators and provides a personalized recommendation within seconds[10]. According to the company’s data, using Earkick regularly for ~5 months led to a 34% mood improvement and 32% anxiety reduction on average among users[10]. While such claims need independent validation, they show potential for AI to track and improve emotional health by recognizing patterns (e.g., detecting stress in voice tone and suggesting a relaxation exercise). Another example is Mindstrong, which monitors metrics like typing speed, sleep duration (via phone motion sensors), and phone usage patterns; its AI model flags deviations that correlate with relapse of conditions like depression or schizophrenia, alerting clinicians for proactive care[1].
- Apps for Dementia Support: Mobile apps are also aiding dementia care. CogniCare, for instance, is a caregiver-focused app that uses AI recommendations to suggest activities for dementia patients and provides tailored caregiving tips (based on disease stage and the patient’s preferences). Some apps for patients serve as cognitive aides: they might use face recognition via the phone’s camera to help identify family members (displaying names/relation), or use voice assistants to guide a person through daily tasks step-by-step. While many such apps are in prototype stages, one innovative concept is the VisionXcelerate project (developed by student innovators) – smart glasses paired with a mobile app that employ AI and augmented reality to assist dementia patients[11]. The glasses can recognize faces/objects and remind the wearer who or what they are seeing, and the system’s virtual assistant (with a familiar loved-one’s voice) prompts the user to take medications or drink water at scheduled times[11]. The accompanying mobile app lets caregivers monitor the patient’s status remotely, receiving alerts if the patient misses a prompt or needs help[11]. This example showcases how mobile-connected AI can support both independence and safety – the glasses even include fall detection and GPS geofencing to alert caregivers if the person falls or wanders beyond a safe area[11]. Although VisionXcelerate is experimental, it points toward future mobile-integrated solutions that combine wearable sensors with smartphone AI.
Effectiveness and Validation: Many mobile mental health apps are being studied for efficacy. Generally, AI-augmented apps improve engagement and scalability of care, but their clinical effect sizes are often modest. A recent scoping review of 36 studies on AI digital mental health tools found they were most effective as support or augmentation, rather than standalone treatments[1]. Reported benefits included reduced wait times for therapy (by offering immediate AI support), increased user engagement in care, and improved symptom tracking over time[1]. For example, one analysis noted that an AI chatbot’s ability to encourage emotional disclosure led to higher user satisfaction and intention to keep using the tool[1]. Some mobile interventions show outcome improvements: in one study, an AI-based mobile mental health program using behavioral activation techniques led to significant mood improvements from pre- to post-usage in a pilot group[1]. Another found that personalizing the chatbot’s conversational style to match user personality traits (like conscientiousness) improved user engagement[1]. For dementia, rigorous data are scarcer since many apps are newer. However, studies on related tech (like reminders or tracking devices) have documented practical benefits such as reducing anxiety for carers and preventing dangerous situations (e.g., timely alerts for wanderers)[6]. What is clear is that user acceptance is crucial: surveys indicate many people (especially younger demographics) are open to AI mental health support – one survey of students showed increasing awareness and generally positive perceptions of conversational AI for mental health[1]. Still, clinical validation through RCTs and long-term outcome studies is ongoing. No app can replace human clinicians for complex cases, and some research warns that over-reliance on unregulated apps could be harmful if they give poor advice or fail to detect crises[12].
Thus, effectiveness seems greatest when mobile AI apps are integrated into a stepped-care model, providing immediate help and data while flagging those who need escalation to human care.
Technical and Design Features: Mobile mental health and dementia apps exploit smartphones’ rich sensor suite and connectivity. Key sensors include the accelerometer and gyroscope (for movement and gait tracking), GPS (for location – e.g., geofence alerts if a person with dementia wanders out of a safe zone), microphone (for voice input and even passive audio monitoring of tone or cries), and touchscreen interactions (typing dynamics, usage patterns). In fact, a 2023 review found accelerometer data was used in 91% of studies on wearable/mobile AI for depression/anxiety detection, and phone/wearable heart rate sensors (PPG) in about 45%[4]. Apps process these signals with machine learning models – often on backend servers due to computation needs – to classify mental states or predict risks. Common algorithms include Random Forests, SVMs, and neural networks, frequently achieving high accuracy in lab settings (e.g., a pooled analysis of wearable/mobile data models found a mean accuracy of ~89% for detecting depression vs. no depression in best-case models[5]). Many apps implement on-device AI for responsiveness: e.g., voice analysis might happen via an on-device model for privacy, while more complex pattern recognition might occur in the cloud. Personalization features are also embedded – apps may learn an individual’s baseline patterns and adjust alerts to reduce false positives (for instance, learning that a user is normally sedentary on Sundays so it doesn’t flag that as a depression sign). User experience (UX) design is crucial given the diverse user base (including elders who may not be tech-savvy). Apps often use simple chat interfaces or virtual avatar guides to make interaction intuitive and friendly. Privacy and security measures include requiring user consent for data collection, local data encryption on the device, and anonymizing data sent to servers. 
Some apps allow users to opt in to sharing data with clinicians or family (especially in dementia care scenarios). Safety mechanisms are built in as well: for example, if an AI mental health app detects keywords suggesting suicidal ideation, it may automatically display a crisis helpline button or notify a human responder (with prior consent). In caregiver apps, if a dangerous event is detected (like a fall or the person leaving the home at 2 AM), the app will send immediate push notifications or calls to designated caregivers. Many platforms also incorporate recommender systems – for instance, recommending specific coping exercises based on the user’s recent mood trend, or suggesting engaging activities for a person with dementia based on what has worked in the past. A study even explored AI-driven recommender systems to personalize therapy content in digital mental health apps and found promise in improving treatment engagement and outcomes[1].
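A rule-based version of such a recommender can be sketched in a few lines. Deployed systems learn these mappings from engagement and outcome data; the thresholds and exercise names below are invented placeholders.

```python
# Toy recommender: pick a coping exercise from a recent mood trend.
# Rules, cutoffs, and exercise names are illustrative placeholders,
# not the logic of any real app.

def recommend(mood_scores: list[int]) -> str:
    """mood_scores: last few self-reports on a 1 (low) .. 5 (high) scale."""
    recent = mood_scores[-3:]
    avg = sum(recent) / len(recent)
    declining = len(recent) >= 2 and recent[-1] < recent[0]
    if avg < 2.5:
        return "grounding_exercise"      # sustained low mood
    if declining:
        return "behavioral_activation"   # downward trend
    return "gratitude_journal"           # maintenance

print(recommend([4, 3, 2]))  # declining trend
```

A learned recommender would replace these fixed rules with a model trained on which suggestions actually improved engagement or mood for similar users.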
Wearable Devices in Mental Health and Dementia Care
Wearable technology – such as smartwatches, fitness bands, smart clothing, and sensor devices – provides continuous physiological and behavioral data that AI can analyze to support mental health and dementia care. Use cases for AI-powered wearables include early detection of mood or cognitive decline via passive monitoring, real-time safety alerts (e.g. fall detection, wandering alerts for dementia patients), and personalized interventions (like a calming vibration when stress is detected, or automated reminders triggered by sensor readings). Wearables can also encourage healthy routines (exercise, sleep) which benefit mental health, using AI coaching.
AI’s Role: The sensors on wearables collect rich biometric and activity data. AI comes into play by interpreting this data to infer mental or cognitive states that aren’t directly observable. For mental health, wearables capture signals like heart rate variability (anxiety or stress indicator), electrodermal activity (sweat/skin conductance, related to arousal), sleep patterns (disturbed sleep can herald mood episodes), activity levels (low activity might signal depressive withdrawal), and even voice tone (some wearables have microphones to analyze speech or vocal biomarkers of emotion). Machine learning models, often trained on clinical datasets, look for patterns correlating with conditions like depression, anxiety, or even schizophrenia. For example, one experimental AI model monitored wearable data (including vitals and movement via RFID tags) in psychiatric patients and could alert care teams to signs of agitation or self-harm risk, enabling timely intervention and improved patient safety[1]. In dementia, gait analysis via wearables is an emerging AI application: subtle changes in walking speed, stride, or balance captured by a smartwatch’s accelerometer could indicate cognitive decline or heightened fall risk. AI algorithms have been able to distinguish early Alzheimer’s patients from healthy controls by analyzing such motion data patterns[2]. Another role of AI is to predict future episodes – for instance, using today’s sensor data to predict the likelihood of a panic attack or a wandering event tomorrow, so preventive steps can be taken. In a scoping review of 69 studies on AI and wearables for anxiety/depression, about 19% of the systems aimed to predict future mental health states based on current and past biosignals[4]. This predictive power is unique to AI, as it can mine complex time-series data from wearables that humans would struggle to interpret. 
AI also helps in personalizing thresholds: one person’s “normal” heart rate might be high for another, so AI models can learn individual baselines and detect changes more accurately than fixed thresholds. Importantly for dementia care, AI-enabled wearables can automate safety monitoring – e.g., if an elderly person with dementia forgets to take their medication by a certain time (some wearables can detect medication box opening or use smart pill dispensers), the system can prompt them or alert a caregiver.
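In its simplest form, the personalized-baseline idea reduces to flagging readings that deviate from a user’s own history rather than from a fixed cutoff. This z-score sketch uses invented resting-heart-rate numbers; real wearables fuse many signals with trained models.

```python
# Sketch of learning a personal baseline instead of a fixed threshold.
# Readings are assumed resting heart rates (bpm); the z-score rule is
# a simplification of what trained models do.
from statistics import mean, pstdev

def is_anomalous(history: list[float], reading: float, z: float = 3.0) -> bool:
    """Flag a reading that deviates strongly from this user's own history."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > z

# The same 95 bpm reading is normal for user A but anomalous for user B.
user_a = [88, 92, 90, 94, 91]
user_b = [60, 62, 61, 63, 59]
print(is_anomalous(user_a, 95), is_anomalous(user_b, 95))
```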
Notable Examples:
- Smartwatches and Bands: Devices like the Apple Watch, Fitbit, or Samsung Galaxy Watch increasingly include health algorithms. While primarily fitness-oriented, they have been used in mental health research. The Apple Watch, for instance, can perform continuous heart rate and variability tracking; researchers have used these data along with activity and sleep metrics to train AI models predicting depression. In one study, a combination of smartwatch data and machine learning achieved around 80–90% accuracy in identifying individuals with depression vs. healthy controls[5]. Some commercial smartwatches now feature stress monitors (which are AI models inferring stress from heart rate patterns). Garmin and Fitbit devices will alert users to unusually high stress levels and even suggest guided breathing exercises – a simple form of AI-driven intervention. For dementia, fall detection is a key feature: the latest Apple Watch, for example, uses accelerometer and gyroscope data with an AI algorithm to detect hard falls and can automatically call emergency contacts if the person doesn’t move afterward. This has already saved lives in general populations and is highly relevant for dementia patients prone to falls. However, as discussed below, wearables alone have limitations for this group (e.g., remembering to wear the device).
- Biosensor Patches and Clothing: Beyond watches, there are wearable patches (e.g., a hypothetical “MoodPatch”) that stick to the chest to monitor physiological signals continuously. These can capture high-fidelity ECG, respiration rate, etc. AI algorithms on the paired smartphone or cloud can analyze this data to detect panic attack onset by spotting changes like rapid heartbeat and shallow breathing and then alert the user or their caregiver. Another example is smart clothing – e.g., socks with pressure sensors or insoles with IMUs (inertial measurement units) to detect gait changes in early Alzheimer’s. Research prototypes of smart insoles combined with AI have shown success in distinguishing dementia-related gait slowing and variability from normal aging, offering a potential early warning sign of cognitive decline[2].
- Dedicated Mental Health Wearables: A few startups have created wearables specifically for mental health. The Spire Stone (a small clip) was designed to detect anxiety through breathing patterns using an accelerometer; its AI-classified breathing data could notify users when their breathing indicated tension, prompting a relaxation exercise. Embrace by Empatica is a wristband originally made for epilepsy seizure detection (using AI on skin conductance and motion), but it has also been explored for detecting high stress and panic states. These wearables demonstrate specialized sensors (e.g., Embrace’s electrodermal sensor) and tailored AI models for mental health monitoring. In academic research, multi-sensor wearables have detected depression with impressive accuracy. One deep learning model using a combination of wearable motion and audio data could detect depression with up to 99% accuracy in a controlled dataset[1], though such performance is likely to be lower in real-world settings.
- Wearables for Wandering and Safety: For dementia, GPS tracker wearables (like pendants or bracelets) are common – while many operate on simple GPS location rules (alert if the person leaves a geofenced area), AI is starting to enhance them. For instance, integrating motion pattern recognition can differentiate purposeful walking from aimless wandering. Some devices learn the person’s daily walking routes; if the individual deviates oddly (e.g., pacing in a small area which might indicate confusion or searching behavior), AI could flag this as a potential pre-wandering state and alert a caregiver to check in before the person gets lost. Wearable tags in clothing combined with indoor location sensors (RFID, Bluetooth beacons) have been tested in memory care units: AI can map these indoor movements to detect patterns of agitation or exit-seeking behavior, helping staff intervene proactively[1].
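A basic geofence check, the building block these trackers start from, can be sketched with a great-circle distance test. The coordinates and radius are illustrative; the AI enhancements described above would layer motion-pattern analysis on top of this simple rule.

```python
# Geofence sketch: alert when a GPS fix leaves a safe radius around home.
# Coordinates and radius are invented for illustration; real trackers
# add dwell-time rules and learned motion patterns to cut false alarms.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 6371000 * 2 * asin(sqrt(a))

def outside_geofence(home, fix, radius_m=200):
    return haversine_m(*home, *fix) > radius_m

HOME = (51.5007, -0.1246)
print(outside_geofence(HOME, (51.5008, -0.1247)))  # a few meters from home
print(outside_geofence(HOME, (51.5100, -0.1246)))  # roughly 1 km away
```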
Effectiveness and Validation: Wearable-based AI holds great promise, and some evidence is emerging. A 2023 meta-analysis of wearable AI for depression detection found a pooled accuracy of about 89% (CI 83–93%) for the highest-performing algorithms across studies – meaning in research settings, wearables coupled with AI can correctly classify depressed vs. non-depressed individuals roughly 9 out of 10 times. Sensitivity (true positive rate) averaged ~87% and specificity ~93% for the best cases[5]. However, there was wide variability (some studies reported as low as ~56–70% accuracy on certain measures), indicating inconsistencies and potential bias in current research[5]. Indeed, many studies had small samples or homogeneous groups, so models may not generalize broadly[5]. For anxiety, similar progress is noted: wearables can often detect acute anxiety with physiological signals (like detecting a panic attack a few minutes before onset by rising heart rate and skin conductance). In dementia, wearables for fall detection and wandering prevention are effective when used, but adherence is a challenge (discussed below). There is evidence that technology including wearables can reduce certain dementia-related risks – e.g., one study noted that use of wearable monitoring devices and motion sensors addressed caregivers’ safety concerns about wandering, and overall such assistive tech can reduce caregiver stress and improve patient quality of life[6]. However, formal trials measuring outcomes like reduced fall rates or delayed institutionalization due to wearables are not yet common. A lot of validation comes from technical performance metrics (accuracy, recall, etc., of detection algorithms) rather than clinical endpoints. Nevertheless, early deployment projects are promising. For example, some memory care homes using wearable wander detectors combined with AI report fewer serious incidents (lost patients) by receiving early alerts. 
It’s also worth noting many wearables (like general fitness trackers) have FDA clearance for health metrics, but AI mental health interpretations of their data are not regulated yet. This means users and clinicians must interpret insights cautiously. Overall, wearables have shown they can detect patterns correlated with mental health and dementia changes, but proving that this detection improves outcomes (like preventing depression relapse or injuries) is the next step.
Technical and Design Features: Wearables bring unique technical considerations. They gather multimodal data – motion, physiological signals, sometimes environmental data (e.g., light exposure, noise via microphone). AI models for wearables often need to fuse these data streams. As noted in a scoping review, most algorithms in this domain are classic machine learning (decision trees, random forests, SVMs were the top methods) rather than massive deep learning, likely due to the relatively small data sets and the need for interpretable results[4]. However, deep learning (like convolutional or recurrent networks) is used in some cases, especially to handle complex time-series patterns. One technical challenge is ensuring real-time processing on device for immediate alerts. Some fall detection algorithms, for instance, run on the wearable or linked smartphone to issue an instant alarm without needing cloud computation. This requires optimizing models for low power and limited CPU. The trade-off is often between running a simpler threshold-based algorithm (low false negatives but many false alarms) vs. a smarter AI model (fewer false alarms, but needing more resources). New wearables leverage dedicated AI chips (the AltumView Sentinare sensor, for example, uses an onboard AI chip) to do privacy-preserving vision processing – converting video of a person into abstract stick-figure data on the device itself[14]. Battery life is another key design factor: algorithms may sample sensors strategically (e.g., monitoring heart rate continuously only during certain periods or in response to motion) to conserve power so that devices last longer between charges. For user comfort, wearables must be non-intrusive and easy to use. AI can help by enabling hands-free interaction – for example, a user might simply wear the device and the AI does everything in the background, rather than requiring frequent app checks.
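The threshold-versus-model trade-off can be illustrated with a crude two-stage detector: a cheap impact threshold gates a second check for post-impact stillness, trimming false alarms from dropped devices. All thresholds here are invented; real detectors train models on full accelerometer waveforms.

```python
# Two-stage fall-check sketch. Stage 1 is the naive threshold trigger;
# stage 2 adds a simple stillness heuristic standing in for the
# "smarter model". Thresholds are illustrative, not validated values.

def fall_suspected(accel_g: list[float],
                   impact_g: float = 3.0,
                   still_g: float = 1.1) -> bool:
    """accel_g: acceleration magnitudes (in g) sampled over a few seconds."""
    try:
        impact_idx = next(i for i, a in enumerate(accel_g) if a >= impact_g)
    except StopIteration:
        return False  # stage 1: no hard impact at all
    after = accel_g[impact_idx + 1:]
    # Stage 2: a real fall is usually followed by near-stillness (~1 g);
    # a dropped or thrown phone keeps moving or is picked up again.
    return len(after) >= 3 and all(a <= still_g for a in after)

print(fall_suspected([1.0, 1.0, 4.2, 1.0, 1.0, 1.0]))  # impact, then still
print(fall_suspected([1.0, 4.5, 2.0, 3.1, 1.9]))       # impact, then motion
```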
In context of dementia, device design might include simplified alerts (like a wearable that vibrates or lights up when the AI detects the person should do something, accompanied by a voice prompt from a nearby speaker). Integration with mobile apps and web dashboards is common: the wearable sends data to an app that uses AI to generate insights, which are then presented to the patient, caregiver, or clinician in an understandable format (e.g., “Today’s stress level: High, due to poor sleep and high heart rate variability” with suggestions). Privacy with wearables involves securing wireless data transmission (Bluetooth/Wi-Fi encryption) and often anonymizing data at rest. Importantly, wearables for dementia must account for cognitive limitations: devices are designed to be wear-and-forget, sometimes even hidden in jewelry or sewn into clothing, to avoid reliance on the user’s memory or willingness to use them[15]. This is where design overlaps with personal alarm systems, discussed next.
Personal Alarm Systems and Smart Home AI for Dementia and Safety
Personal alarm systems refer to devices or home installations that can automatically detect emergencies or dangerous situations and alert caregivers or emergency services. Traditionally, these include medical alert pendants (emergency button necklaces) and home monitoring sensors (motion detectors, door alarms). AI is now enhancing these systems by enabling passive, context-aware monitoring that does not rely solely on the user to trigger an alarm. This category overlaps with Ambient Assisted Living or smart home technologies for independent living. For mental health, personal safety systems might detect crises (e.g., a senior with depression who falls or a person with PTSD experiencing a nighttime panic episode picked up by a smart sensor). For dementia, the focus is on safety: fall detection, wandering prevention, medication adherence, and emergency response.
AI’s Role: AI enables personal alarm and home systems to go from basic threshold triggers to intelligent situation assessment. For example, a simple fall alarm might trigger whenever acceleration > X is detected; an AI-based fall detection uses more complex pattern recognition (from accelerometer or camera data) to distinguish a true fall from someone sitting down quickly or dropping their device, greatly reducing false alarms[15]. AI can also integrate data from multiple sensors: motion sensors in rooms, smart door locks, bed pressure sensors, etc., to understand the person’s activity. This way, the system might infer “it’s 2am, the bed is empty, the front door just opened” – an AI could label this pattern as a likely wandering event and immediately alert a caregiver before the person gets far[14]. In less urgent scenarios, AI can monitor activities of daily living via the environment – for instance, an ambient sensor notices the stove was left on too long without movement in the kitchen, and an AI could decide to turn it off or ping a caregiver. In mental health contexts, AI in smart speakers (like Amazon Alexa or Google Home) can potentially detect vocal signs of distress or calls for help. Some smart home systems listen for certain sounds (e.g., a yell or groan after a fall) and use AI speech recognition to determine if a distress signal was uttered. Personalization is also key: these systems can learn an individual’s routine. If an elderly person always enters the bathroom at 7am and takes 15 minutes, the AI can be set to check – if 7:30 comes and they haven’t left the bathroom (perhaps indicating a fall or confusion), it can trigger a “check-in” alert (first maybe calling into the bathroom via intercom, then alerting someone if no response). This adaptive learning greatly improves safety while minimizing nuisance alarms. 
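The multi-sensor inference above ("it's 2am, the bed is empty, the front door just opened") can be sketched as a simple rule over fused sensor states. This is a hypothetical illustration: the sensor names, the fixed night window, and the alert labels are assumptions, and a deployed system would learn the individual's routine rather than hard-code rules.

```python
from datetime import time

# Illustrative night window; a real system would learn this per person.
NIGHT_START, NIGHT_END = time(22, 0), time(6, 0)

def is_night(t):
    """The night window wraps midnight, so it's an OR of two comparisons."""
    return t >= NIGHT_START or t <= NIGHT_END

def assess_wandering(now, bed_occupied, front_door_opened):
    """Fuse bed-pressure and door-contact states into a caregiver action."""
    if is_night(now) and not bed_occupied and front_door_opened:
        return "alert_caregiver"   # likely wandering event in progress
    if is_night(now) and not bed_occupied:
        return "watch"             # out of bed at night: keep monitoring
    return "normal"
```

A learned version would replace the fixed rules with a model scoring how unusual the current sensor pattern is for this particular person and time of day.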
AI-driven voice assistants in alarms can also provide reassurance: e.g., if a dementia patient is agitated at night, an AI system might detect that via unusual movement and use a soothing voice prompt or turn on lights gradually (non-intrusive intervention), potentially preventing escalation.
Notable Examples:
- AI Fall Detection Sensors: HomeGuardian is an AI-powered fall detection device that exemplifies advancements over standard wearable fall alarms. It’s a standalone unit with an optical sensor (camera or depth sensor) and AI vision algorithms. HomeGuardian monitors the environment 24/7 and can recognize human figures and their posture. If it “sees” a person on the floor, it interprets this as a fall and sends an immediate alert to caregivers or a monitoring service[15]. Unlike a pendant, it is non-wearable and hands-free, meaning the person with dementia doesn’t need to remember to wear or activate anything[15]. Its AI is tuned to minimize false alarms by ignoring normal movements (sitting, kneeling) and focusing on true falls with an impact – the system continuously learns from the environment to improve accuracy over time[15]. It also addresses privacy by processing video internally: no footage is stored or sent to the cloud, only alerts[15]. Another example, AltumView Sentinare, similarly uses an AI camera that converts the person’s image to a stick-figure silhouette in real time to protect privacy, yet still detects falls, bed exits, and wandering events[14]. It can even detect if someone is slowly sliding off a chair (a slow fall) and not just sudden falls[14]. These AI sensors can cover an entire room or home if multiple units are placed, providing full coverage as opposed to a wearable that only works if worn and within range[15].
- Wandering Alert Systems: There are GPS-based personal alarms like SafeLink or Project Lifesaver devices which a person wears, and caregivers get an alert and location if they stray beyond a boundary. AI is improving these by analyzing movement patterns. For instance, an AI-enhanced system might notice that a person has been near the front gate repeatedly in a short span (possibly attempting to leave) and send a heads-up to the caregiver before they actually wander far. The aforementioned Sentinare system allows setting “regions of interest” – if a dementia patient enters a restricted area (like opens the front door or leaves their bedroom at night), it will generate an alert[14]. Similarly, Night Nurse (an AI system used in care facilities) is reportedly adding a wandering detection feature that uses computer vision to monitor patient movement in corridors and detect patterns indicative of wandering, with very few false alarms[16].
- Smart Home Integrations: Amazon’s Alexa Together service is a recent example integrating AI assistants with safety – while Alexa itself isn’t “AI monitoring” the person’s behavior deeply, it supports third-party fall detection sensors (like the Vayyar radar-based fall detector) that connect to Alexa. If a fall is detected, Alexa’s voice interface will attempt to ask the person if they need help and can call a 24/7 urgent response line or notify a family member automatically[17]. This approach merges AI detection (from the sensor) with an AI-driven communication channel (Alexa’s voice and call capabilities) to handle emergencies. Additionally, voice assistants can be programmed with routines: for example, if by 9am no motion is detected in the kitchen (meaning the person hasn’t started their normal routine), Alexa could verbally check in: “Good morning, are you okay?” and, if there is no response, alert a caregiver. These are logic-based but can be refined with AI learning an individual’s habits. Another interesting development is AI-enhanced security cameras (like Google’s Nest Cam with activity zones) that can send alerts if, say, an elderly person hasn’t returned to bed after an hour of roaming the house at night – these cameras use AI to classify persons vs. objects and can distinguish one person from another with face recognition, which might inform custom alerts.
Example of an AI-based fall detection system in use: The system’s depth camera analyzes an older adult’s movements, converting them into a stick-figure representation (bottom) to preserve privacy. If a fall is detected (“Incident Detected”), an alert is immediately sent to caregivers via a connected app[14].
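The escalating check-in routine described above (no kitchen motion by 9am, then a voice check-in, then a caregiver alert) amounts to a small state machine. This sketch is a hypothetical illustration, not any vendor's API: the deadline hour, the escalation delay, and the action names are assumptions.

```python
from datetime import datetime, timedelta

CHECKIN_DEADLINE_HOUR = 9            # hour by which kitchen motion is expected
ESCALATE_AFTER = timedelta(minutes=5)  # wait this long for a reply

def next_action(now, last_kitchen_motion, checkin_sent_at, user_responded):
    """Decide the system's next step from the current monitoring state."""
    if last_kitchen_motion is not None and last_kitchen_motion.date() == now.date():
        return "all_clear"           # normal routine already observed today
    if now.hour < CHECKIN_DEADLINE_HOUR:
        return "wait"                # too early to worry
    if checkin_sent_at is None:
        return "voice_checkin"       # "Good morning, are you okay?"
    if user_responded:
        return "all_clear"           # person confirmed they are fine
    if now - checkin_sent_at >= ESCALATE_AFTER:
        return "alert_caregiver"     # no reply: escalate to a human
    return "wait_for_response"
```

An AI-refined version would replace the fixed 9am deadline with a per-person model of when the morning routine usually starts.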
Effectiveness and Validation: Personal alarm systems are fundamentally about safety – their effectiveness is measured in avoided harms and response times. AI-based systems aim to improve detection accuracy and speed. Reports from providers like HomeGuardian claim a drastic reduction in false alarms compared to traditional sensors[16], meaning caregivers face fewer needless panic calls. Lower false alarms are not just a convenience; they ensure real alerts are taken seriously and reduce “alarm fatigue.” AI fall detectors using vision or radar have shown high sensitivity in lab tests (often >90% detection of actual falls) while keeping false positive rates low (near-zero in controlled scenarios) by differentiating activities[15]. The Sentinare device, for instance, underwent upgrades with the latest AI models to further cut false alarms, as noted in a 2025 firmware update[14]. In terms of outcomes, it is logical (though still being empirically studied) that faster detection of falls or wandering in dementia will reduce complications (e.g., long lies after a fall leading to dehydration or injuries, or lost individuals facing hazards). One challenge is user compliance with older alarm systems – studies find many seniors with dementia forget or refuse to wear their pendants or disable alarms[15]. AI ambient sensors sidestep this, and preliminary caregiver feedback indicates higher reliability since the system is always “on.” A systematic review of AI for Alzheimer’s caregivers noted that many prototypes successfully detected target events like falls with acceptable accuracy, but few had moved to wide deployment[3]. Those in pilot use have shown positive feasibility – caregivers report feeling more secure knowing an AI is watching out and often express satisfaction, though some studies reported mixed feelings, perhaps due to privacy concerns or trust issues[3].
As for mental health crises, it’s early – but there have been instances where smartwatches or home assistants detected an anomaly (like extreme heart rate or a user shouting) and helped initiate rescue. Overall, AI personal alarms appear effective in technical detection; the human outcomes (injury reduction, etc.) are expected to be beneficial, but large-scale data will come as these systems gain adoption.
Technical and Design Features: These systems blend hardware (sensors) with AI software. Sensor types include video cameras, depth sensors (which give a 3D map without fine details, used for privacy), infrared motion sensors, pressure mats (on beds or chairs), contact sensors (doors/cabinets), microphones, and radar (RF sensors that detect motion and even respiration through walls). Increasingly, sensors are paired to AI at the edge (on-device) to address privacy and latency. For example, the Sentinare’s camera feeds into an AI chip that outputs only stick-figure abstractions and event labels, never raw video[14]. This design allows placement even in private areas (bedrooms, bathrooms) without as much concern[14]. Connectivity is central: these alarms connect via home Wi-Fi or cellular networks to relay alerts. Many have backup batteries and cellular links for reliability. The AI algorithms used vary: computer vision models (often convolutional neural networks) analyze camera feeds for falls or unusual activity; time-series models analyze sequences from motion or pressure sensors. Some systems utilize fusion AI – combining different sensor data to improve confidence (e.g., both a motion sensor and sound sensor triggering at once might confirm an event). User interface considerations include how alerts are delivered (smartphone app notifications, automated phone calls, even integration with emergency services). The systems often provide a caregiver dashboard where they can tune settings (like alert sensitivities or define “safe zones” on a map). In terms of privacy and ethics, transparent design is important: families must be informed if cameras or trackers are in use, and many devices focus on just the needed data (e.g., only transmitting an alert with timestamp and type, not continuous monitoring to third parties). 
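The "fusion AI" idea above (a motion sensor and a sound sensor triggering together raising confidence in an event) can be sketched by combining per-sensor confidences under an independence assumption. The odds-based formula and the alert threshold here are illustrative assumptions, not any cited system's method.

```python
def fuse_confidences(sensor_probs):
    """Combine independent per-sensor event probabilities via their odds
    (a naive-Bayes-style pooling with a uniform prior)."""
    odds = 1.0
    for p in sensor_probs.values():
        p = min(max(p, 1e-6), 1 - 1e-6)  # clamp to avoid division by zero
        odds *= p / (1 - p)
    return odds / (1 + odds)

def should_alert(sensor_probs, threshold=0.9):
    """Alert only when the fused confidence clears the (assumed) threshold."""
    return fuse_confidences(sensor_probs) >= threshold
```

With this scheme a single moderately confident sensor (say 0.7 from motion alone) stays below the alert threshold, while two agreeing sensors (0.7 motion plus 0.8 sound) push the fused confidence above it, which is exactly the "confirmation" behavior described.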
Robustness is also a design focus; AI models must be tested for edge cases (ensuring a pet or a dropped object doesn’t trigger a false fall alert, for example). Developers have started employing large AI models and continual learning to improve robustness – the Sentinare recently incorporated a larger AI model to better ignore false triggers[14]. Another technical feature in some systems is two-way communication: if an alarm goes off, the system might automatically open an audio channel so the caregiver can speak to the person (“Mom, are you okay?”) or the device might play a message (“Help is on the way”). This requires integration of AI voice recognition to possibly interpret responses (if the person says “I’m fine”, the system might cancel an alert if confident). Lastly, security is crucial since these devices operate in homes – they employ encryption and secure protocols to prevent hacking or leakage of sensitive data (particularly for video/audio-based systems).
Comparative Benefits and Challenges Across Platforms
Each platform – web, mobile, wearable, and personal alarm/smart home – offers distinct advantages and faces specific challenges in mental health and dementia care. The following table provides a high-level comparison:
| Platform | Benefits (Strengths) | Challenges (Limitations) |
|---|---|---|
| Web Apps | • Wide Accessibility: Available on any browser, useful for reaching users who may not use smartphones (e.g. some older adults). • Rich Interface: Can provide detailed content (videos, interactive tests) on larger screens – good for cognitive assessments and caregiver education. • Integration: Easier to integrate with existing healthcare systems (e.g., hospital web portals) for telehealth and data sharing[1]. • AI Triage & Screening: Effective in early screening (e.g. online tests with ML scoring) and scaling therapy via chatbots, reducing wait times[1]. | • Digital Divide: Requires internet and some tech literacy – older dementia patients might struggle without assistance. • Engagement: Users may not consistently return to a website; drop-off rates can be high unless actively used (mobile apps tend to prompt more). • Privacy Concerns: Web apps handling sensitive data must ensure secure connections; users worry about data breaches online. • Lack of Personal Touch: Web chatbots might feel impersonal; building trust remotely is difficult, and some contexts (therapy) still benefit from human rapport[10]. |
| Mobile Apps | • Portability & 24/7 Use: Smartphones are with users all the time, enabling continuous support and monitoring (ideal for passive data collection and just-in-time interventions). • User Engagement: Can send push notifications, daily check-ins, and use gamification to keep users involved in their care. • Multimodal Sensors: Access to GPS, accelerometer, microphone, etc., allows rich data for AI to infer mood, activity, and social interaction patterns[4]. • Personalization: AI can learn from personal data and tailor content or alerts (e.g., detect rising anxiety and suggest a meditation via the app). • Scalability & Cost-Effectiveness: Once developed, an app can serve thousands with minimal marginal cost, helping address provider shortages[1]. | • Data Privacy & Security: Phones carry highly personal data; any misuse or breach can expose sensitive information (mental health data). Ensuring encryption, secure cloud storage, and user consent is complex and vital[9]. • Variable Quality & Regulation: App marketplaces have many unvetted mental health apps. Few are clinically validated or FDA-approved, so efficacy varies and users can’t easily tell which are evidence-based[7]. • User Adherence: Many download health apps but stop using them after a short time. Sustaining engagement is difficult, especially if the user’s motivation is low (e.g., in depression) or cognitive issues interfere in dementia. • Algorithm Bias and Accuracy: AI models may not be equally accurate for all demographics (e.g., different speech patterns or phone usage behaviors in diverse populations could lead to bias)[1]. Also, false alarms (e.g., wrongly flagging risk) can cause anxiety or alert fatigue. |
| Wearable Devices | • Continuous Objective Monitoring: Wearables provide real-time, objective data (heart rate, activity, sleep) without relying on self-report – valuable for early detection of issues (e.g., subtle sleep changes before a mood relapse)[4]. • Proactive Alerts: AI can trigger immediate alerts for physical signs of trouble (fall, arrhythmia, panic) and even call for help if the user is incapacitated (some smartwatches auto-dial 911 on hard falls). • Independence & Safety: For dementia, wearables like GPS tags allow freedom of movement with a safety net (caregivers can locate the person quickly if they wander). This can prolong independent living with less supervision stress[6]. • Personalization of Health Insights: Wearables enable personalized baselines – AI learns one’s normal patterns and detects significant deviations, which improves accuracy over generic thresholds[4]. • Encouraging Healthy Behavior: They can nudge users to meet exercise or sleep goals (benefiting mental health) and provide biofeedback (like calming breathing exercises when stress is detected). | • Adherence & Comfort: Users may forget to wear devices, refuse them, or charge them irregularly. Dementia patients often remove wearables due to discomfort or forgetfulness[15]. If the device isn’t worn/charged, it’s ineffective. • Limited Context Awareness: Wearables measure the body well but not the environment – e.g., they know if you fell, but a wearable alone doesn’t know why (AI might misinterpret vigorous exercise as a fall, or a high heart rate from fever as panic). They often need integration with contextual data for full insight. • Battery Life & Technical Issues: Continuous monitoring drains batteries. Critical safety wearables must be charged regularly, which can be a fatal flaw if a device dies when needed. Technical malfunctions (sensor errors) can also occur, potentially missing events or causing false alerts. • Privacy/Ethics: Constant monitoring of physiological data raises privacy issues: who owns the data? Are users comfortable being “tracked” even if it’s for health? In dementia, ethical concerns arise in tracking location or biometrics without the patient’s full understanding or consent. Carers must balance safety with respect for autonomy[6]. |
| Personal Alarms & Smart Home | • Hands-Free Safety Net: Does not rely on the individual to activate or even wear a device – ideal for dementia care (no compliance needed). AI sensors work in the background, automatically detecting problems[15]. • Comprehensive Environment Monitoring: Can cover entire living spaces (multiple rooms), detecting issues anywhere (bedroom, bathroom, outdoors) – something wearables might miss if forgotten off-body. • Rapid Emergency Response: AI detection is instantaneous and alerts can be sent immediately, potentially faster than a human noticing or the person calling for help. Faster response reduces harm (e.g., getting help minutes after a fall vs. hours). • Reduction in Care Burden: Family or professional carers gain peace of mind knowing an intelligent system is always watching for critical events, which can reduce their anxiety and allow them to rest (the system effectively acts as a tireless sentinel). • Adaptability: Systems can be customized to individual homes and routines (set specific alert rules) and scaled – additional sensors or integrations can expand functionality (e.g., adding stove sensors, door alarms). Modern AI systems can also update themselves to reduce false alarms over time[14]. | • Installation & Cost: Setting up a smart home system can be complex and costly (installing sensors/cameras, subscription fees for monitoring). Not all families can afford or manage the technology. • Privacy & Autonomy Concerns: Monitoring systems (especially cameras or microphones) can feel invasive. Constant surveillance might impinge on the person’s privacy/dignity[6]. There are also ethical questions about consent if the patient cannot fully agree to being monitored. • False Alarms & Technical Glitches: While AI reduces false alerts, it won’t eliminate them entirely. False alarms can distress users (e.g., loud alarms) and burden responders. Conversely, technical glitches or sensor misplacement could lead to missed detections. Ensuring reliability is paramount because lives may depend on it. • Acceptance & Trust: Some users (and caregivers) may be wary of relying on “AI” for life-and-death situations. Gaining trust in the system’s accuracy is a hurdle – hence many systems still involve professional monitoring services as a backup. Additionally, older individuals might resist having unfamiliar high-tech devices in their home unless clearly beneficial. |
As the table shows, web and mobile platforms excel at broad access and flexible support but face engagement and privacy issues. Wearables provide continuous personal data but suffer compliance problems and limited context. Smart home alarm systems offer hands-off safety and comprehensive monitoring but raise privacy and acceptability challenges. In practice, these platforms can complement each other – for instance, a dementia care plan might use a home AI sensor for falls (personal alarm), a wearable GPS for outdoor safety, a mobile app for caregiver updates, and a web portal for clinicians to review data. The goal is to combine strengths while mitigating limitations.
Limitations and Future Directions
Despite the promise of AI in mental health and dementia care, current applications have important limitations. Many AI tools are in early stages of validation. Clinical evidence of long-term effectiveness is limited – improvements in app engagement or diagnostic accuracy do not always directly translate to better patient outcomes yet. There is a need for more rigorous trials and real-world studies to demonstrate that these AI interventions can, for example, reduce suicide rates, delay dementia worsening, prevent hospitalizations, or improve quality of life in a sustained way[3]. Existing studies often have small sample sizes or narrow populations, leading to potential algorithmic bias. AI models trained on one demographic may perform poorly for others (e.g., a speech-analysis dementia model trained on English may fail for non-native speakers). This raises concerns about fairness and the risk of misdiagnosis or missed issues in underrepresented groups[1].
Privacy and ethical concerns are at the forefront. These systems handle extremely sensitive data – inner thoughts shared with a chatbot, GPS locations of a wanderer, audio from within one’s home. Users must trust that their data is secure and won’t be misused. Ensuring data security, anonymization, and informed consent is an ongoing challenge. For example, continuous monitoring may feel like a loss of privacy for patients; striking the right balance between safety and autonomy is key[6]. Some dementia patients or mental health service users might not fully comprehend the AI technology, so consent and transparency become tricky ethical areas (families and providers need to make decisions in the patient’s best interest).
User acceptance and usability are practical limitations. An AI app or device is only useful if people actually use it correctly. As noted, many older adults with cognitive impairment forget or refuse devices[15]. If an AI interface is too complex, it may add frustration rather than relieve it[6]. Therefore, co-design with end-users (patients and caregivers) is essential to create solutions that fit into daily life seamlessly[6]. Future designs should emphasize simplicity, customizability (to individual needs and cultural contexts), and adaptability as conditions progress[6]. For instance, technology might need to switch modes as dementia advances – from a memory training app in early stages to a safety monitoring tool in later stages.
Another limitation is integration into healthcare workflows. Many AI mental health tools operate in a direct-to-consumer space, separate from one’s doctors or medical record. This can lead to siloed information – e.g., a psychiatrist may not know that their patient’s app has been flagging increasing suicide risk. There is a future need to integrate AI app data into clinical decision-making (with proper consent), possibly through dashboards for clinicians or alert systems that involve healthcare providers. Some systems, like Limbic, are already integrated with clinical services and have improved efficiency in NHS clinics[1], pointing the way for others to follow.
Regulatory and safety oversight is still evolving. Agencies like the FDA are working on frameworks for “software as a medical device” and AI in healthcare. As of now (2025), no AI chatbot or mental health diagnostic app has full regulatory approval for standalone treatment[7]. In the future, we can expect stricter validation requirements, especially for tools that make diagnostic claims or autonomously intervene in care. This is good for ensuring efficacy, but current tools might need to be retooled to meet these standards.
Looking ahead, several future directions are promising:
- Multimodal and Multi-platform Integration: The next generation of solutions will likely combine data from wearables, phones, and home sensors to create a comprehensive picture of the individual’s status – an approach sometimes called the “digital phenotype”. For example, data from a smartwatch (heart rate), smartphone (social activity), and smart home (sleep patterns, mobility in home) could feed into one AI system for a dementia patient. By fusing these, the AI can more accurately detect changes (perhaps correlating a slight decline in walking speed, increased sedentary behavior, and more agitation at night as a pattern indicating disease progression). Multi-platform integration also means a person could seamlessly transition between tools: their web portal, phone app, and home devices all sync information. This holistic approach can improve accuracy and ensure continuity of support across contexts.
- Advanced AI and Personalization: The rise of large language models (LLMs) like GPT-4 opens new possibilities for mental health support. Future chatbots could be far more conversational and emotionally intelligent. Indeed, experiments with ChatGPT have shown it can simulate therapeutic conversations and even demonstrate higher-than-expected emotional insight in some evaluations[1]. However, careful fine-tuning and guardrailing are needed before such models can be safely deployed for therapy – issues of factual accuracy, handling of crises, and avoiding harmful content are paramount. On the personalization front, AI might adapt to each user’s personality and preferences; for instance, chatbots could adjust their tone (more formal vs. more friendly) to what the user engages with best[1]. Recommender systems might tailor not just content but also mode of intervention (maybe one person benefits from written exercises while another prefers mindfulness audio). For dementia, personalization may involve recognizing the person’s lifelong routines/habits and working within those to reduce confusion (e.g., tailoring reminders to their daily schedule and phrasing them in a familiar way, possibly even mimicking a loved one’s voice as the VisionXcelerate project did[11]).
- Enhanced Sensors and Modalities: Wearable and ambient sensors are continually improving. We may see non-invasive glucose and cortisol monitors for real-time stress biomarkers, or brain-computer interface headbands that detect early cognitive impairment via EEG patterns analyzed by AI. In homes, radar-based sensors can monitor people without cameras at all (preserving privacy while tracking movement and even respiration). The cost of sensors is likely to drop, making full smart-home coverage more feasible. Future personal alarm systems could incorporate robotics – e.g., a small robot that not only detects a fall via sound/vision but goes to the person and offers assistance or a video call connection to a doctor. Social robots might also provide companionship to address loneliness in dementia, something already trialed with simple pet-like robots that have shown mood benefits[6]. If imbued with AI, these robots could interact in a more human-like way, sensing when a person is distressed and responding accordingly.
- Focus on Preventive Care: A major promise of AI is shifting care to a preventive stance. For mental health, this means identifying risk of a crisis or relapse before it happens. We expect AI to improve at forecasting – for instance, predicting risk of a depressive episode weeks in advance by subtle trends, thus prompting a proactive therapy session or medication tweak. For dementia, preventive care might mean detecting preclinical cognitive decline (as some AI cognitive tests are attempting[2]) so that interventions and lifestyle changes can be applied early when they are most effective. AI could also help personalize lifestyle recommendations that have protective effects (diet, exercise, cognitive engagement), effectively serving as a coach for brain health.
- Improved Human-AI Collaboration: The future is likely not AI replacing human caregivers or clinicians, but augmented care. AI can handle monitoring and routine support, freeing up human professionals to focus on complex, high-empathy tasks. Therapists might use AI-generated analyses of a patient’s mood logs or speech to gain insights and make sessions more productive. Caregivers might rely on AI to watch over their loved one at night so they can sleep, improving their own well-being. There will be developments in interfaces that allow caregivers and clinicians to easily interpret AI outputs – e.g., dashboards highlighting key alerts, trends, and even AI suggestions (“Mrs. Smith’s agitation has been increasing in afternoons; consider engaging her in a calming activity around 3pm”). For this collaboration to work, trust and transparency must improve. Future AI systems will likely offer explanations for their alerts (to avoid the “black box” issue). For example, an AI might not just say “high fall risk” but explain “gait speed reduced 15% and step variability increased – correlating with higher fall risk”[2].
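Two ideas from the list above (fusing multi-platform signals against a personal baseline, and explaining alerts instead of emitting a bare "high fall risk" label) can be combined in one small sketch. The signal names, the z-score threshold, and the output phrasing are illustrative assumptions, not a description of any cited system.

```python
import statistics

def personal_baseline(history):
    """history: {signal: [daily values]} -> {signal: (mean, stdev)}.
    Each person's own data defines what 'normal' means for them."""
    return {s: (statistics.mean(v), statistics.stdev(v))
            for s, v in history.items()}

def flag_with_explanation(today, baseline, z_threshold=2.0):
    """Return (flagged, reasons): which signals deviate from this person's
    norm, expressed in human-readable terms rather than a bare label."""
    reasons = []
    for signal, (mean, sd) in baseline.items():
        if sd == 0:
            continue  # no variation in history: z-score undefined
        z = (today[signal] - mean) / sd
        if abs(z) >= z_threshold:
            direction = "above" if z > 0 else "below"
            reasons.append(f"{signal} {abs(z):.1f} SD {direction} personal baseline")
    return (len(reasons) > 0, reasons)
```

A typical day produces no flags, while a day with slower gait and unusual night activity is flagged with per-signal explanations a caregiver dashboard could display directly.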
In conclusion, AI-powered applications are rapidly enriching the toolkit for mental health and dementia care. They bring continuous support, objective monitoring, and personalization that were previously hard to achieve. Sources like the American Psychological Association note that AI is poised to aid in everything from diagnostic support to treatment personalization in mental healthcare[1]. Meanwhile, dementia care research highlights the potential of technology to address safety, social connection, and daily living support[6]. Still, it is clear that AI is not a magic fix – it must complement human care, not replace it[10]. Addressing limitations around privacy, equity, user experience, and rigorous validation will determine how fully these innovations realize their potential. With user-centered design and interdisciplinary collaboration among engineers, clinicians, and caregivers, AI can be developed in a way that enhances quality of care while safeguarding human elements of empathy and autonomy. The future likely holds a hybrid model: compassionate human caregivers supported by intelligent systems working in the background – together providing better outcomes and quality of life for people with mental health needs or dementia and those who care for them.
References
[1] Ni, Jia (2025). AI-Driven Digital Interventions in Mental Health Care: Mapping Applications Across Screening, Support, Monitoring, Prevention, and Clinical Education. Healthcare (Basel). Available: https://pmc.ncbi.nlm.nih.gov/articles/PMC12110772/
[2] Li R, et al. (2022). Applications of artificial intelligence to aid early detection of dementia: A scoping review on current capabilities and future directions. Journal of Biomedical Informatics, 127, 104030. DOI: https://doi.org/10.1016/j.jbi.2022.104030 (ScienceDirect: https://www.sciencedirect.com/science/article/pii/S1532046422000466)
[3] Xie B, et al. (2020). Artificial Intelligence for Caregivers of Persons With Alzheimer’s Disease and Related Dementias: Systematic Literature Review. JMIR Medical Informatics. Available: https://medinform.jmir.org/2020/8/e18189/
[4] Abd-Alrazaq A, et al. (2023). Wearable Artificial Intelligence for Anxiety and Depression: Scoping Review. Journal of Medical Internet Research. Available: https://www.jmir.org/2023/1/e42672/
[5] Abd-Alrazaq A, et al. (2023). Systematic review and meta-analysis of performance of wearable artificial intelligence in detecting and predicting depression. npj Digital Medicine. Available: https://www.nature.com/articles/s41746-023-00828-5
[6] Berridge C, et al. (2023). Technology for dementia care: what would good technology look like and do, from carers’ perspectives? BMC Geriatrics. Available: https://bmcgeriatr.biomedcentral.com/articles/10.1186/s12877-023-04530-9
[7] American Psychological Association (APA) Services. (2023). Using generic AI chatbots for mental health support: A dangerous trend. Available: https://www.apaservices.org/practice/business/technology/artificial-intelligence-chatbots-therapists
[8] Experiences of generative AI chatbots for mental health. (Year not specified in citation list). Available: https://pmc.ncbi.nlm.nih.gov/articles/PMC11514308/
[9] Woebot Health. (2025). Technology Overview. Available: https://woebothealth.com/technology-overview/
[10] Ducharme J. (2023). Can AI Chatbots Ever Replace Human Therapists? TIME. Available: https://time.com/6320378/ai-therapy-chatbots/
[11] New York Academy of Sciences (NYAS). (2023). Using Artificial Intelligence and Augmented Reality to Assist Dementia Patients. Available: https://www.nyas.org/ideas-insights/blog/using-artificial-intelligence-and-augmented-reality-to-assist-dementia-patients/
[12] The Guardian. (2025). AI chatbots are becoming popular alternatives to therapy… Available: https://www.theguardian.com/australia-news/2025/aug/03/ai-chatbot-as-therapy-alternative-mental-health-crises
[13] JMIR Research Protocols. (2024). Development and Evaluation of a Web-Based Platform for Personalized Educational and Professional Assistance for Dementia Caregivers: Proposal for a Mixed Methods Study. Available: https://www.researchprotocols.org/2024/1/e64127/
[14] AltumView. (2025). Sentinare product page. Available: https://www.altumview.ca/
[15] HomeGuardian. (2023). The Benefits of HomeGuardian’s AI-Based Fall Detection vs. Wearables. Available: https://www.homeguardian.ai/blog-posts/the-benefits-of-homeguardians-ai-based-fall-detection-vs-wearables
[16] HealthTech World (HTworld). (2023). AI fall detection tool reduces false alarms by a factor of a thousand. Available: https://www.htworld.co.uk/leadership/interviews/ai-fall-detection-tool-reduces-false-alarms-by-a-factor-of-a-thousand/
[17] SafeWise. (2022). What is Alexa Together? Amazon’s caregiver service. Available: https://www.safewise.com/what-is-alexa-together/
[18] Amazon Forum. Alexa Together question on fall detection. Available: https://www.amazonforum.com/s/question/0D56Q0000982U5CSAU/alexa-together-question-on-fall-detection?language=en_US