Sprint Details
The Approach
AI-Assisted Hypotheses
Forum-Based Validation
Rapid Component Prototyping
Key Insight
Reachy Mini isn't a medical device; it's a companion. Like a pet, it could provide a friendly presence that doesn't judge or hover, filling an emotional gap often left by chronic illness.
Medical Safety & Ethics
The Medical Advice Trap: During this experiment I found that AI models are prone to suggesting treatments. Enforcing strict boundaries here will be one of my key challenges as I progress through the prototype, but it is essential: Reachy Mini should never provide medical advice, infer clinical diagnoses, or recommend medication changes. It is a tool for logging and scheduled assistance, not a substitute for a medical professional.
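One way to start enforcing that boundary is a response-side guardrail that intercepts anything resembling treatment advice before it is spoken. The sketch below is a minimal, hypothetical illustration of the idea: the pattern list and fallback wording are assumptions of mine, not the project's actual safety layer, and a real implementation would need a far more thorough clinical review.

```python
import re

# Hypothetical guardrail: patterns that suggest the model has drifted
# into medical advice. Illustrative only, not a complete safety list.
BLOCKED_PATTERNS = [
    r"\byou should take\b",
    r"\bincrease (your )?dose\b",
    r"\bstop taking\b",
    r"\bdiagnos(is|e|ed)\b",
    r"\brecommend(ed)? (a |the )?medication\b",
]

SAFE_FALLBACK = (
    "I can log that for you, but I can't give medical advice. "
    "Please speak to your doctor or care team."
)

def enforce_boundary(response: str) -> str:
    """Return the response unchanged, or a safe fallback if it looks
    like medical advice (the robot logs and reminds, never treats)."""
    lowered = response.lower()
    if any(re.search(p, lowered) for p in BLOCKED_PATTERNS):
        return SAFE_FALLBACK
    return response
```

A keyword filter like this is only a first line of defence; in practice it would sit alongside prompt-level constraints and human review of transcripts.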
Explore the Code
This project is fully open-source. Explore the codebase on GitHub or use DeepWiki's AI to discover implementation details.
Living with the Fog
Imagine waking up and forgetting the purpose of the device in your hand. For people living with CADASIL, cognitive decline isn't just a medical term: it's a wall. It isolates patients, drains carers, and turns simple daily tasks into sources of friction.
CADASIL Condition Snapshot
Prevalence in symptomatic patients. These figures drive the need for low-cognitive-load, accessible interfaces.
Sources: NCBI GeneReviews - CADASIL, NCBI StatPearls - CADASIL
A Fortnight to Build Something Real
In early 2026, I heard about the NVIDIA GTC Golden Ticket Developer Contest. Pollen Robotics and Hugging Face are among the sponsors, jointly selecting a winner for projects built around the Reachy Mini robot. The challenge was clear: build something meaningful by mid-February.
Back in October 2025, I had already been accepted into the Pollen Robotics beta community, giving me a head start to build, play, and provide feedback before production units shipped. But two weeks is still two weeks. With no immediate access to a user group and a rare disease that fluctuates daily, I needed a way to bridge the empathy gap fast, without sacrificing the rigour that CADASIL patients deserve.
Enter Reachy Mini Minder
I used AI to bootstrap a set of research hypotheses, grounding the results in my own lived experience as someone who has witnessed the effects of CADASIL first-hand. It wasn't about replacing humans; it was about organising empathy so I could start building a prototype that actually mattered.
The Personal Why
This project hits home. I suffer from CADASIL myself, and I saw my father pass from it before he reached 60. I understand the terrifying isolation of cognitive decline. That experience is my North Star: it helps me sift through AI suggestions to find what a family truly needs to survive the day-to-day.
The Approach: AI as a Launchpad
Instead of a traditional research phase, I used AI to analyse medical literature, forum transcripts, and accessibility guidelines. Within hours, I had generated two core user personas, Elena and Maya, that felt real enough to design for.
These weren't static profiles; they were discussion artefacts. They let me start thinking about whether I should prioritise screen-free interactions and kinetic alerts for users like Elena, or explore voice-activated generative UI. I needed to ensure that the robot's physical presence was an asset, not a sensory burden.
Why Voice-First?
Migraine with aura causes visual disturbances that make bright screens painful. For CADASIL patients, this is the norm, not the exception.
Design Implication: When 80–90% of migraine episodes include visual aura, a screen-based interface becomes a barrier, not a solution. Voice-first interaction removes this friction.
Source: NCBI GeneReviews - CADASIL
Defining the Hypotheses
Elena: The Solo Navigator

"The migraines aren't just headaches; they take my words and my movement. I need Maya to know the difference."
- CADASIL Focus: Managing post-migraine "fog"
- Risk: "False alarm" fatigue when symptoms flare
- Pain point: Bright screens are physically painful
Maya: The Guardian

"I need to know if she's resting or if I need to call an ambulance from 200 miles away."
- Elena's daughter, living remotely
- Triage Need: Distinguishing episodes from emergencies
- Goal: Support without "hovering" or over-alerting
Why a Robot? The Embodied Advantage
A tablet app could do reminders. But for someone in cognitive "fog," the physicality of a robot changes everything.
| Feature | Static Tablet/App | Reachy Mini |
|---|---|---|
| Engagement | Passive (waits for user) | Proactive (finds/tracks user) |
| Migraine Care | Screen is a barrier/pain | Physical presence is the UI |
| Emergency | User must reach device | Device tracks user, detects event |
| Apathy Control | Easily ignored | Embodied "nudge" harder to ignore |
The Pet Parity: Presence over Utility
One of my strongest hypotheses is the "Pet Parity": for users like Elena, the isolation of cognitive decline is managed better by a presence than by a tool.
Much like a loyal dog, Reachy Mini is there to acknowledge, to wait, and to offer non-judgemental support. It doesn't hover or nag. It just... is there. And sometimes, that's exactly what you need.

The emotional value of presence: a companion that's simply there.
Is a Companion Robot Justified?
1 in 4 older adults worldwide experience loneliness, a condition now called the "geriatric giant" for its links to serious health risks.
But can a robot really help? Researchers analysed 19 studies involving over 1,000 older adults to find out.
What they found
Social robots meaningfully reduce loneliness
The effect was classified as "medium to large" by research standards: strong enough to matter in people's daily lives.
Care home residents saw even greater benefits than those living independently
All types worked: pet robots, humanoids, and voice assistants all showed benefits
What this means for Mini Minder: The evidence supports the core hypothesis: a companion robot's presence can reduce feelings of isolation, especially for those with limited social contact.
Source: Mehrabi & Ghezelbash (2025) - Meta-analysis of social robots and loneliness
Practical Robotics: First Steps for the Fog
1. Medication Reminder Rituals
Designing for the "fog" means moving beyond simple notifications. The plan is for Reachy Mini to use voice prompts to assist with user-defined pill-taking schedules. By turning a clinical necessity into a supportive ritual, we reduce the cognitive load of memory-intensive tasks without the AI ever prescribing or recommending medication plans.
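The ritual idea above can be sketched as a small schedule-checking loop. Everything here is an assumption of mine about how such a scheduler might look, not the project's actual code: the `ReminderRitual` model and `due_rituals` helper are hypothetical names, and the robot only speaks back what the user themselves configured.

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class ReminderRitual:
    label: str     # the user's own wording, e.g. "morning tablets"
    at: time       # user-defined time of day
    greeting: str  # supportive phrasing spoken before the prompt

def due_rituals(rituals, now: datetime, window_minutes: int = 5):
    """Return rituals whose scheduled time falls inside the check window."""
    due = []
    for r in rituals:
        scheduled = now.replace(hour=r.at.hour, minute=r.at.minute,
                                second=0, microsecond=0)
        minutes_late = (now - scheduled).total_seconds() / 60
        if 0 <= minutes_late < window_minutes:
            due.append(r)
    return due

def prompt_for(r: ReminderRitual) -> str:
    # A gentle ritual phrase, not a clinical instruction: the AI never
    # prescribes or adjusts the schedule itself.
    return f"{r.greeting} It's time for your {r.label}."
```

A loop like this would run on the robot, with the voice layer speaking `prompt_for(...)` for each due ritual.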
2. Voice-First Headache Journal
Since screens are painful during a CADASIL flare-up, I'm building a voice-activated journal. A simple "Hey Reachy, migraine starting" logs the event instantly. No screens, no buttons, no cognitive overhead. Just speak and it's recorded.
Validation Phase: What's Next
The next phase, which I'm currently launching, involves taking these personas and the prototype screenshots into CADASIL support forums.
I'll be asking real patients and carers: Does Elena ring true? Where did the AI miss the mark? Does the robot's physical presence actually feel less intrusive than a phone notification?
Forum Outreach
Active outreach within private Facebook CADASIL communities
Impact Report
Documenting what changed after real user feedback
Emerging Considerations
Even as I type out this case study, new questions are surfacing in my mind:
Voice-First for Vocal Exercise
For people who are isolated, a voice-first companion could provide a practical way to exercise and maintain their vocal abilities. Worth exploring whether this has therapeutic value beyond utility.
Privacy by Design
Ensuring users are in control of their data is non-negotiable. Local processing is a start, but I need to explore what "privacy" really means for someone who might share sensitive health information with a companion robot.
These are hypotheses, not conclusions. The art of the possible with this friendly desktop companion will become clearer as I build and learn.
Honest Reflection
AI personas are dangerous if you treat them as truth. They are invaluable if you treat them as fuel for speed.
During the 2026 NVIDIA contest, this approach allowed me to move from "clueless" to a functional spec in 48 hours. It didn't replace real humans; it just meant that by the time I finally speak to them, I'll have a prototype that might actually be worth their time.
Follow the Journey
This project is just getting started. As I move through design, architecture, and real-world testing, I'm planning to share the honest challenges of building assistive AI with privacy in mind: health data stored locally, transparency when using cloud AI for voice, and user control over sharing.
I'll be testing on two different Reachy Mini configurations: the wireless version (with Raspberry Pi 4 onboard) and the lite version (tethered to a MacBook). This lets me explore the tradeoffs between running local models on-device versus using cloud AI with PII redaction (via tools like Microsoft Presidio) to protect sensitive health information before it leaves the device. I'd love to hear what others are trying in this space!
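To make the redaction idea concrete, here is a minimal stdlib-only stand-in for the kind of pre-cloud scrubbing the project plans to do with Microsoft Presidio. The patterns and placeholder tokens are illustrative assumptions, and a real analyzer like Presidio is far more thorough; the point is only the shape of the pipeline, where text is scrubbed before any cloud call.

```python
import re

# Illustrative redaction rules, applied in order. A real deployment
# would use a proper PII analyzer (e.g. Microsoft Presidio) instead.
REDACTORS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\+?\d[\d\s-]{8,}\d\b"), "<PHONE>"),
    (re.compile(r"\b(?:Elena|Maya)\b"), "<NAME>"),  # known-user names, hypothetical
]

def redact(text: str) -> str:
    """Replace likely PII with placeholder tokens before any text
    leaves the device for a cloud model."""
    for pattern, placeholder in REDACTORS:
        text = pattern.sub(placeholder, text)
    return text
```

On the lite configuration the same function would sit between speech-to-text output and the cloud request; on the wireless version, local models may make the cloud hop (and so the redaction step) unnecessary.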
Connect on LinkedIn