I spent part of last month reviewing a medication reminder app a founder sent me. It had reasonable onboarding, clean notifications, a neat streak counter, and a retention curve that fell off a cliff at around day 12. The founder wanted to know what was wrong with the UX. I think the UX was mostly fine. The problem was that the team had built for one of the three things that actually matter, and they'd done it very well, so they were convinced the other two weren't the bottleneck.
This is what happens when COM-B lives on a training slide instead of in the build.
The one behavior
Let me narrow this down to a single behavior, because COM-B only works if you do. The app wanted its users (mostly adults over 55, on a daily blood-pressure medication) to take one pill at 8pm every evening. That's it. One behavior, one time of day, one pill. Everything else the app did was in service of that.
I'm going to walk through what the team built for that behavior, and then what they ignored, using Capability, Opportunity, and Motivation as the three lenses.
What they built: the Capability layer
Capability in COM-B is whether the person has the physical and psychological ability to do the behavior. Do they know how? Do they have the dexterity? Do they understand what's being asked?
The team did a competent job here. The app had a large-print medication list. The reminder notification said "Time to take your lisinopril" in plain language, not "Medication event scheduled." There was a one-tap "Taken" button on the notification itself, which meant the user didn't have to open the app to log the dose. Onboarding had an optional video walkthrough. Accessibility was above average for health-tech, which is a low bar but still.
In usability testing, people nailed the task. Five out of five participants, first try, no confusion. The team took that as a green light. On launch week, activation was strong. People set up their medication, took the first few doses, and logged them.
Then day 12 happened. Roughly 40% of the cohort stopped logging doses. By day 30, it was closer to 65%.
The cliff
When I see a cliff like this, I stop looking at the onboarding. Onboarding is a Capability problem, and these users had proven they had the Capability. They logged a dose on day 2 and day 5. They knew how. The cliff is almost always Opportunity or Motivation erosion, and usually it's both.
So I did what I wish more product teams did at this point: I opened the BehaviorUX canvas (or pen and paper, same thing) and wrote out the other two columns.
The Opportunity audit
Opportunity is everything outside the person's head that enables or blocks the behavior. Physical environment, social environment, time of day, the presence or absence of the triggering cue.
Here's what the app had not accounted for, in rough order of how often I saw it matter:
The 8pm reminder assumed the phone was charged, nearby, and not on silent. For this demographic, at 8pm, the phone is often on a kitchen counter, plugged in, ringer off because they've just eaten dinner and are watching TV. The notification arrives and nobody sees it until 10:30pm when they go to charge it in the bedroom. By then the reminder is one of fourteen notifications stacked together, and it looks the same as the rest.
The pill itself lived in a cabinet in a different room from where the phone usually ended up. The physical distance between "I got the reminder" and "the pill is in my hand" was twenty feet, a staircase, and often a decision to stand up.
Many of the users lived with a spouse. In roughly half of those households, the spouse was also on medication, sometimes the same one. The app had no model of this. Two people would get independent reminders at the same time, from separate phones, and whoever noticed first would sometimes say "I got it" out loud, and then neither of them would actually take their pill, because the social signal was confused.
None of those are Capability problems. No amount of clearer onboarding would have fixed any of them. They were environmental, and they were invisible to the team because the team had tested the app in an office, not a living room.
The Motivation audit
Motivation is reflective (do I believe this is worth doing?) and automatic (does it feel rewarding or aversive in the moment?). COM-B splits these because they fail in different ways, and a product can address one without touching the other.
On the reflective side, users over 55 on a daily blood-pressure medication had generally been told by their doctor that the medication was important. Most of them believed this. But belief in the abstract importance of a behavior does not reliably produce the behavior, because the behavior happens every night and the health consequence is delayed by years. Reflective motivation is a pretty weak force against "I'm tired and I don't want to walk to the bathroom."
On the automatic side, the app gave almost nothing back. The streak counter incremented silently. There was no warmth in the logging confirmation. The notification felt like a bill. One user I interviewed said "it's like a nag from a machine that doesn't know me." She said it with real affection for the app, which I thought was telling. She liked the idea of the app. The app itself made her feel slightly worse every time she interacted with it.
More interesting: several users told me they felt guilty when they missed a dose, and the app amplified that guilt by greying out the day and resetting the streak. This is a common pattern in habit apps and it is a motivation killer. Guilt is a short-term behavior prod and a long-term avoidance trigger. By day 12, enough guilt had accumulated that opening the app felt worse than just deleting it.
What a COM-B-informed version would look like
If the same team had run the canvas before building, I think the feature set would have looked different. Off the top of my head:
The reminder system would ask the user where the phone usually lives at 8pm, and suggest a smart speaker integration, a wall-clock chime, or a partner's phone as a backup channel. That addresses the Opportunity gap around notification reach.
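If I were sketching that backup-channel logic, it wouldn't need to be more than this. Everything below is hypothetical, channel names included; it's just the shape of the decision, not anyone's shipped code:

```python
from dataclasses import dataclass

# Hypothetical channel identifiers; real integrations would vary.
PHONE = "phone_push"
SPEAKER = "smart_speaker"
PARTNER = "partner_phone"

@dataclass
class ReminderSetup:
    """Answers collected during onboarding, not inferred."""
    phone_nearby_at_dose_time: bool
    has_smart_speaker: bool
    partner_opted_in: bool

def reminder_channels(setup: ReminderSetup) -> list[str]:
    """Return channels in the order they should fire: primary first, then backups."""
    channels = []
    if setup.phone_nearby_at_dose_time:
        channels.append(PHONE)
    if setup.has_smart_speaker:
        channels.append(SPEAKER)
    if setup.partner_opted_in:
        channels.append(PARTNER)
    # Always keep the phone as a last resort, even if it's usually
    # out of reach at 8pm: a late push beats no push.
    if PHONE not in channels:
        channels.append(PHONE)
    return channels
```

The point of asking during setup is that the Opportunity data lives in the user's head, not in any analytics dashboard. You have to ask.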
There would be a question during setup about whether the user lives with somebody also on medication, and if so, a paired-reminder mode that disambiguates "my pill" from "your pill" clearly. That addresses the social cue confusion.
The dose-logging confirmation would be warmer and specific. Not a badge. A sentence that acknowledges what the user actually did, in language that sounds like a person. The streak reset would be gentler. Missing a day would not erase the history of the previous eleven. That's not about making the UI nicer. It's about reducing the shame penalty that kills motivation.
And there'd be a once-a-week reflective prompt tied to something the user cares about. "Your blood pressure has been in range for the last two weeks. That's because of this." Reflective motivation needs fuel, and health-tech almost always starves it.
None of that requires a redesign. It requires asking about the other two columns before building.
Closer
Every product team that says "we do user research" would do better if they used a framework that forced them to ask about all three levers, not just the one they know how to fix. Capability is the lever product people are trained to see. It's what usability testing measures. It's what design critique notices. The other two live in the user's environment and in their head, and you only find them by asking, deliberately, what's happening there. COM-B is the simplest tool I know for forcing that question. Twenty minutes with the canvas before a sprint beats twenty usability tests after launch. The team I was reviewing eventually rebuilt a version of the app using this kind of thinking, and the day-30 number roughly tripled. Not because the UX got better. Because it started acknowledging that the user had a life outside the app.