Most of the health products I've worked on have a cliff. Day 10, give or take. You can see it in the cohort charts if you squint. Signups look healthy, the first-session data is great, and then a chunk of people quietly disappear and don't come back.
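You don't actually have to squint. The cliff falls straight out of a plain day-N retention computation. Here's a minimal sketch in Python; the file and column names are made up for illustration, not from any particular analytics stack:

```python
import pandas as pd

# Hypothetical inputs: one row per session event, one row per signup.
# File and column names here are invented for the example.
events = pd.read_csv("events.csv", parse_dates=["event_time"])     # user_id, event_time
signups = pd.read_csv("signups.csv", parse_dates=["signup_time"])  # user_id, signup_time

df = events.merge(signups, on="user_id")

# Whole days between signup and each event, ignoring time of day.
df["day"] = (df["event_time"].dt.normalize()
             - df["signup_time"].dt.normalize()).dt.days

# Day-N retention: share of all signups with at least one event on day N.
n_users = signups["user_id"].nunique()
retention = (df[df["day"].between(0, 30)]
             .groupby("day")["user_id"]
             .nunique() / n_users)

print(retention.round(3))
```

Plot that series per signup cohort and the cliff is the sharp drop right where the curve should be starting to flatten.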
The instinct is to blame onboarding. I've done it myself. You add another tooltip, rewrite the welcome screen, run another round of usability tests, and the cliff doesn't really move.
What I think is actually going on: the first week is novelty. The product is new, the behavior you're asking for is vaguely interesting, and people are running on curiosity. Around day 8 or 10 the novelty budget runs out, and whatever is left has to be the actual reason the person keeps using it. If you haven't built a real reason in by then, that's when they leave.
This is where behavioral science is useful in a way usability testing is not. A usability test tells you whether someone can use your product. It doesn't tell you whether they'll want to when the novelty dies. Those are two different questions, and they don't share answers.
The COM-B model (Capability, Opportunity, Motivation, leading to Behaviour) is a useful lens for the second question. For each behavior your product depends on, you ask: does the person still have the capability, the opportunity, and the motivation to do it after day 10? Usually one of the three has quietly eroded, and that's your cliff.
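To make the audit concrete, here's one way to structure it. COM-B itself is a real framework; the `BehaviorAudit` structure and the example behaviors below are hypothetical, just a way to force a yes/no answer on all three factors for every behavior:

```python
from dataclasses import dataclass

# One COM-B audit row per behavior the product depends on.
# Behaviors and answers below are invented examples, not a prescribed checklist.
@dataclass
class BehaviorAudit:
    behavior: str
    capability: bool   # can they still do it? (skill, knowledge, energy)
    opportunity: bool  # does their environment still allow and prompt it?
    motivation: bool   # is there a reason beyond novelty to keep doing it?

    def eroded(self) -> list[str]:
        # Return the factors that have quietly gone missing.
        return [name for name, ok in [
            ("capability", self.capability),
            ("opportunity", self.opportunity),
            ("motivation", self.motivation),
        ] if not ok]

audits = [
    BehaviorAudit("log a meal", capability=True, opportunity=True, motivation=False),
    BehaviorAudit("wear the sensor overnight", capability=True, opportunity=False, motivation=True),
]

for a in audits:
    if gaps := a.eroded():
        print(f"{a.behavior!r}: likely cliff driver(s): {', '.join(gaps)}")
```

The useful output isn't the script, it's the forced answer per factor: whichever one flips to no around day 10 is usually the one worth fixing.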
I'll write up a few of the specific patterns I keep seeing in a follow-up. For now the short version is: if your adoption curve has a cliff, don't reach for more usability testing. Reach for a behavior model.