Action planning is everywhere in digital health, and most of the time it's wrong.
BCT 1.4 in Michie's taxonomy is defined roughly as prompting detailed planning of the performance of a behavior, including at minimum context (when, where) and ideally the frequency, duration, and intensity. In plainer English: you help the user specify when and where they'll do the thing. It's the behavioral technique most product teams reach for when they build a habit feature. "Tell us when you want to do this. We'll remind you." Done.
The problem is that what ships usually isn't BCT 1.4. It's a reminder scheduler with a sticker of action planning on it. And the evidence for BCT 1.4 is specifically about the planning, not about the reminder. If your feature is basically "let the user pick a time," you're not delivering the active ingredient. You're shipping a shell.
I want to walk through what the research actually says, what a faithful implementation looks like, and where most digital versions go sideways.
What the evidence says about the active ingredient
The academic ancestor of BCT 1.4 is Gollwitzer's work on implementation intentions. If-then plans. "If situation X arises, I will perform behavior Y." The 2006 Gollwitzer and Sheeran meta-analysis covered 94 independent tests and found a medium-to-large effect (d = 0.65) on goal attainment. A much more comprehensive 2024 meta-analysis across 642 tests found effects in the d = 0.27 to 0.66 range across cognitive, affective, and behavioral outcomes. Larger effects when plans had contingent if-then format, when the person was motivated, and when plans were rehearsed.
Three things matter here, and all three get lost in most product implementations.
One, the plan is in an if-then form. The "if" is a specific situational cue. The "then" is a specific action. "If I finish dinner, then I will take my pill." Not "I will take my pill at 8pm."
Two, the plan is constructed by the user. Not picked from a dropdown. Generated as a specific sentence that ties a real situation in the user's life to a real action.
Three, the plan gets rehearsed. Mental simulation strengthens the if-then link. One read of the plan and a save button is not rehearsal.
If you pattern match this against what health apps typically ship under the name "action plan" or "reminder setup," you'll notice that almost none of them do any of the three.
Where digital implementations go sideways
The canonical bad version, which I've seen in at least six different products, looks like this. Onboarding asks "When do you want to be reminded?" The user picks a time. The app stores that time and fires a notification. Done.
That's BCT 7.1 (prompts and cues) dressed up as BCT 1.4. It's a fine BCT in its own right, but it's doing different work and it has different evidence behind it. You're not helping the user form an if-then plan; you're helping them set an alarm. The if-then link they form in their head, if they form one at all, is "if my phone buzzes, then I might do the thing." That's fragile. It breaks the first time the phone is on silent, which for the demographics most health apps target is most evenings.
A slightly better version asks for a time and a location. "When are you going to do this, and where?" That's closer. But it still skips the critical piece, which is that the user constructs the plan sentence themselves, and the sentence anchors the behavior to an existing situational cue they'll actually encounter.
The common mistake I see in teams that know the theory is to assume if-then is just a syntactic rewrite you apply to the sentence. "I will take my pill at 8pm" becomes "If it is 8pm, then I will take my pill." That's still an alarm. The if needs to be a naturalistic cue in the user's environment, not a clock reading. "If I pour my morning coffee, then I will take my statin." That's the form of the plan that actually works.
What a working BCT 1.4 implementation looks like
I'm going to sketch what I'd actually build, which is a three-step interaction that takes maybe 90 seconds and looks nothing like a settings page.
Step one: anchor to an existing cue. Ask the user what they reliably do every day near the time the behavior should happen. Not what they want to do, what they already do. Pouring coffee. Sitting down for dinner. Closing the laptop. Picking up the kids. The product's job here is to get the user to generate a specific situation they encounter without needing any reminder, because that situation already exists in their daily loop.
Step two: construct the if-then sentence. Show them the sentence in if-then form with their cue filled in: "If I finish breakfast, then I will take my pill." Let them edit. If they want to add time, location, or quantity, let them. The output is a plan they wrote in their own words, not a template they accepted.
Step three: rehearse. This is the part almost no product does, and it's the one with the most literature behind it. Have the user imagine the situation happening. Where are they? What does the moment feel like? Then have them mentally rehearse doing the behavior in that moment. Twenty seconds is enough. A tiny visualization prompt in the UI can run this. Then have them say the plan once out loud, or at minimum read it aloud on screen. Rehearsal is what binds the cue to the action in memory. It's the difference between a plan that gets recalled at the critical moment and a plan that gets recorded but forgotten.
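The three steps above can be sketched as a small data model. Everything here is hypothetical scaffolding, not code from any real product: the names (`ActionPlan`, `build_plan`, `rehearse`) and the exact wording of the rehearsal prompt are assumptions for illustration. The point is the shape: the plan is stored as a cue plus an action, rendered in if-then form, and rehearsal is a first-class step with its own state.

```python
from dataclasses import dataclass

@dataclass
class ActionPlan:
    cue: str         # naturalistic situational cue the user already encounters daily
    action: str      # the target behavior, in the user's own words
    rehearsed: bool = False

    def sentence(self) -> str:
        # Render the plan in Gollwitzer's if-then implementation-intention form.
        return f"If {self.cue}, then I will {self.action}."

def build_plan(cue: str, action: str) -> ActionPlan:
    # Steps one and two: anchor to an existing cue, then construct the
    # editable sentence shown back to the user.
    return ActionPlan(cue=cue, action=action)

def rehearse(plan: ActionPlan) -> str:
    # Step three: a short guided visualization the UI would display for
    # ~20 seconds, ending with the user reading the plan aloud.
    plan.rehearsed = True
    return (
        f"Picture the moment: {plan.cue}. Where are you? What does it feel like? "
        f'Now imagine yourself doing it. Say it once: "{plan.sentence()}"'
    )
```

A plan built as `build_plan("I pour my morning coffee", "take my statin")` renders as "If I pour my morning coffee, then I will take my statin." Note that the cue field holds a situation, never a clock time; that constraint is the whole difference between this and an alarm.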
If you've done those three steps, you've delivered BCT 1.4. If you then also fire a prompt at the scheduled time, great, you've added BCT 7.1 as insurance. But the plan is doing most of the work, not the prompt.
Nuances the literature makes clear
A few things I'd factor in based on what the meta-analytic evidence actually says.
First, motivation matters. Implementation intentions work best for people who already want to do the behavior and are getting stuck on follow-through. If your user doesn't want to do the thing, BCT 1.4 alone won't fix it; you need to combine it with motivation-focused components (BCTs in cluster 9, like pros and cons) or reflective work. This is consistent with the recent 2026 Coach factorial, where motivation enhancement was the lead ingredient for smoking cessation specifically, with planning as a secondary component.
Second, a 2025 study by Ahmadyar and colleagues on weight management strategies (N=200, DOI 10.2196/65260) found that implementation intentions beat plain tips only for people with poorer planning skills. For skilled planners, tips were slightly better. The implication for product is that BCT 1.4 should probably be adaptively offered, not shoved into every onboarding flow. Users who already plan well don't need your if-then scaffolding; users who don't, desperately do.
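Adaptive offering can be as simple as a routing gate at onboarding. This is a minimal sketch under loud assumptions: the function name, the 0-to-1 planning-skill score (e.g. from a short self-report scale), and the 0.5 threshold are all placeholders I've invented, not values from the Ahmadyar study, and the threshold would need calibration against your own data.

```python
def choose_planning_support(planning_skill_score: float, threshold: float = 0.5) -> str:
    """Route users to the planning variant the evidence suggests helps them most.

    planning_skill_score: 0.0 (poor planner) to 1.0 (skilled planner),
    e.g. from a brief self-report scale at onboarding. The threshold is
    a placeholder to be calibrated, not a figure from the literature.
    """
    if planning_skill_score < threshold:
        # Poorer planners: serve the full BCT 1.4 if-then scaffolding.
        return "if_then_scaffolding"
    # Skilled planners did as well or better with plain tips.
    return "plain_tips"
```

The design choice worth defending is that skilled planners get routed away from the scaffolding entirely, rather than getting a lighter version of it.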
Third, coping planning (BCT 1.2) is often more useful than action planning alone. Plan what you'll do if the cue fires but the behavior can't happen as intended. "If I finish breakfast but I'm running late, then I'll take my pill with me and have it in the car." The literature treats these as separate BCTs for a reason. Most product teams collapse them into a single planning flow and miss the resilience benefit.
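Keeping the two BCTs distinct is easier if the data model does too. A hypothetical sketch: the action plan keeps its single if-then, and coping plans (BCT 1.2) hang off it as separate barrier/fallback branches rather than being merged into one blob. Names and structure are my assumptions, not a published schema.

```python
from dataclasses import dataclass, field

@dataclass
class CopingBranch:
    barrier: str    # what can go wrong when the cue fires (BCT 1.2)
    fallback: str   # what the user will do instead

@dataclass
class PlanWithCoping:
    cue: str                                        # the BCT 1.4 situational cue
    action: str                                     # the intended behavior
    coping: list[CopingBranch] = field(default_factory=list)

    def sentences(self) -> list[str]:
        # The action plan and each coping plan stay separate sentences,
        # mirroring the taxonomy's separation of BCT 1.4 and BCT 1.2.
        out = [f"If {self.cue}, then I will {self.action}."]
        for c in self.coping:
            out.append(f"If {self.cue} but {c.barrier}, then I will {c.fallback}.")
        return out
```

The running-late example from above renders as two sentences, one per BCT, which also makes it possible to measure each technique's contribution independently.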
Fourth, the plan has a shelf life. Habits shift. The cue you picked in January might not exist in April. A working BCT 1.4 implementation revisits the plan periodically and asks if the cue is still reliable. Nobody ships this. It's a real gap.
A working test
If you want to check whether your app is actually doing BCT 1.4 or pretending, ask a user who's been using the product for three weeks to tell you, in their own words, what the plan is. They should be able to say something like "if I get home from work, then I do my breathing exercise." If they say "I get a reminder at 6pm," you've shipped a reminder. That's a fine thing to have, but it isn't the technique you thought you were applying, and you shouldn't expect the effect sizes the literature attributes to BCT 1.4.
Nothing about this is expensive. The three-step interaction is 90 seconds of UI. The value is that it turns a theoretical active ingredient into a real active ingredient, which is the whole point of using the taxonomy in the first place. Otherwise you're just naming your features after BCTs to sound evidence-based, which is more common in this space than anyone wants to admit.