The question underneath the safety layer

What a behaviour change tool owes the people it affects.

BUILDING BEHAVIOURKIT

Lauren Kelly

3/13/2026

Last month I wrote about protective controls: the product feature that runs safety alongside recommendations rather than bolting warnings on afterwards. Three types. Companions, not penalties. I stand by the design.

But I want to go deeper into the question underneath it, because the feature exists for a reason that's bigger than product design.

BehaviourKit recommends behaviour change interventions. That sentence deserves to sit for a moment.

Behaviour change interventions alter what people do. When they work, someone starts doing something they weren't doing, stops doing something they were doing, or changes how they do something. The intervention acts on a person. The person may or may not have consented to being acted upon. They may or may not understand the mechanism being used on them. They may or may not benefit from the change.

That's an ethical situation. And any tool that makes it easier to deploy behaviour change interventions has a responsibility to engage with the ethics, not just the mechanics.

The behavioural science field has been wrestling with this for years. The "nudge" framework from Thaler and Sunstein drew a line between "libertarian paternalism" (shaping choices while preserving freedom) and coercion (removing choices entirely). That line has been debated, contested, and redrawn many times since. The dark patterns conversation in UX design pushed the question further: at what point does a well-designed choice architecture become manipulation?

I don't think there are clean answers. But I think a responsible tool needs to engage with the questions rather than pretending they don't exist. Here's how I think about it for BehaviourKit.

The system should never make it easier to harm people than to help them. This is the foundational principle. Protective controls exist to catch recommendations that could cause harm even when the user's intentions are good. Blocking public-visibility plays when the constraint is about exposure and judgement. Requiring confirmation before deploying interventions in psychologically sensitive contexts. Monitoring recommendations for unintended consequences.

The protect layer isn't optional. It runs automatically. You can't turn it off. The system has a duty of care that overrides user convenience.
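To make that concrete, here's a minimal sketch of how a non-optional protect layer can sit in the recommendation path. Every name here (ProtectAction, Recommendation, the context tags) is illustrative, not BehaviourKit's actual API; the point is the shape, with the check running on every recommendation and exposing no off switch.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ProtectAction(Enum):
    """The outcomes the protect layer can attach to a recommendation."""
    BLOCK = auto()            # the route is closed entirely
    REQUIRE_CONFIRM = auto()  # the user must explicitly confirm before deploying
    MONITOR = auto()          # deploy, but watch for unintended consequences


@dataclass
class Recommendation:
    intervention: str
    context_tags: set = field(default_factory=set)  # illustrative context descriptors


def protect(rec: Recommendation) -> ProtectAction:
    """Runs on every recommendation. Deliberately takes no enable/disable flag."""
    # Block public-visibility plays when the constraint is exposure and judgement.
    if {"public_visibility", "exposure_risk"} <= rec.context_tags:
        return ProtectAction.BLOCK
    # Require confirmation in psychologically sensitive contexts.
    if "psychologically_sensitive" in rec.context_tags:
        return ProtectAction.REQUIRE_CONFIRM
    # Everything else still gets monitored for unintended consequences.
    return ProtectAction.MONITOR
```

The design choice worth noticing is the signature: protect takes no configuration parameter, so "turn off the safety layer" isn't even an expressible call.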

The system should make its reasoning transparent. When BehaviourKit recommends an intervention, the user should be able to see exactly why. What driver was identified. What mechanism connects the driver to the recommendation. What evidence supports the connection. What risks the system has flagged. This transparency serves two purposes: it helps the user make an informed choice about whether to proceed, and it creates accountability. A recommendation with visible reasoning can be questioned, challenged, and improved. An opaque recommendation can't.
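As a sketch of what "visible reasoning" could mean structurally (hypothetical field names, not the product's real schema), each recommendation could be required to carry its reasoning as data, so an explanation can't be omitted:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Reasoning:
    """Everything a user needs to question, challenge, or improve a recommendation."""
    driver: str                # what driver was identified
    mechanism: str             # what connects the driver to the recommendation
    evidence: list[str]        # what supports the connection
    flagged_risks: list[str]   # what risks the system has flagged

    def explain(self) -> str:
        """Render the reasoning in the order a user would interrogate it."""
        return "\n".join([
            f"Driver: {self.driver}",
            f"Mechanism: {self.mechanism}",
            "Evidence: " + "; ".join(self.evidence),
            "Flagged risks: " + ("; ".join(self.flagged_risks) or "none"),
        ])
```

Because none of the fields has a default, an opaque recommendation becomes unrepresentable: you can't construct one without stating the driver, the mechanism, the evidence, and the risks.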

The system should respect the people being changed, not just the people doing the changing. BehaviourKit's user is typically someone designing or managing a change programme. They're the customer. But the people affected by the intervention are a different group. The employees being asked to adopt a new process. The residents being encouraged to change their energy use. The patients being nudged toward medication adherence. Those people are not in the room. They don't have a seat at the table. The protect layer speaks for them, in a limited way, by flagging when an intervention risks reducing their autonomy, exposing them to judgement, or creating pressure they haven't consented to.
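One limited way to give those absent people a voice in the code (again, illustrative names and tags, not the product's real vocabulary) is to make the affected-party concerns an explicit, enumerable set the protect layer checks, rather than ad hoc conditions scattered through the rules:

```python
# Concerns checked on behalf of the people affected by an intervention,
# not the user deploying it. Names and tags are illustrative.
AFFECTED_PARTY_CONCERNS = {
    "reduces_autonomy": "Narrows or removes choices for people who didn't opt in.",
    "exposes_to_judgement": "Makes individual behaviour visible to peers or managers.",
    "unconsented_pressure": "Applies social or institutional pressure without consent.",
}


def affected_party_flags(context_tags: set) -> list[str]:
    """Return the concerns a recommendation triggers, for display alongside it."""
    return [
        description
        for tag, description in AFFECTED_PARTY_CONCERNS.items()
        if tag in context_tags
    ]
```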

"Effective" and "ethical" are not the same thing. An intervention can work brilliantly and still be wrong. Social pressure can drive behaviour change very effectively and also make people miserable. Shame-based messaging can reduce a target behaviour and also damage mental health. The system needs to distinguish between interventions that are effective and interventions that are effective and appropriate. The protective controls layer is where that distinction lives in the product.

I don't think BehaviourKit can solve the ethics of behaviour change. Philosophers and policy-makers have been working on that for decades and the conversation is far from settled. What the product can do is take a position: the system has a responsibility to prevent foreseeable harm, to be transparent about its reasoning, and to consider the experience of the people being changed, not just the goals of the people doing the changing.

That position is embedded in the architecture, not appended as a disclaimer. Blocked routes. Mandatory safeguards. Visible reasoning. Confidence honesty. These aren't features. They're values, expressed as product decisions.

Whether I've got the balance right is something I'll keep questioning. The ethical landscape of behaviour change isn't static, and the system's response to it shouldn't be either. But I'd rather build a tool that engages with these questions imperfectly than one that ignores them elegantly.
