Can I call this an ontology?

Checking the label against the requirements, honestly.

BUILDING BEHAVIOURKIT

Lauren Kelly

2/20/2026

I've been using the word "ontology" to describe what BehaviourKit is becoming. The typed constructs. The conditional routes. The inference rules. The protective controls. The confidence levels. It feels like the right word. But I want to interrogate that feeling, because using a term from knowledge engineering and philosophy carries a certain weight, and I should know whether I'm earning it or borrowing it.

In the formal sense, an ontology is a structured representation of knowledge within a domain. It defines what types of things exist, what properties those things have, what relationships hold between them, and what rules constrain those relationships. If a taxonomy is a filing system (everything in its place, clearly labelled), an ontology is a filing system that also knows what belongs next to what, what can't go together, and what follows from what.

Let me check BehaviourKit against those requirements.

Types of things that exist: the system now has four construct types (behavioural drivers, structural constraints, social signals, contextual conditions), plus levers, tactics, plays, protective controls, and evidence items. Each type has defined properties and behaves differently in the routing logic. That's a typed entity system. Tick.

Properties of those things: each construct has a definition, a state model rule (symmetric, low-only, backfire, or none), a confidence basis, a stability level, and boundary notes. Those are formally defined attributes, not just free-text descriptions. Another tick.

Relationships between things: constructs connect to levers through route conditions. Levers connect to tactics. Tactics connect to plays. Each connection carries metadata: fit quality, confidence basis, mechanism logic. The relationships are explicit, typed, and conditional. Tick.

Rules constraining those relationships: protective controls block incompatible routes. State model rules determine which intervention logic applies. Construct type determines which lever families are available. Confidence basis affects how recommendations are ranked. These are inference rules: if X is this type, in this state, then Y is valid and Z is not. Tick, with a caveat.
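A minimal sketch of what that inference pattern might look like in code. Every name here (the construct types, state rules, lever names, and the blocked-pair table) is illustrative, not BehaviourKit's actual schema; the point is only the shape of the rule: if X is this type, in this state, then Y is valid and Z is not.

```python
from dataclasses import dataclass
from enum import Enum

class ConstructType(Enum):
    BEHAVIOURAL_DRIVER = "behavioural_driver"
    STRUCTURAL_CONSTRAINT = "structural_constraint"
    SOCIAL_SIGNAL = "social_signal"
    CONTEXTUAL_CONDITION = "contextual_condition"

class StateRule(Enum):
    SYMMETRIC = "symmetric"
    LOW_ONLY = "low_only"
    BACKFIRE = "backfire"
    NONE = "none"

@dataclass(frozen=True)
class Construct:
    name: str
    ctype: ConstructType
    state_rule: StateRule
    confidence: str  # confidence basis, e.g. "field evidence"

@dataclass(frozen=True)
class Route:
    construct_type: ConstructType
    lever: str
    applies_to: frozenset  # which state model rules this route is valid for

# Hypothetical protective control: person-level levers are blocked
# for structural constraints, regardless of state.
BLOCKED = {(ConstructType.STRUCTURAL_CONSTRAINT, "motivational_messaging")}

def valid_levers(construct: Construct, routes: list[Route]) -> list[str]:
    """If X is this type, in this state, then lever Y is valid and Z is not."""
    return [
        r.lever
        for r in routes
        if r.construct_type == construct.ctype
        and construct.state_rule in r.applies_to
        and (construct.ctype, r.lever) not in BLOCKED
    ]

routes = [
    Route(ConstructType.STRUCTURAL_CONSTRAINT, "process_redesign",
          frozenset({StateRule.LOW_ONLY, StateRule.SYMMETRIC})),
    Route(ConstructType.STRUCTURAL_CONSTRAINT, "motivational_messaging",
          frozenset({StateRule.SYMMETRIC})),
]
bottleneck = Construct("approval bottleneck", ConstructType.STRUCTURAL_CONSTRAINT,
                       StateRule.LOW_ONLY, "field evidence")
valid_levers(bottleneck, routes)  # -> ["process_redesign"]
```

The second route is excluded twice over: its state condition doesn't match, and the protective control blocks it anyway. That redundancy is the constraint logic doing its job.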

The caveat matters. A formal ontology in the knowledge engineering sense (the kind you'd build in OWL or RDF, the kind the Semantic Web was supposed to run on) has machine-readable inference rules that a reasoner can process automatically. BehaviourKit's rules are encoded in database logic and routing conditions, not in a formal ontology language. A knowledge engineer would call this a "lightweight ontology" or an "application ontology" rather than a formal one.

Would a behavioural scientist accept it as an ontology? That depends on which behavioural scientist you ask, and what they mean by the term.

A behavioural scientist working in taxonomy development (and there are several excellent ones, particularly in the behaviour change intervention space) would look at the typed constructs, the mechanism logic, the evidence traceability, and the conditional routing and recognise it as ontological work. The Behaviour Change Technique Taxonomy, for example, is a classification system that's moving in a similar direction: from flat lists of techniques toward structured relationships between techniques, mechanisms, and contexts.

A behavioural scientist working in computational modelling might want more formal specification. Defined axioms. Logical consistency checking. Automated inference. BehaviourKit doesn't have those yet. The rules are explicit but they're encoded in application logic rather than in a formal knowledge representation language.

A behavioural scientist working in applied practice might not care about the formal label at all. They'd want to know: does this system help me make better decisions about behaviour change? Does it route me to interventions that are mechanistically appropriate? Does it warn me when the evidence is thin? Does it handle the difference between a person-level problem and a structural constraint? If yes, they'd use it regardless of whether it qualifies as a "proper" ontology.

Here's my honest assessment. BehaviourKit is further along the taxonomy-to-ontology spectrum than most applied tools in the behavioural science space. It has typed entities, conditional relationships, inference rules, and constraint logic. It distinguishes between types of things in ways that affect how the system reasons about them.

It is not a formal ontology in the computational sense. The rules aren't expressed in a formal language. There's no automated reasoner checking logical consistency. The system doesn't generate new inferences from its rules without human involvement.

What it is, I think, is an applied ontology. A structured knowledge system that encodes domain expertise about behaviour change into typed constructs, conditional routes, and explicit rules, in a way that supports automated reasoning at a practical level. The reasoning is constrained and specific (given this input, recommend this output, with this confidence, under these conditions) rather than open-ended and generative.
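That constrained-and-specific shape can itself be sketched in a few lines. Again, the names are hypothetical stand-ins, not BehaviourKit's real schema; what matters is that the function only orders and filters candidates the routing rules already produced, attaching confidence and conditions, and never invents a new tactic.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Recommendation:
    tactic: str
    confidence: str              # confidence basis, e.g. "high", "moderate", "thin"
    conditions: tuple[str, ...]  # conditions under which the recommendation holds

# Illustrative ranking of confidence bases; not a real BehaviourKit scale.
CONF_RANK = {"high": 0, "moderate": 1, "thin": 2}

def recommend(candidates: list[Recommendation]) -> list[Recommendation]:
    """Closed-world reasoning: rank already-routed candidates by
    confidence basis rather than generating new inferences."""
    return sorted(candidates, key=lambda r: CONF_RANK[r.confidence])

ranked = recommend([
    Recommendation("public commitment", "thin", ("team setting",)),
    Recommendation("default change", "high", ("opt-out preserved",)),
    Recommendation("timely prompt", "moderate", ("mobile channel",)),
])
[r.tactic for r in ranked]  # -> ["default change", "timely prompt", "public commitment"]
```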

Is that enough to use the word? I think so, with the qualification that I'm describing an applied ontology for a practical system, not a formal ontology for academic knowledge representation. The word describes what the system does more accurately than "taxonomy" or "database" or "framework." It captures the conditional, rule-based, type-aware reasoning that makes BehaviourKit different from a flat classification system.

But I hold the term lightly. If a better word comes along, or if the formal ontology community tells me I'm stretching the definition past its useful limits, I'll adjust. The system is what it is regardless of what I call it. The label should serve the understanding, not the other way around.

Go deeper into the Building BehaviourKit series: