There's a particular failure mode in healthcare and fitness software that we've named internally: the taxonomy problem. It's what happens when engineers build software for practitioners.
Engineers love taxonomies. They want a hierarchy: category → subcategory → item. They want everything labelled, classified, and uniquely identified before it goes anywhere near a database. This makes sense — a clean schema is a joy to work with. The problem is that practitioners don't think in taxonomies.
A physio knows that "hip flexor tightness" isn't just a movement quality finding. It's a pattern they've seen in the last six desk workers they've assessed, and it goes with a specific set of programming considerations, and this particular client also has a history of lower back complaints that makes it more important than it would be otherwise. That's not a node in a tree. That's context — living, relational, accumulated.
When you force practitioners to interact with a taxonomy interface — "select movement quality category → select finding → severity 1–5 → save" — you're not capturing their knowledge. You're flattening it.
The pattern-matching model
Good practitioners pattern-match. They see a client move, and simultaneously they're pulling up a library of similar clients, similar presentations, similar outcomes. The insight isn't one data point — it's the gestalt of many.
Software that serves practitioners needs to support this mode of thinking. That means:
Flexible capture over forced structure. Notes should be fast, free-form, and searchable — with structure applied later, when it helps. The practitioner shouldn't have to stop the conversation to navigate a dropdown hierarchy. They should be able to write "hip flexors still tight, low back improved significantly since load reduction last week" and have that be useful.
Surface patterns, don't just store data. The software becomes valuable when it shows the practitioner something they couldn't see otherwise — "the last 12 clients with this presentation averaged 6 weeks to symptom resolution with X approach; this client is at week 4" — not when it's a compliant record-keeping system.
Decisions, not dashboards. Dashboards are the engineer's solution to "what should we show?" The practitioner's actual problem is: "what should I know right now, walking into this room?" Those are different questions. The answer to the second one is contextual, curated, and timely. Not a grid of charts.
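The first two principles above can be sketched in a few lines: capture stays free-form, and structure and insight arrive later. This is a minimal illustration, not AiStrength's or AIMS's actual data model; every name here (`SessionNote`, `cohort_summary`, the field names) is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from statistics import mean

@dataclass
class SessionNote:
    """Free-form capture: only the text is required at the moment of
    observation. Tags are attached later, when structuring helps."""
    client_id: str
    text: str
    captured_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    tags: set = field(default_factory=set)  # never required at capture time

def search_notes(notes, query):
    """Naive substring search so untagged notes stay findable."""
    q = query.lower()
    return [n for n in notes
            if q in n.text.lower() or any(q in t.lower() for t in n.tags)]

def cohort_summary(outcomes, presentation, current_week):
    """Surface a pattern, not a chart: compare this client against past
    clients with the same presentation. `outcomes` is a list of dicts
    like {"presentation": ..., "weeks_to_resolution": ...}."""
    weeks = [o["weeks_to_resolution"] for o in outcomes
             if o["presentation"] == presentation]
    if not weeks:
        return None
    return (f"{len(weeks)} past clients with this presentation averaged "
            f"{mean(weeks):.0f} weeks to resolution; this client is at "
            f"week {current_week}.")
```

The point of the sketch is the order of operations: the practitioner writes a sentence, the system makes it searchable immediately, and classification and cohort comparison happen on the system's time, not mid-conversation.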
Mobile-first isn't a checkbox
"Mobile-first" has become a checkbox item in software specs — something you put in the brief to sound credible. For most clinic software, it means "we made a mobile app, and it technically works, but everyone prefers the desktop."
Actual mobile-first design, built around how practitioners actually use their phones in a clinical or gym setting, is fundamentally different.
When a coach is at the rack, they're standing, one hand sometimes occupied, glancing at a screen between sets. The interface has to be immediately legible. The most common action has to be reachable without navigating. Logging a set can't require three taps and a confirmation dialog.
When a physio is in a session with a client, they might pull out their phone to log a finding or check the last session's notes. They need the relevant information immediately, without navigating — and they need to capture their observation in under five seconds.
Most clinic software fails both of these. The mobile interface is a stripped-down version of the desktop, with the same mental model, just on a smaller screen. That's not mobile-first. That's mobile-parity.
We've spent significant time thinking about session logging for AiStrength specifically, because the use case is so demanding — a person in the middle of training, between sets, with limited cognitive bandwidth available. Every element has to justify its presence. Rest timer, set logging, one-tap navigation. Nothing else.
Why most clinic software is actively hostile
Beyond the taxonomy problem and the mobile problem, there's a more uncomfortable truth: a lot of clinic software is implicitly hostile to practitioners.
It's hostile in the sense that it treats compliance as the primary output. The software is designed to produce an audit-ready record, and the practitioner's job is to fill in the fields. This is backwards. The practitioner's job is clinical — the software's job is to support it.
The record should be a byproduct of a well-supported clinical interaction, not the primary product that the practitioner must serve.
This leads to perverse outcomes. Practitioners spend clinical time filling in documentation. Session notes get written after the client leaves, from memory. Outcome measures get administered when the software prompts them, not when they'd be clinically useful.
When we designed the assessment flow for AIMS, we started from a different premise: what would a practitioner naturally do in this situation, and how do we make the software invisible to that process? Not "what fields do we need to populate?"
The difference shows up in a hundred small choices. Does the software interrupt the practitioner to force a required field, or does it let them continue and flag the gap later? Does it ask the practitioner to classify a finding before they're sure what it is, or does it let them note an observation and revisit the classification? Does the mobile interface match the cognitive load of the context, or does it demand more of a person who's already managing a client interaction?
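One of those choices, flagging the gap later instead of interrupting, is simple enough to express directly. A minimal sketch, with hypothetical field names, assuming the record is a plain dict:

```python
# Hypothetical required fields for an audit-ready record.
REQUIRED_FIELDS = ("subjective", "objective", "plan")

def save_session(record, store):
    """Accept the capture unconditionally, then return any missing
    fields so they can surface later (e.g. in an end-of-day review
    queue) rather than as a blocking modal mid-session."""
    store.append(dict(record))
    return [f for f in REQUIRED_FIELDS if not record.get(f)]
```

The save always succeeds; compliance becomes a to-do list the practitioner clears on their own time, which is the whole argument of this section in one function.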
Opinionated defaults over infinite flexibility
One more principle worth naming: good practitioner tools are opinionated.
There's a temptation in software — especially B2B software where you're pitching to people who've been burned by systems that didn't fit their workflow — to make everything configurable. Custom fields everywhere. Flexible workflows. "It does whatever you need."
In practice, infinite configurability produces decision fatigue, inconsistent usage, and software that no one on the team knows how to use the same way.
We'd rather ship a considered default that's right for 80% of cases, document it clearly, and make it changeable when the team genuinely needs to change it. Periodisation templates, outcome measure sets, progression models: these should come pre-configured with sensible defaults based on the evidence. The practitioner shouldn't have to build their own periodisation model from scratch. That's not flexibility, that's friction.
When the default is wrong, it should be changeable. But the default should exist, it should be justified, and it should save most practitioners from ever having to think about it.
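In code, this principle is nothing more than a justified default merged with explicit overrides. The values and field names below are illustrative only, not AIMS's actual configuration:

```python
# Illustrative defaults; real values would come from the evidence base.
DEFAULT_BLOCK = {
    "block_length_weeks": 4,
    "sessions_per_week": 3,
    "deload_week": True,
}

def build_block(overrides=None):
    """Start from a considered default and merge only what the team
    explicitly changes, so most users never configure anything."""
    return {**DEFAULT_BLOCK, **(overrides or {})}
```

A team that needs four sessions a week changes one key; everyone else never sees a configuration screen at all.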
This is ultimately the premise behind AIMS: that the framework for assessment and programming has already been worked out by decades of evidence and coaching practice, and the software's job is to make that framework accessible — not to make practitioners build it from scratch every time.
If that sounds opinionated, good. Software that tries to be neutral is usually just software that avoids making decisions. Practitioners already have enough decisions to make.