There’s a widening gap in health innovation: between efficacy (does this new model work on paper?) and adoption (can a human team actually deliver it in a crisis?).
One 2025 UCLPartners report on barriers to innovation found that of 50 innovations assessed, just 28% were successfully procured and scaled. Most often, the challenges are in implementation, workforce and pathway integration rather than a lack of clinical evidence.
We logically assume that if an innovation saves money and improves outcomes, adoption is inevitable. But the history of the NHS is littered with “proven” pilots that failed to scale. Why? Because we measure the innovation itself, but not the environment.
From an innovation management perspective, friction is not just an operational issue. It’s the primary determinant of scalability, investability and system confidence. An innovation that performs clinically but increases cognitive load, erodes trust or undermines professional judgement will not scale, regardless of its evidence base.
It’s easy to fall into the trap of imagining frontline staff’s routines as blank slates, ready for a new process. In reality, they’re navigating a high-pressure, high-burnout environment where their cognitive bandwidth is a scarce resource. To scale innovation successfully, we need to stop auditing only the clinical outcomes, and start auditing the friction of the change itself.
The widely recognised behavioural science concept of loss aversion tells us that people fear losses (of time and effort) more than they value equivalent gains: dropping a twenty pound note on the way into work stings more than finding one dropped by someone else feels good.
In healthcare, losses manifest as resistance to any process that breaks “flow”. Context switching comes with a mental cost. Consider a GP referring a patient to a new Community Diagnostic Centre (CDC). The pathway is clinically superior. But if the referral requires the GP to leave their primary system, log into a separate portal, and manually re-enter patient data, the innovation has imposed a “Cognitive Switching Cost.”
Multiple NHS evaluations of electronic referral and advice and guidance systems have shown that uptake drops sharply when clinicians are required to leave their core system or duplicate data entry, even where the downstream service is demonstrably better.
Even if the new task takes only two minutes, the shift in focus depletes the clinician’s already limited energy for decision-making. As a result, they’re likely to make fewer referrals than they otherwise would. It’s a subconscious process. They aren’t resisting the care — they’re resisting the friction.
So how do we evaluate this kind of cognitive friction? Don’t just measure “Time to Refer.” Measure “Resumption Time.”
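As an illustration of the distinction, here is a minimal sketch of how “Resumption Time” might be computed from a timestamped workflow log. All event names, field layouts and data here are hypothetical assumptions for the sketch, not a description of any real NHS system:

```python
from datetime import datetime

# Hypothetical event log: (clinician_id, event, timestamp).
# "interrupt"     = clinician leaves the primary system to make the referral
# "referral_done" = referral submitted in the separate portal
# "resume"        = first meaningful action back in the primary system
events = [
    ("gp_01", "interrupt",     datetime(2025, 3, 4, 9, 0)),
    ("gp_01", "referral_done", datetime(2025, 3, 4, 9, 2)),
    ("gp_01", "resume",        datetime(2025, 3, 4, 9, 7)),
]

def resumption_time_minutes(log):
    """Minutes between completing the referral and resuming primary work.

    "Time to Refer" would stop the clock at referral_done; resumption
    time captures the additional, hidden cost of switching back."""
    done = next(t for _, e, t in log if e == "referral_done")
    resume = next(t for _, e, t in log if e == "resume")
    return (resume - done).total_seconds() / 60

print(resumption_time_minutes(events))  # 5.0 — a 2-minute task, 5 minutes of switching cost
```

Here “Time to Refer” would report two minutes, while the resumption measure surfaces the five further minutes lost before the clinician is back in flow.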
Behavioural research suggests we are far less forgiving of systems than of people. We excuse a colleague’s mistake, but we often abandon a new process after a single failure.
Imagine a new Rapid Response Community Team designed to keep frail patients out of hospital.
If a clinician refers a patient on Tuesday, and the team fails to arrive or lacks capacity, the clinician doesn’t just lose faith in that specific referral. They lose faith in the entire pathway.
When a human colleague fails, we rationalise it (“They were understaffed”). When a system fails, we view it as fundamentally broken. The clinician may immediately revert to their old, “safe” behaviour (calling an ambulance or sending to A&E), often for months.
Early virtual ward deployments demonstrated this clearly. Where services were consistently responsive, clinician confidence and referrals grew. Where capacity constraints led to delays or missed contacts, referral behaviour dropped sharply, even after services recovered.
So don’t measure “Average Success Rate.” Measure the “Post-Failure Reversion Rate.”
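To make the contrast concrete, here is a minimal sketch of computing a “Post-Failure Reversion Rate” from a per-clinician referral history. The log structure and data are illustrative assumptions; the point is that the metric asks what each clinician did *next* after their first failure, which an average success rate hides:

```python
# Hypothetical referral logs, one chronological sequence per clinician:
# "ok"       = referral to the new pathway succeeded
# "failed"   = the team failed to arrive or lacked capacity
# "reverted" = the clinician used the old route instead (e.g. A&E)
logs = {
    "gp_01": ["ok", "ok", "failed", "reverted", "reverted"],
    "gp_02": ["ok", "failed", "ok", "ok"],
    "gp_03": ["failed", "reverted", "reverted", "reverted"],
}

def post_failure_reversion_rate(logs):
    """Among clinicians who experienced at least one failure, the share
    whose next action after that first failure was reversion to the old
    route. An 'Average Success Rate' would miss this entirely."""
    exposed, reverted = 0, 0
    for seq in logs.values():
        if "failed" in seq:
            exposed += 1
            i = seq.index("failed")
            if i + 1 < len(seq) and seq[i + 1] == "reverted":
                reverted += 1
    return reverted / exposed if exposed else 0.0

print(post_failure_reversion_rate(logs))  # 2 of the 3 exposed clinicians reverted
```

In this toy data the pathway succeeds in most individual referrals, yet two of the three clinicians who ever saw a failure immediately reverted — exactly the trust collapse the average conceals.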
Staff often reject new protocols — like a standardised wound care pathway — even when the data proves it is better. This is often blamed on “culture,” but it is actually about Explanatory Depth: we value “Justifiability” over “Efficiency”.
Clinicians — like all humans — think in narratives. They have a mental model of why they treat a patient a certain way.
Friction arises when an innovation dictates what to do (e.g., “Use this dressing,” “Discharge to this team”) without providing the why that aligns with their professional intuition.
If a new AI triage tool or care pathway feels like a black box — instructions without a clinical rationale — staff will override it to regain a sense of professional agency.
Evaluations of AI-supported decision tools show that clinicians are significantly more likely to follow recommendations when the system explains its reasoning in clinically meaningful terms. Where outputs are opaque, override rates increase, even when accuracy is high.
To prevent this, don’t just test the outcome; test the “Narrative Fit.”
To move from “Pilot Success” to “System Success,” evaluators should score innovations against all three barriers: the Cognitive Switching Cost (measured by Resumption Time), trust fragility after failure (measured by the Post-Failure Reversion Rate), and Narrative Fit (whether staff can articulate the clinical rationale, not merely whether they comply).
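One way such a friction scorecard could be structured is sketched below. The thresholds and the 0–2 scoring bands are illustrative assumptions chosen for the example, not validated values:

```python
# Illustrative friction scorecard: each barrier scored 0 (low friction)
# to 2 (high friction). All thresholds are assumptions for this sketch.
def friction_score(resumption_minutes, reversion_rate, narrative_fit_pct):
    scores = {}
    # Barrier 1: cognitive switching cost (Resumption Time, minutes)
    scores["switching"] = 0 if resumption_minutes < 1 else (1 if resumption_minutes < 5 else 2)
    # Barrier 2: trust fragility (Post-Failure Reversion Rate, 0–1)
    scores["reversion"] = 0 if reversion_rate < 0.1 else (1 if reversion_rate < 0.3 else 2)
    # Barrier 3: narrative fit (% of staff who can state why the pathway works)
    scores["narrative"] = 0 if narrative_fit_pct > 80 else (1 if narrative_fit_pct > 50 else 2)
    scores["total"] = scores["switching"] + scores["reversion"] + scores["narrative"]
    return scores

print(friction_score(resumption_minutes=5.0, reversion_rate=0.67, narrative_fit_pct=40))
# {'switching': 2, 'reversion': 2, 'narrative': 2, 'total': 6} — high friction on every barrier
```

An innovation scoring high here may still have an excellent evidence base; the scorecard simply makes visible the adoption risk that clinical outcome measures alone would miss.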
You can’t “nudge” a healthcare workforce into using a system that fights their workflow. You must design the system to fit the flow. Adoption isn’t a training challenge; it is a design challenge. Future Care Capital (FCC) can help. If you’re looking to scale innovation sustainably but aren’t sure where to start, contact Dr Lauren Evans at lauren@futurecarecapital.org.uk.