The UK isn’t short of ideas in health innovation — but many don’t reach their full potential. According to one 2025 UCLPartners report, just 28% of digital health innovations assessed in the NHS have been successfully procured and scaled.
Five common pitfalls prevent promising innovations from progressing beyond the pilot stage:
- Structural headwinds. Health systems are under intense pressure: entrenched silos, constrained resources, and short-termism have created a challenging backdrop for innovation.
For example, a Health Foundation review found that local implementation hubs perceived their systems as lacking a shared vision, innovation capability and infrastructure; without system readiness, even strong ideas cannot be adopted at scale.
- No shared logic. When providers, commissioners, participants and innovators do not share a clear logic model – meaning the purpose isn’t consistently understood – alignment suffers.
Misalignment of stakeholder expectations is, as one study found, a key limiting factor in health service innovation. In effect: if everyone isn’t aligned early, buy-in becomes ad hoc, not intentional.
- Fuzzy success & late evaluation. Too often, interventions are launched without a clear success framework, with missing baselines and evaluation treated as an afterthought. Yet without robust evidence it’s impossible to decide whether to scale or stop.
“Without evaluative evidence,” as the NHS Strategy Unit puts it, “all we have is advocates and champions. Without evidence, we don’t know whether an innovation should be scaled up and spread.” The AHSN network’s guide to evaluation emphasises that evaluation should start before or as the innovation is deployed, not as a bolt-on afterwards.
- Poor system fit. Even a great innovation may falter if the host system — workforce, processes, funding, and pathways — can’t absorb it. One review of scaling pilot-based evidence found that key challenges include configuring the innovation to its context and transitioning the evidence so the system is ready. If deployment fails to consider how the innovation sits in the system, the fit will be poor and spread is unlikely.
- “Pilotitis” – short-term pilots with no sustainability route. Many innovations are given a short-term funding window and no embedded plan for sustainability or scale. The result is a promising pilot which may look good in isolation but disappears once the project funding ends. This is a widely reported barrier to scale.
How to create the honest evaluation that transforms outcomes
So, if the problems are well-known, what practical steps can leaders take to unlock the potential of innovation? Honest, built-in evaluation is a critical differentiator.
- Evaluation built-in by design from day zero. Start with an impact-first logic model: define the aims → determine the measures → set a review cadence. Establish explicit success criteria and baselines. The AHSN practical guide emphasises exactly this: “The outcomes you perceive as important may not always resonate with your potential NHS customers… the outcomes that your innovation will produce need to be identified from the start.” By planning evaluation from the start, you avoid unclear metrics and the “we’ll measure later” trap.
- Quantify outcomes and capture individual experiences. Evaluation should capture quantitative (outcomes, utilisation, cost) and qualitative (staff/participant experience, unintended effects) data.
As a recent article from the regional health innovation network puts it, a “broad mix of methods — quantitative and qualitative [are vital to success] including surveys, interviews, ethnographic research … economic evaluation.” By combining both quantitative and qualitative methods, you build a richer picture of how the innovation works — and where it doesn’t.
- Run learning loops. Be transparent about what works and what doesn’t: use frequent reviews to pivot, refine or stop. Evaluation shouldn’t be a one-time checkbox at the end. “A real-world evaluation should,” as the Health Innovation Network says, “not be a one-off project with a completion date or endpoint, but the start of a continuing process of developing evidence as your innovation is more widely adopted”. Build in iteration, feedback, and continuous learning.
- Co-produce evaluation with all stakeholders. Involve participants, clinicians, managers, and commissioners early. Their input ensures that the evidence reflects real-world constraints and adoption pathways. When stakeholders are part of evaluation design, you build shared logic, alignment and commitment; people who help define “good” are far more likely to deliver it.
- Design for scale & staying power. Scaling requires credible evidence and a feasible deployment model in the system. From the start, plan not just for pilot success but for embedding and spread. Turn results into a credible business case. Resist the “success theatre” of superficial reporting that hides limited impact. Translate your results into a minimum viable specification for spread – datasets, roles, training, SOPs, costs – and publish a commissioning-ready evidence pack.
Lead with evidence to create the signals for scale.
When evaluation is built in, honest, and rigorous, leaders unlock major opportunities. With credible evidence and system fit, innovations move beyond one-off pilots to adoption at scale.
Leaders who surface honest results (including where things didn’t work) become trusted voices for implementation decisions, rather than cheerleaders. This builds influence and credibility.
Early signals reduce dead-ends, build organisational memory, strengthen commissioner confidence and cultivate a culture of continuous improvement.
And with investors and commissioners looking for credible evidence of benefit in real-world settings, robust impact data creates new routes to funding — public, charitable and private.
In brief: better evaluation upfront means fewer “what now?” moments later.
Escaping the pilot trap
If you’re responsible for scaling innovation in the UK health system – whether as a developer, evaluator, commissioner or provider – the message is clear: innovation without built-in, honest evaluation is high risk.
But with honest evaluation – to define success early, measure what matters, learn fast, involve stakeholders, and plan for scale – the chances of adoption at scale increase hugely. Future Care Capital can help. If you’re planning or delivering an innovation programme, we’d love to explore how honest, built-in evaluation could help you learn and adapt. Get in touch with Andy Jones at andy@futurecarecapital.org.uk.
Andy Jones is Evaluation Lead at Future Care Capital.