The road from successful health innovation to adoption at scale is fraught with friction. According to Heriot-Watt University, that journey takes 17 years on average in the UK, meaning innovations developed now may not reach patients until the 2040s.
A large part of this friction comes down to how we measure success. Too often, we’re trying to evaluate complex, evolving services using static, linear metrics designed for mature programmes. If we want to build a self-evolving health system, we need an evaluation framework that evolves with it.
Most stakeholders face a fundamental mismatch in timing. A complex innovation, such as a remote monitoring platform, might need 24 months to demonstrate a “hard” ROI like a definitive drop in A&E admissions. But pilot funding cycles often demand proof of value within 6 to 12 months.
This creates an “attribution challenge”. In a system as complex as the NHS, proving that one specific digital tool caused a drop in readmissions is incredibly hard when staffing levels, seasonality, and community support are changing at the same time.
Take a new remote monitoring platform for COPD. If we evaluate it solely on “reduced hospital bed days” in the first six months, the data might be too noisy to show a clear trend, making the pilot look like a failure. But the real value in that early phase might be found in increased patient confidence or faster triage: metrics that strongly predict long-term success but are often overlooked in traditional audits.
To bridge the gap between a promising pilot and a scalable solution, we need to shift our focus from strictly auditing results to evaluating the conditions for success. This approach, often called “Developmental Evaluation,” focuses on three core shifts.
1. Assessing the “Soil” Alongside the “Seed”
Before we ask, “Did the tool work?”, we have to ask, “Is the environment ready to support it?” This helps us tell the difference between a flawed product and an unprepared system.
We need to look closely at workflow integration. Instead of, say, tracking logins, we should measure how many hours per week of clinician capacity an innovation frees up. If a Primary Care triage AI, for example, is clinically accurate but increases the Practice Manager’s admin burden by adding clicks, adoption is going to stall. If a mental health app requires a GP to manually copy-paste data into the Electronic Patient Record (EPR), the lack of integration is the barrier, not the therapy itself.
2. Balancing Lagging Indicators with Leading Indicators
Lagging indicators (outcomes) tell us where we’ve been, while leading indicators (behaviours) tell us where we’re going. A balanced scorecard for health innovation needs both.
For a diabetes prevention programme, for example, weight loss is a lagging indicator that takes months to show up. A valid leading indicator, however, might be “attendance rates at week 3” or “peer-support interactions.”
High interaction now is a strong predictor of weight loss later; the same logic applies to staff advocacy. In a hospital ward introducing digital vitals monitoring, are the nurses recommending it to agency staff? If the workforce trusts the tool, safety improvements usually follow.
3. Implementing Short-Cycle Learning Loops
Finally, rather than waiting for a final report at Month 12, we should use data to drive continuous improvement. This moves us from annual audits to a “pulse check” approach.
Imagine an Integrated Care System (ICS) commissioning a Virtual Ward service. By reviewing patient feedback themes every four weeks, they might notice patients are struggling with tablet battery life. They can fix this immediately via a hardware change, rather than noting “low patient adherence” in a report six months later.
Scaling innovation is a team effort. To make this framework work, different roles within the health and care ecosystem need to adapt their approach.
Commissioners can help by structuring contracts that allow for “adaptive milestones.” Instead of penalising a change in direction, let’s incentivise the rapid identification of what isn’t working so it can be fixed before resources are wasted.
Evaluators should adopt the role of a “critical friend,” helping the project team identify leading indicators early on. It’s vital that qualitative data—such as patient stories and staff sentiment—is weighted alongside quantitative data during the setup phase.
Innovators, meanwhile, must be transparent about the “messy middle.” If adoption is slower than predicted, showing the evidence of why (e.g., “We found the clinic WiFi was blocking the signal”) builds trust. It frames the problem as a systemic hurdle to be solved together, rather than a commercial failure.
We don’t need to lower the bar for evidence in the health sector; we need to broaden our understanding of what constitutes “value” during the scaling journey. By measuring readiness, engagement, and adaptation, we can turn promising pilots into sustainable solutions that deliver for beneficiaries long-term. FCC can help you get there. If you need support to evaluate, learn and adapt, contact Andy Jones at andy@futurecarecapital.org.uk.