Practitioners across the health system value robust evidence. But evaluation that adds friction without adding insight can wear them down, and poorly designed evaluation delays progress.
To drive adoption, evaluation needs to become a light-touch engine for learning and trust — clarifying impact, supporting commissioning, and creating the confidence to scale.
Three recurring patterns show when evaluation isn’t working: forms without feedback, metrics that miss the point, and reports that never reach decision-makers.
The solution is to redesign evaluation so that it supports decision-making, minimises burden, and clearly maps the routes to adoption. Practical steps include:
Design for impact from day one. Start with shared outcomes linked to system priorities — productivity, access, quality, equity. Build an explicit theory of change, agree SMART indicators, and limit data capture to what truly informs decisions. Define baselines and a counterfactual. Pre-registering plans helps ensure transparency and stakeholder alignment from the start. As the AHSN guidance puts it, “the outcomes you perceive as important may not always resonate with your potential NHS customers.” Early definition avoids that mismatch.
Make data useful and light-touch. Integrate measurement into existing workflows. Where possible, automate extraction through Electronic Health Records (EHRs) and shared data platforms. Combine quantitative evidence with qualitative insight — staff, patient, and system perspectives — as recommended by the Health Innovation Network. Check implementation fidelity, track subgroup and equity effects, and maintain a visible budget-impact line. Regular review cycles with simple, shared dashboards turn data into collective intelligence instead of a compliance exercise.
Build trust and adoption pathways. Co-produce evaluation frameworks with clinicians, patients, managers and ICB leads. Use independent evaluators to safeguard objectivity. Embed information governance “by design” so evidence packages are ready for scrutiny. Translate results into a commissioning-ready bundle — business case, SOPs, cost model, and a scale plan aligned to Accelerated Access Collaborative principles.
Standardise what you can. Use consistent data definitions and align with recognised frameworks such as NICE ESF or local AHSN templates. Comparable data reduces duplication and builds confidence among decision-makers reviewing multiple pilots.
Plan replication early. Document not just results, but context — settings, resources, dependencies and enablers. A clear blueprint helps other sites reproduce outcomes with sensible local adaptation. This transparency transforms isolated success into scalable learning.
Upskill the frontline. Create local evaluation champions who understand both operational flow and evidence needs. Provide short, focused training in impact measurement and data use. When staff see the loop between their data and real change, engagement and data quality improve.
Publish the whole truth. Open publication of null and negative findings builds credibility. The NIHR policy on dissemination makes this explicit: sharing all results, not only the positive ones, strengthens system learning and avoids repetition of avoidable mistakes.
Fewer, better evaluation measures avoid duplication and surface true performance signals. When data capture aligns with professional judgement, buy-in rises and conversations move from defending activity to understanding impact. That clarity translates into stronger decisions: transparent evidence speeds commissioner approval, supports investment, and reduces risk. Packaging results into a commissioning-ready evidence pack then helps innovations move through assurance with fewer delays.
And as evidence becomes comparable across sites, continuous review shows what works, for whom, and at what cost. This shared visibility supports measurable productivity gains and fairer allocation of resources. Clear outcomes and common definitions make procurement cleaner: commissioners can specify requirements precisely, suppliers can respond against the same yardsticks, and post-award assurance becomes simpler.
Rapid-cycle learning also closes the gap between pilot insight and operational benefit. Each iteration feeds the next, shortening time-to-value and building a repeatable route from pilot to programme. With consistent evaluation standards in place, organisations can grow a portfolio of investable, spread-ready innovations – each with the evidence and system fit required for scale.
Evaluation fatigue isn’t inevitable. When designed with clarity and proportion, evaluation becomes a strategic tool: lightweight enough to fit daily practice, rigorous enough to earn trust, and transparent enough to drive scale.
Better evaluation doesn’t slow innovation — it’s what allows it to spread. If you’re planning or delivering an innovation programme, we’d love to explore how an evaluation like this could help you learn and adapt. Get in touch with Andy Jones at andy@futurecarecapital.org.uk.
Andy Jones is Evaluation Lead at Future Care Capital.