Health care organisations wanting to adopt artificial intelligence (AI) tools often lack good information about which ones are safe and effective. We talked to Haider Husain, chief operating officer of Healthinnova, about how a new BSI standard is addressing the problem
“We don’t want to stifle innovation – we want to nurture innovation.” Haider Husain, chair, BSI panel on developing a standard for the use of AI in health care
One of the most exciting developments in health care tech in the past five years has been the increasing use of artificial intelligence (AI) in a wide range of applications – from tools that can rapidly and accurately detect certain diseases in radiography scans to therapeutic chatbots designed to help people with low-level mental health problems.
AI’s huge potential is making it increasingly attractive to health care providers. With a vast range of products to choose from, however, it can be hard for purchasers to assess the quality and safety of any individual piece of AI software.
It was this lack of quality control that inspired Haider Husain to propose that the British Standards Institution (BSI) create a standard on AI in health care. Five years ago, Haider, whose career in IT has included stints at Microsoft and GE Healthcare, started a company called Healthinnova, which offers consultancy on the adoption and implementation of digital health technologies.
Haider sits on a BSI panel called Healthcare organisation management, and his suggestion for the new standard was eagerly accepted. He was invited to chair the new panel, which has nearly 30 members, and is made up of representatives from the technology industry, the NHS, academia and the fourth sector, including FCC’s head of policy and research, Dr Peter Bloomfield. Care was taken, says Husain, to make the panel “as gender-balanced and as diverse as possible.”
For two years, members met regularly to develop the standard, Validation framework for the use of AI in healthcare, and in early October a draft was issued for consultation. To ensure input from a wide range of people, the panel carried out a mini-consultation with stakeholders such as academics and suppliers four months before the draft was published, and also asked a panel of patients what they would like to see from the standard.
Its aim is to act as a kitemark of quality, so that when purchasers decide to invest in AI software, they can require potential suppliers to conform to the standard. Any supplier who wanted BSI accreditation would have to answer questions about their product from an independent auditor. “It is a bit of a wild west at the moment,” says Husain. “We don’t want to stifle innovation – we want to nurture innovation.”
The standard covers all aspects of creating a piece of AI software: inception, development, validation, deployment and monitoring, as well as human factors and ergonomics. Under the inception heading, for example, the supplier must show that there is a health care need for the tool, while under validation, suppliers are expected to show that the clinical effectiveness of the tool is supported by research evidence.
One element that was particularly important to include was a requirement to guard against bias. It has recently become clear that some health care devices have an in-built, if unintentional, bias against certain groups: pulse oximeters, for example, have been found to work less well on black skin. AI software that has been developed using data from homogeneous groups can inadvertently introduce bias against other groups. “You could show documentation that it’s clinically effective because you’ve tested it against 400 people, but they just so happen to be all white, so you need that extra bit of information: Who did you test against? What was your process?” Husain says.
The standard also includes a requirement for suppliers to document the carbon impact of their product. This was the subject of much discussion within the panel, but Husain and other members, including FCC’s representative Peter Bloomfield, felt it was important to include.
Adoption of the standard by industry will rely on organisations buying AI products, such as NHS trusts and private providers, including it as a requirement in their procurement rules, Husain says. Initially, he expects that some AI suppliers might regard the new standard as another unnecessary hurdle to clear, but he thinks in time companies will see the benefits and it will be widely adopted: “Once it’s embedded, it will help suppliers put the evidence in a structure that then is repeatable. Once you’ve collated that evidence, if you’re doing it as you’re building it, then you don’t have to do it again.”
The draft standard is currently out for consultation. The next stages are to carry out comment resolution in December and January and then publish a final draft in February 2023. If you’d like to comment on the draft, you can read it here. The closing date for comments is 5 December.
FCC is pleased to have been involved in the production of this new standard on AI in health care. AI technology is set to have a transformative impact on health care, but without a means of measuring the effectiveness, safety and rigour of a particular product, there is a real risk that health care providers will adopt technologies that are at best ineffective or at worst harmful to patients. The standard is designed to be achievable for maturing companies and to enable quality without stifling innovation. We encourage people to comment on the draft standard, and hope that it will be widely adopted across the industry.