Long Read: Perspectives on Regulating and Legislating Software Innovation post-Brexit

27th October 2020

John F Kalafut PhD – Chief Digital and Informatics Scientist, GE Healthcare.

As the Medicines and Medical Devices Bill continues to make its way through Parliament, John Kalafut explores the vitally important role of regulation in safeguarding patients as healthcare systems around the world become more and more dependent upon data-driven medical devices. GE Healthcare is an industry partner involved in the work of the National Consortium of Intelligent Medical Imaging, with which FCC has been working over recent months to improve the distribution of value from health data partnerships in the UK.

If one were to believe the zeitgeist and some of the breathless coverage in the trade and popular press, “AI” is driving the delivery and optimization of patient care. “Computers can read eye scans better than ophthalmologists! AI detects sepsis! Machines detect cancer faster and more accurately than human radiologists! Smartphones can tell you your risk of a heart attack! Will radiology and pathology exist in 20 years?” Undoubtedly, computational methods using machine learning techniques hold extraordinary potential to address acute challenges facing healthcare systems around the world – access, quality, precision and doing more with less. Realistically, though, the diffusion of any technology into healthcare lags that of consumer and other technology sectors – largely because of the need to ensure the new methods are clinically useful, safe and provide tangible value greater than an existing technology (diagnostic, therapy or monitoring).

Providers, administrators and patients all require and correctly demand transparency of clinical evidence, fair pricing, equitable access to valuable health technology and assurance that the software – either as a standalone application or embedded within devices/systems – is safe, and that any potential harms are well understood by their physicians and caretakers. National governments, by nature of their regulatory oversight mission, are looked to as the guardian of patient safety, legislating and enforcing regulatory frameworks to effectively manage the risk-benefit of new medical technologies. You may ask, though: aren’t those frameworks too rigid, old-fashioned and inapplicable in the brave new world of Artificial Intelligence? We use our smartphones for so many daily tasks, so why can’t I, as a healthcare professional, access and use new apps? Why should governments intervene when Apple or Google are clearly much nimbler and more efficient than any government agency?

These questions are of relevance to U.K. citizens as Brexit looms and various regulatory policies that were informed by EU-level frameworks and guidelines are transposed into U.K. law. Relevant to the development and diffusion of AI in medical technology, the Medicines and Medical Devices Bill currently making its way through Parliament proposes the regulation of software systems. I applaud the intent to catalyze innovation and development pace, but it is important to balance these with appropriate safeguards and risk-based paradigms. To understand whether or how healthcare AI should be managed differently, it is important to understand the definition of medical devices, software in a medical device, software as a medical device, and how software is regulated.

Per international standards (IEC 80001-2-2, ed. 1.0 (2012-07)), the definition (emphasis mine) of a medical device is:

Medical device: any instrument, apparatus, implement, machine, appliance, implant, in vitro reagent or calibrator, software, material or other similar or related article:

a) intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the specific purpose(s) of:

– diagnosis, prevention, monitoring, treatment or alleviation of disease,

– diagnosis, monitoring, treatment, alleviation of or compensation for an injury,

– investigation, replacement, modification, or support of the anatomy or of a physiological process, […]

– providing information for medical or diagnostic purposes by means of in vitro examination of specimens derived from the human body; and


So, regardless of the technological means – software, circuitry, or chemistry – if the solution performs any of the functions in the definition above, it is a medical device. This includes a software application that runs by itself, like a web or smartphone app, not necessarily embedded in a hardware-based offering. The International Medical Device Regulators Forum (IMDRF) defines such “Software as a Medical Device” as: “software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.”

It may seem difficult to ascertain what constitutes medical device software, when it should be regulated, and whether it should simply be considered an “IT” application. The utility of regulatory frameworks is that they help regulators, policy-makers and innovators determine how to manage the risk inherent in medical device software built on ever-changing technologies. Medical device development standards have been internationally recognized and adhered to by thousands of manufacturers since the 1970s, and the principles have been codified into statutory regulations by health departments around the world.

Risk is defined as the product of the occurrence probability and the severity of potential harm arising from a defect. When developing a software tool to be used in a healthcare context, both ethical engineering practice and regulatory frameworks require you to assess and conceptualize all the risks associated with your medical innovation. What is the potential for harm of, say, a mobile app used by a healthcare professional? Let’s use an AI-assisted cardiology risk app as an example.
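That definition of risk can be sketched as a simple calculation. In this illustrative sketch, the 1–5 rating scales and the acceptability threshold are my own assumptions for demonstration, not values taken from any standard or regulation:

```python
def risk_score(probability: int, severity: int) -> int:
    """Risk = occurrence probability x severity of harm.

    Both inputs are rated on an assumed 1 (low) to 5 (high) scale.
    """
    if not (1 <= probability <= 5 and 1 <= severity <= 5):
        raise ValueError("ratings must be on a 1-5 scale")
    return probability * severity


def is_acceptable(probability: int, severity: int, threshold: int = 8) -> bool:
    """A defect's risk is acceptable only if its score is below the threshold.

    The threshold of 8 is an arbitrary illustration; real programmes derive
    acceptability criteria from their own risk-management process.
    """
    return risk_score(probability, severity) < threshold


# A frequent, serious defect scores 16 and would be unacceptable here;
# a rare defect with moderate severity scores 3 and would pass.
print(risk_score(4, 4), is_acceptable(4, 4))  # 16 False
print(risk_score(1, 3), is_acceptable(1, 3))  # 3 True
```

The point of the sketch is that neither factor alone decides the outcome: a low-probability defect with severe consequences can still demand mitigation.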

The cardiology app is intended to be used by a patient or group of patients and their general practitioner to identify and manage the most ‘at-risk’ patients. The mobile app will use heart-rate information and movement activity collected by the smartphone’s electronics and software. Additional information used by the app’s algorithm includes the patient’s medical history and dietary information – data entered into the smartphone via the cardiac app or another app. Furthermore, groups of patients in that GP’s practice will be monitored and classified into different ‘at-risk’ groups so she can quickly review the health status and cardiovascular risk of her patients and receive notifications of patients who might have a life-threatening event in the next six hours. This sounds like a very useful and potentially life-saving bit of software, made possible only by our amazing telecommunications, software and computer-algorithm know-how and the widespread use of smartphone technology!

As mentioned, regulatory frameworks and legislation exist to help define and guide medical technology development and deployment, and they are fundamental to protecting the health and safety of citizens by making manufacturers systematically apply the appropriate amount of rigour and evaluation while developing medical products. The fact that a medical product is software, is an “app” or has “AI” embedded doesn’t change the reality: if there is a potential for harm arising from the use of the software in the diagnosis or treatment of disease, it is still a medical device and should be designed as such.

Using my cardiovascular management “app” example, you may see the utility of clear regulatory statutes and policy. A developer of the cardiovascular app might think, “Well, we are only giving suggestions to the doctor about which patients are unhealthy or at risk. We are just helping them to decide what they would do if they were watching the patient in their clinic, so my app shouldn’t be burdened by all these medical device regulations, testing and documentation requirements.”

How should the developer arrive at a more informed decision? Refer to the definitions of medical devices and medical device software referenced earlier. Is the device going to be used in the “…diagnosis, prevention, monitoring, treatment or alleviation of disease”? From the capabilities and features of the app I outlined, the answer would be yes! Is it a software-only device? Most likely, yes, although some sensing technology would be implicated in the design, e.g. technology embedded in a smartwatch to record the patient’s electrical heart rhythms. A medical technology developer would then consider the potential harms, and thus the risk, of the medical product.
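The qualification questions walked through above can be expressed as a short checklist. This is an illustrative simplification of my own, not a legal or regulatory test; the field names and the two-question framing are assumptions for demonstration:

```python
from dataclasses import dataclass


@dataclass
class SoftwareProduct:
    # Does the manufacturer intend it for "diagnosis, prevention,
    # monitoring, treatment or alleviation of disease"?
    has_intended_medical_purpose: bool
    # Does it perform that purpose without being part of a
    # hardware medical device (per the IMDRF definition)?
    independent_of_hardware_device: bool


def is_medical_device(p: SoftwareProduct) -> bool:
    # Intended medical purpose, not the technology used, is what
    # qualifies a product as a medical device.
    return p.has_intended_medical_purpose


def is_software_as_medical_device(p: SoftwareProduct) -> bool:
    return is_medical_device(p) and p.independent_of_hardware_device


# The hypothetical cardiology risk app: it monitors disease and runs
# on a consumer smartphone rather than inside a hardware device.
cardio_app = SoftwareProduct(
    has_intended_medical_purpose=True,
    independent_of_hardware_device=True,
)
print(is_medical_device(cardio_app))              # True
print(is_software_as_medical_device(cardio_app))  # True
```

Answering “yes” to the first question is what pulls the app into medical device territory, however it is packaged or distributed.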

What are some examples of errors and the ramifications to the users of flaws in the app?

  • What could happen if a design defect in the software leads to a miscalculation of the cardiovascular risk for patients within a certain demographic or racial subgroup? Impending heart attacks or other issues may not be flagged by the app and communicated to the physician watching her group of patients. That patient or group of patients may be lulled into a false sense of ‘security’ that they are being monitored, putting themselves at an unknown, higher risk of having their symptoms and conditions ‘missed’ prior to a life-threatening or life-ending event.
  • What if the AI algorithm analysing the patient data was trained on too few example data sets and therefore performs poorly 30% of the time? Patients are at higher risk because the monitoring physicians assume their patients are better protected than they are.
  • Consider the impact of poor user-interface design. What if a poorly designed interface on the app systematically causes patients to enter the wrong type of data into a data-entry field and thus ‘confuses’ the algorithm? Likewise, a confusing interface might present data to the GP in a way that leads her to misread the status of her at-risk population. These could all have serious consequences for the users of the app.

Without the proper regulatory ‘guard rails’ and guidance from health agencies, it is conceivable that a product could be released via an “app store” without the developing engineers systematically thinking about, addressing, testing and documenting how their technology handles those concerns. The ‘app store’ operator is not responsible for performing validation of medical technologies, although the general population may think so. The end-user physician may believe that the application is robust and well designed, but without appropriate documentation, registration and publicly accessible data, she would have no easy way of determining the safety and efficacy of the AI-enabled application.

For these reasons, it is imperative that the UK Parliament ensures that regulatory rigour and safeguards are in place to help ensure the safety of UK citizens. Risk-based regulatory frameworks are not intended to hinder or unnecessarily slow down healthcare innovation. It is a false choice to think one can’t have safe, regulated and innovative medical device software.

Would you feel comfortable – when we are able to routinely fly again – knowing that the software controlling the autopilot in your airliner cruising around a thunderstorm at 12,000 metres wasn’t designed, tested and deployed with rigour in well-defined scenarios? You deserve to be assured that medical technology used in life-or-death decisions has been developed with similar safeguards.