Radiologists generally diagnose dementia from brain scans. The diagnosis relies on subjective judgement, however, and is prone to error. Hugh Pemberton of the UCL Queen Square Institute of Neurology talked to Kim Thomas about how AI tools can help with faster, more accurate diagnosis
"Quantitative volumetric reporting tools take an MRI scan and automatically process it through several steps to draw out specific quantities. For instance, they will split the brain up into lots of different parts, and then they will run volumetry on it, to find out the volume ¬– millimetres cubed – of that part of the brain. So you have a number and then you can compare this number to a database of thousands of healthy people." Hugh Pemberton, research fellow at University College London’s Queen Square Institute of Neurology
Dementia can be hard to diagnose. Typically, a GP will refer a patient to a specialist, who will carry out a memory assessment, often followed by an MRI scan, which can reveal atrophy in parts of the brain that may indicate dementia. An MRI can also rule out other causes of a patient’s problems, such as strokes or brain tumours.
Analysing an MRI scan for signs of dementia is not a precise science, however. A radiologist uses their skill and experience to visually assess a scan and make a subjective judgement.
Yet developments in artificial intelligence and quantitative volumetric reporting offer a way of bringing more objectivity to the assessment of a scan. Quantitative volumetric reporting tools, or QReports, measure the volume of individual parts of a patient’s brain and compare them against healthy populations of the same age or, where a previous scan is available, measure how the patient’s brain has changed between two MRIs. They provide “a valuable tool for increasing objectivity in dementia assessments,” says Hugh Pemberton, a research fellow at University College London’s Queen Square Institute of Neurology.
Pemberton explains: “These quantitative volumetric reporting tools take an MRI scan and automatically process it through several steps to draw out specific quantities. For instance, they will split the brain up into lots of different parts, and then they will run volumetry on it, to find out the volume – millimetres cubed – of that part of the brain. So you have a number and then you can compare this number to a database of thousands of healthy people. And then you match them with their age and sex, and you can say, ‘This person is x percentile compared to a healthy population of the same age and sex.’”
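To make the percentile step concrete, here is a minimal Python sketch of the comparison Pemberton describes. Everything in it is hypothetical: the region (left hippocampus), the patient volume and the normative sample are invented for illustration, and real QReports use proprietary pipelines and much richer normative models.

```python
# Illustrative sketch only; volumes, region and reference data are invented.
import numpy as np
from scipy import stats

def volume_percentile(patient_volume_mm3: float, reference_volumes_mm3) -> float:
    """Rank a patient's regional volume (mm^3) against an age- and
    sex-matched healthy reference sample."""
    return stats.percentileofscore(reference_volumes_mm3, patient_volume_mm3)

# Hypothetical normative sample: left hippocampal volumes (mm^3) from
# 2,500 healthy controls of the same age and sex as the patient.
rng = np.random.default_rng(0)
reference = rng.normal(loc=3200, scale=350, size=2500)

patient = 2400.0  # hypothetical segmented left hippocampal volume, mm^3
print(f"Patient sits at the {volume_percentile(patient, reference):.1f}th percentile")
```

A real report would repeat this for every segmented region and present the percentiles alongside the images, rather than as a single number.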
These tools are not used on their own – they always supplement the radiologist’s own visual rating, Pemberton says. In some forms of dementia there are distinct patterns of atrophy, he adds: “Alzheimer’s has particular brain shrinkage in the medial temporal lobes at first, whereas frontotemporal dementia can also involve shrinkage in the frontal lobes and the temporal lobes, and posterior cortical atrophy usually begins in the posterior parts of the brain.”
A neuroradiologist who visually assesses an image will spot those signs of atrophy, but a quantitative volumetric reporting tool can highlight the pattern of atrophy very quickly, priming the radiologist to check whether their own assessment is in line with the report. The tools are designed, says Pemberton, to “improve diagnostic accuracy, and improve diagnostic confidence.” Many dementia patients “go through a long process and can face a great deal of diagnostic uncertainty before they have a more certain diagnosis.”
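One way to picture that highlighting step is as a set of simple rules over the region percentiles. The sketch below is hypothetical: the threshold, region names and pattern rules are invented to echo the patterns Pemberton lists above, not taken from any real product.

```python
# Hypothetical pattern-flagging logic; cut-off and rules are illustrative only.
LOW = 5.0  # flag regions below the 5th percentile for age and sex

PATTERNS = {
    "possible Alzheimer's-type atrophy": {"medial_temporal"},
    "possible frontotemporal-type atrophy": {"frontal", "temporal"},
    "possible posterior cortical atrophy": {"parietal", "occipital"},
}

def flag_patterns(percentiles: dict[str, float]) -> list[str]:
    """Return the atrophy patterns whose regions are all abnormally small."""
    low_regions = {region for region, pct in percentiles.items() if pct < LOW}
    return [name for name, regions in PATTERNS.items() if regions <= low_regions]

# Hypothetical per-region percentiles from a volumetric report
report = {"medial_temporal": 2.1, "frontal": 38.0, "temporal": 45.0,
          "parietal": 60.0, "occipital": 72.0}
print(flag_patterns(report))  # ["possible Alzheimer's-type atrophy"]
```

The flag is a prompt for the radiologist’s own reading, not a diagnosis, which is why the tools are always used alongside visual rating.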
That wait for a definitive answer can be as long as three years. Adoption of these tools could mean that patients receive a “faster, and more accurate diagnosis, so it could mean fewer repeat visits to different specialists,” enabling them to access help earlier.
How do neuroradiologists know which QReports are most effective for the job? An apparent lack of research on the different tools available prompted Pemberton and his colleagues to conduct a systematic review, published in Neuroradiology at the end of last year, which examined the published evidence for 17 commercially available, regulatory-approved QReports.
The tools all used databases of healthy populations as their comparison point, with database sizes ranging from 100 people to about 8,000 and a midpoint of 2,000-3,000. The most effective tools will be those drawing on a large population that is diverse in age, sex and ethnicity; a tool whose reference database is drawn mainly from one ethnic group may not translate to patients of other ethnicities.
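A toy example shows why the make-up of the reference database matters. The sketch below fits a simple age-adjusted normative model to an invented healthy sample; the coefficients and data are hypothetical, and real products use far larger databases and more sophisticated models, but the same logic applies: if the reference sample does not match the patient’s demographic, the resulting score is biased.

```python
# Hypothetical age-adjusted normative model; all data here are simulated.
import numpy as np

rng = np.random.default_rng(1)

# Simulated healthy reference: hippocampal volume declines with age.
ages = rng.uniform(50, 90, size=3000)
volumes = 4000 - 15 * ages + rng.normal(0, 250, size=3000)

# Expected volume as a linear function of age, plus residual spread.
slope, intercept = np.polyfit(ages, volumes, deg=1)
residual_sd = np.std(volumes - (slope * ages + intercept))

def z_score(volume_mm3: float, age: float) -> float:
    """Standard deviations below/above the age-expected volume."""
    return (volume_mm3 - (slope * age + intercept)) / residual_sd

print(f"z = {z_score(2400, 72):.2f}")  # roughly -2: small for age
```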
For 11 of the 17 tools identified, the companies had published some form of technical validation of their methods, but only four had published clinical validation of their QReports in a dementia population. There is a “significant evidence gap,” the review concluded, “regarding clinical validation, workflow integration and in-use evaluation of these tools in dementia MRI diagnosis.”
During his PhD, Pemberton and the team at UCL created a QReport of their own and conducted a clinical accuracy study on it, using nine radiologists at three different experience levels. The tool led to an “improvement in the diagnostic accuracy, and diagnostic sensitivity for picking up Alzheimer’s disease vs healthy controls,” he says, but it also increased diagnostic confidence – in this case, false confidence – when junior raters incorrectly assessed a patient’s brain. Pemberton thinks this may be because the more junior raters were relying too heavily on the tool rather than on their own visual assessment.
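For readers unfamiliar with the metrics in such a reader study, the toy calculation below shows how diagnostic accuracy and sensitivity are computed from raters’ calls. The ratings are invented for illustration; they are not the UCL study’s data.

```python
# Toy reader-study metrics; the ratings below are fabricated for illustration.
def accuracy_and_sensitivity(truth, calls):
    """truth/calls: 1 = Alzheimer's disease, 0 = healthy control."""
    tp = sum(t == 1 and c == 1 for t, c in zip(truth, calls))
    tn = sum(t == 0 and c == 0 for t, c in zip(truth, calls))
    accuracy = (tp + tn) / len(truth)        # all correct calls / all cases
    sensitivity = tp / sum(truth)            # correct AD calls / all AD cases
    return accuracy, sensitivity

truth        = [1, 1, 1, 1, 0, 0, 0, 0]  # hypothetical ground truth
without_tool = [1, 0, 1, 0, 0, 0, 1, 0]  # a rater unaided
with_tool    = [1, 1, 1, 0, 0, 0, 0, 0]  # the same rater with a QReport

print(accuracy_and_sensitivity(truth, without_tool))  # (0.625, 0.5)
print(accuracy_and_sensitivity(truth, with_tool))     # (0.875, 0.75)
```

Confidence is measured separately from accuracy in such studies, which is how a tool can raise both correct diagnoses and misplaced confidence at the same time.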
As his review shows, these tools need more thorough assessment so that clinicians can make informed decisions when purchasing commercial QReports. There has been an “explosion” in the availability of AI tools for assisting the diagnosis of various conditions with medical imaging, which makes good-quality research all the more important. His Neuroradiology paper, Pemberton says, was a “call to arms” to the companies involved to rigorously validate their products.
What is needed now, he says, is for a hospital or group of hospitals that has been using QReports for an extended period to carry out a “proper health economics assessment” that would look at “how fast all these patients receive the diagnosis, whether it reduced the number of visits and diagnostic tests required, how much money it has saved for this particular hospital and how many more cases this radiologist has been able to assess in this time period with the benefit of this report and its clinical accuracy.”