In this digital era, artificial intelligence (AI) is lending an artificial hand to many fields. Now we’re finding applications for AI in medical imaging.
Think about the last time you encountered medical imaging. Maybe you remember the magical moment you saw your child for the first time through an ultrasound. Or maybe you needed an X-ray after falling off your bike.
Medical imaging is a cornerstone of healthcare. We use it to monitor health, diagnose conditions, inform treatment and observe disease progression.
At the Australian e-Health Research Centre (AEHRC), we’re developing tools to facilitate medical imaging. Our aim is to develop technology that enables the digital transformation of healthcare to improve services and clinical treatment for Australians.
Artificial intelligence for medical imaging
Medical imaging uses advanced technologies to visualise internal parts of the body. These include X-rays, magnetic resonance imaging (MRI) scans, computed tomography (CT) scans, positron emission tomography (PET) scans, ultrasounds and more.
The images produced are highly complex, so interpreting them is often difficult and time-consuming.
At the AEHRC, our researchers are hard at work developing tools that facilitate image interpretation. They’re creating AI tools for other purposes too, such as improving tissue visualisation to allow more precise treatment.
We’re also improving AI technology to enable further applications in medical imaging. We hope to develop software that helps health professionals diagnose, treat and monitor the progression of disease more confidently.
Dr Aaron Nicolson, an AI and machine learning researcher at the AEHRC, explains that some members of the team work closely with health professionals to understand what tools would be useful.
“Together, you identify a problem that needs solving. Maybe you want to diagnose COVID-19 from chest X-rays, for instance,” Aaron said.
“Then you need data. It could be a publicly available dataset, or it could come from a clinical partner.”
Researchers use the data to train machine learning models, a form of AI. The result is a tool that can make decisions and predictions about new inputs based on what it has learnt.
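The train-then-predict cycle Aaron describes can be sketched in miniature. Everything below is illustrative: the two-feature "scans", the from-scratch logistic regression and the test inputs are hypothetical stand-ins for teaching purposes, not the AEHRC's actual pipeline, which trains far larger models on real image data.

```python
import math
import random

random.seed(0)

# Toy stand-in for imaging data: each "scan" is reduced to two
# hypothetical features (say, a mean-intensity and a texture score).
# Label 1 means the condition is present.
def make_scan(label):
    if label == 1:
        return [random.gauss(2.0, 0.5), random.gauss(1.5, 0.5)], 1
    return [random.gauss(0.0, 0.5), random.gauss(0.0, 0.5)], 0

train = [make_scan(i % 2) for i in range(200)]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

# Minimal logistic regression trained by per-sample gradient descent:
# the model's weights are nudged towards whatever reduces its error
# on each labelled example.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(200):
    for x, y in train:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def predict(x):
    """Classify a new, unseen 'scan' using what the model has learnt."""
    return 1 if sigmoid(w[0] * x[0] + w[1] * x[1] + b) >= 0.5 else 0

print(predict([2.1, 1.4]))   # resembles the "condition present" cluster -> 1
print(predict([0.1, -0.2]))  # resembles the healthy cluster -> 0
```

The key point mirrors the article: once trained on labelled data, the model makes predictions about inputs it has never seen.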
If a tool initially proves to be capable, it then undergoes extensive validation and trialling.
AI in action
We’ve developed and validated a tool that produces CT information from MRI scans. MRI scans visualise soft tissue better than CT scans. However, CT scans are geometrically accurate and provide other essential information. Our tool combines the power of both techniques to improve prostate cancer treatment.
Health professionals are already using this tool in the treatment of 65 men with prostate cancer. The tool generates comprehensive, highly accurate information about the position of tissues from a single scanning session. This means more precise treatment, less damage to nearby healthy tissue and reduced side effects.
We’re also working on software that generates reports from medical images. The tools can assist health professionals and researchers in extracting and using information from the images. One of the tools we’ve developed is CapAIBL.
Alzheimer’s disease biomarkers are often present in the brain years before clinical symptoms appear. We can now visualise these using PET scans. But quantifying the biomarkers from the images is difficult and time-consuming.
CapAIBL analyses PET scans of the brain and generates a comprehensive, easily interpretable report on the presence of Alzheimer’s disease biomarkers. The tool allows health professionals to provide a more confident diagnosis or risk prediction of Alzheimer’s earlier.
The next generation
Our researchers are continuing to find innovative applications for AI in medical imaging.
“At the moment we’re working on generating chest X-rays guided by written prompts. We’re trying to improve the technology to reach a point where it can perform accurately and reliably,” Aaron said.
Medical image generation is also solving a problem in AI: the need for data. Artificial images can help overcome medical data scarcity and reduce privacy concerns surrounding the use of patient data.
Dr Filip Rusak, an AEHRC research scientist, and his team are generating medical images for another purpose.
They used AI to produce a set of synthetic MRI scans of brains with changes in cortical thickness, a sign of neurodegeneration. They could set the amount and location of cortical atrophy.
The images provided a baseline against which they could test the sensitivity of different cortical-thickness quantification methods.
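The value of a synthetic ground truth can be illustrated with a toy simulation. The atrophy value, noise levels and "methods" below are invented for illustration; Filip's team works with full synthetic MRI volumes, not scalar measurements, but the logic is the same: because the change was set by construction, each method can be scored on how well it recovers it.

```python
import random
import statistics

random.seed(1)

TRUE_ATROPHY = 0.10  # mm of cortical thinning baked into the synthetic scans

# Two hypothetical quantification methods, modelled here simply as the
# known ground-truth change plus measurement noise of different sizes (mm).
def simulate_method(true_change, noise_sd, n_scans=50):
    return [true_change + random.gauss(0.0, noise_sd) for _ in range(n_scans)]

method_a = simulate_method(TRUE_ATROPHY, noise_sd=0.02)  # precise method
method_b = simulate_method(TRUE_ATROPHY, noise_sd=0.30)  # noisy method

# Because the atrophy was set by construction, we can score each method
# on how tightly its estimates cluster around the known value.
spread_a = statistics.stdev(method_a)
spread_b = statistics.stdev(method_b)
print(spread_a < spread_b)  # prints True: the precise method tracks the baseline
```

Without a synthetic baseline there is no known truth to compare against, which is exactly why controllable generated images are useful for benchmarking.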
“Cortical atrophy can start up to ten years before clinical symptoms of neurodegenerative disorders, such as Alzheimer’s, appear. Extremely sensitive methods are needed to observe these signs early,” Filip said.
“Our findings can help clinicians and scientists choose the right tool for the job and more accurately identify brain changes due to Alzheimer’s.”
Picturing the future
Aaron expects AI technology to continue improving, which will enable further applications in medical imaging and precision health.
We’re currently working to develop software that can process information from more inputs. These multimodal AI tools would allow other medical information about the patient to be considered when making decisions and predicting outcomes.
But don’t worry, the technology isn’t designed to replace humans. It’s intended to assist with complex and time-consuming processes. This allows health professionals to make confident decisions and provide informed care.
Computers won’t be replacing clinicians any time soon.