This blog is an excerpt from episode three of our Everyday AI podcast.
Nursing student Lauren Rebecca wears her smart watch every day. She has a tan line from it. She also receives notifications from it.
Lots of notifications.
“I had all the settings on mute just because of all the notifications with everything. It was one less thing that I needed to be notified of,” Lauren said.
Then something unusual happened: Lauren started experiencing bizarre symptoms like heat sensitivity. She checked out her watch and saw a backlog of heart rate notifications.
“I saw that my cardiac output and heart rate had been all over the place. There was a massive dip that was in the space of a couple of days, showing the trend had gone from being normal for the last three years to extremely low and abnormal,” she said.
Because of her watch, Lauren made an appointment with her GP. After some tests, it became clear that half of Lauren’s thyroid had disintegrated. The timing aligned exactly with when her watch started sending alerts.
From smart watches to chatbots, artificial intelligence (AI) use in healthcare has been steadily growing. And it’s making a huge difference.
One in seven Australian women will be diagnosed with breast cancer in their lifetime.
Dr Helen Fraser is a radiologist, breast cancer clinician and AI researcher with more than 20 years’ clinical experience in breast screening, imaging and cancer diagnosis.
“Every mammogram is read independently by two breast imaging subspecialty-trained radiologists. And if they differ, a third radiologist takes an arbitration read. So, it’s a really time-consuming, and quite costly, process,” Helen said.
The majority of mammograms – about 95 per cent – are normal. That means most specialists’ time is spent looking at perfectly healthy scans.
Helen is working with AI to change that.
“The promise of AI and breast cancer screening is, in fact, rather than being a radiologist that looks at 95 per cent normal scans, we can use our human creativity and skillsets to spend more time with the women in those situations with complex cases. Or in biopsy procedures, for instance, where we really can add a benefit,” Helen said.
Helen’s team have trialled replacing the second radiologist with an AI algorithm trained on historical data. Not only did their trial improve accuracy by 20 per cent, but Helen is also confident it will improve patient experience.
“Coming into assessment is a really anxiety-provoking and costly process. Many women really feel they have cancer, even though the majority don’t,” Helen said.
“Our service delivery for time to results of a mammogram is two weeks. Two weeks is a long time to wait for a normal result. And I think an algorithmic reader could help us reduce that down to a matter of days.”
AI bias
AI use in many industries has come under criticism for perpetuating human biases. And the health industry can run the same risks. Helen’s team is very cognisant of the potential for bias encoded in their datasets.
The medical imaging dataset used in Helen’s work includes detailed demographic data, enabling the team to test for bias in many ways – for example, to see whether the algorithm performs equally well for women in minority groups.
“We haven’t solved the problem, but I think we’ve started that journey of being cognisant of it,” Helen said.
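To make the idea of a subgroup check concrete, here is a minimal sketch of comparing a model’s true-positive rate (sensitivity) across demographic groups. The function name and the toy data are invented for illustration; this is not Helen’s team’s actual pipeline.

```python
from collections import defaultdict

def sensitivity_by_group(records):
    """Compute per-group sensitivity (true-positive rate).

    records: list of (group, true_label, predicted_label) tuples,
    where 1 = cancer and 0 = normal.
    """
    true_positives = defaultdict(int)
    positives = defaultdict(int)
    for group, truth, prediction in records:
        if truth == 1:  # only actual cancer cases count toward sensitivity
            positives[group] += 1
            if prediction == 1:
                true_positives[group] += 1
    return {g: true_positives[g] / positives[g] for g in positives}

# Toy example: the model misses more cancers in group "B"
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 0), ("B", 1, 0),
]
print(sensitivity_by_group(records))  # group A: 1.0, group B: ~0.33
```

A gap like the one in this toy output – the model catching every cancer in one group but only a third in another – is exactly the kind of disparity this sort of audit is designed to surface before deployment.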
If implemented responsibly, it’s exciting to imagine just how much this technology will advance. It can help take the load off health professionals and give us more power to monitor our own health.
As we develop legislation, public education and regulation around AI, it will continue to become safer and more ethical. But ultimately, AI will always be better when used as a tool in collaboration with doctors and professionals.