Addressing AI bias with an algorithmic 'nutrition label'

Artificial intelligence holds enormous potential for innovation and medical advancement. 

At the same time, experts have warned that it isn’t magic: wielded clumsily, it could actually worsen existing care disparities, potentially to a dangerous degree.  

In a HIMSS21 Global Conference Digital Session on Monday, Mayo Clinic Platform President Dr. John Halamka proposed a solution: being transparent about an algorithm’s development and fitness for purpose. 

Halamka spoke with HIMSS Executive Vice President of Media Georgia Galanoudis as part of the afternoon-long session “The Year That Shook the World.” They discussed how AI and machine learning are driving progress in many sectors and whether it’s possible to safeguard AI’s role in the patient’s medical journey while addressing any bias.  

“Your optimism for AI is justified, but there are caveats,” said Halamka. “We need, as a society, to define transparency of communication: to define how we evaluate an algorithm’s fitness for purpose.”  

Halamka compared algorithmic transparency to the readily available information on food packaging. “Shouldn’t we, as a society, demand a nutrition label on our algorithms?” he said.  
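To make the metaphor concrete, a "nutrition label" for an algorithm could take the form of structured metadata shipped alongside the model. The sketch below is purely illustrative — every field name and value is a hypothetical example, since no such standard exists yet:

```python
import json

# Hypothetical "nutrition label" for an algorithm: structured facts a
# clinician or regulator could read before trusting a model's output.
# All fields and values are invented illustrations, not a real standard.
label = {
    "model": "example-risk-screen",
    "intended_use": "clinical decision support, not autonomous diagnosis",
    "training_data": {
        "source": "single health system EHR (hypothetical)",
        "collection_period": "2015-2020",
    },
    "evaluation": {
        "design": "prospective trial, stratified by race, ethnicity, age and gender",
        "subgroup_metrics_reported": True,
    },
    "known_limitations": [
        "performance not validated outside the source health system",
    ],
}

print(json.dumps(label, indent=2))
```

Publishing something like this with every deployed model is one way the transparency Halamka calls for could be operationalized.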

So who should be in charge of such a label? To evaluate bias and efficacy, Halamka proposed a public-private collaboration of government, academia and industry.

“I think it’s going to happen very soon,” he predicted.  

Halamka said that we’re in what he called a “perfect storm” for innovation when it comes to addressing bias and fairness in AI – and that such a consortium would ideally be tasked with building the technology this kind of transparency requires.  

Transparency will also be key, Halamka said, to maintaining AI’s momentum while working toward algorithmic equity. He gave the example of a Mayo Clinic algorithm that helps identify low ejection fraction.

“We then did a prospective, randomized, controlled trial … and stratified it by race, ethnicity, age and gender to look at how this algorithm actually performed in the real world,” he explained. They then published the results.   
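The stratified analysis Halamka describes amounts to reporting a performance metric separately for each demographic subgroup rather than one aggregate number. A minimal sketch of that idea, with hypothetical data and field names (the metric here is sensitivity, i.e. the true-positive rate):

```python
from collections import defaultdict

def stratified_sensitivity(records):
    """Sensitivity (true-positive rate) computed per subgroup.

    Each record is (subgroup, true_label, predicted_label), where
    1 = condition present (e.g. low ejection fraction) and 0 = absent.
    """
    tp = defaultdict(int)  # true positives per subgroup
    fn = defaultdict(int)  # false negatives per subgroup
    for group, truth, pred in records:
        if truth == 1:
            if pred == 1:
                tp[group] += 1
            else:
                fn[group] += 1
    # Sensitivity = TP / (TP + FN), reported for every subgroup seen
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

# Hypothetical model predictions, stratified here by age band
records = [
    ("under_65", 1, 1), ("under_65", 1, 0),
    ("65_plus", 1, 1), ("65_plus", 1, 1),
]
print(stratified_sensitivity(records))
```

A gap between subgroups in a table like this is exactly the kind of finding a stratified trial is designed to surface before the algorithm reaches the bedside.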

Looking ahead, Halamka predicted that clinicians will be able to leverage knowledge from wide swathes of patients of the past “to care for the patients of the future.”

AI augmentation of human decision-making can help clinicians overcome the bias shaped by their own individual experiences, he said.  

He outlined what Mayo calls the “four grand challenges”: gathering novel data (and working to standardize it), enabling discovery, validating algorithms and delivering the end result into the clinical workflow.  

“Let us hope government, academia and industry work on those four challenges, and we’ll all be in a better place,” he said. 


Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.