AI in healthcare: Big ethical questions still need answers

ORLANDO – Seemingly overnight, artificial intelligence has found its way into every corner of healthcare, from patient-facing chatbots to imaging interpretation to advanced analytics applications.

With that sea change comes a host of ethical questions about how, where and to what extent AI and machine learning apps should be deployed. Most of them are still unanswered.

At HIMSS19 on Tuesday, a panel of healthcare and technology experts assessed this new landscape, taking stock of the big opportunities that AI can enable – while also exploring some of the “bright lines that we don’t want to cross,” as Microsoft Associate General Counsel Hemant Pathak put it.

AI has already done wonders for healthcare. “Even if we never move beyond the current state of the art, we have a decade of application and value to extract” from existing AI-derived datasets, said Microsoft Corporate Vice President Peter Lee.

Still, it’s remarkable how fast that state of the art has matured – and continues to – establishing itself as the new normal in healthcare and beyond, he said.

“We are really still evolving very quickly in terms of core technology,” said Lee.

For example, just a decade ago, Lee was working as a researcher and computer scientist at DARPA, and in 2009 there was a Department of Defense policy against investing in facial recognition technology, he said. The DoD was wary about misuses and unintended consequences.

Now, millions of consumers are completely comfortable using their iPhone X with those same capabilities, or uploading their own faces to Google’s Arts & Culture app.

“The evolution of thinking from that point to now – the technology is just assumed,” said Lee.

Still, he said, it’s time for a “more nuanced and thoughtful conversation” about AI technologies such as those.

At Microsoft, for instance, Pathak said there’s an in-house institutional review board that weighs its approach to the development of facial recognition technology. And this past December, Microsoft published a long blog post in which it said it was “time for action” on that particular strain of AI, and called for “governments in 2019 to start adopting laws to regulate this technology.”

In healthcare, there are myriad other AI applications, and for all their huge potential, they still need to be closely monitored, said Susannah Rose, associate chief experience officer at Cleveland Clinic.

“It’s not just how AI is diffused in the [healthcare] system; it’s the structure of how we’ll be testing it,” she said. With machine learning applications, it’s critical that “we not abandon the notions of rigorous testing that we have in healthcare today,” she added. “I don’t think AI can be any exception to that sort of rigorous involvement.”

As the technology continues to evolve almost daily, there are already immense benefits to the consumer (“chatbots are becoming more socially aware,” Lee pointed out) and to providers and patients in care settings.

And there’s a real opportunity for AI to continue to improve the healthcare experience, said Rose, to “keep what needs to be human, human – and then come back in and automate those things that don’t need the human touch.”

Still, there are perils. “Even small defects in the training samples can cause unpredictable failures,” said Lee. “Understanding blind spots and bias in the models” is a must-have for safe integration of AI into clinical workflows.

Big picture, there are many questions still to be ironed out as AI works itself into every aspect of our lives, said Sylvia Trujillo, senior Washington counsel at the American Medical Association. These include questions of security and privacy, of course, and the potential tension between patient rights and public health when dealing with certain datasets.

AMA, for example, has already established a set of “fundamental principles” about its position on AI in healthcare, she pointed out – adding that there will be further policies coming in June. 

“We have to have a discussion,” said Trujillo, “around making the data available and setting up a structure around consent.”

As more and more people willingly submit their own genetic and genomic information to direct-to-consumer companies, after all, few might be aware of the potential for discrimination – whether for long-term care or life insurance – on the basis of that data, she said.

But it’s also true that, “given the future trends in healthcare and demographics, we cannot advance healthcare without AI innovation,” Trujillo added. “So this is a conversation we need to have.”

Twitter: @MikeMiliardHITN
Email the writer: [email protected]
