How to overcome AI’s healthcare headwinds

April 4, 2021

Source: MedCityNews

Author: Punit Soni

The bloom may be off the rose when it comes to the use of AI in healthcare, as multiple missteps have cast doubt on whether the technology can deliver real change for the industry. IBM’s recent decision to spin off Watson Health demonstrates just how difficult it can be to apply AI to some of healthcare’s trickiest challenges–and how easy it is to create cynicism among would-be users. AI efforts have also been tainted by biased datasets, creating tools that perpetuate inequality. Even some useful AI tools, like hospice software that helps caregivers maximize face time with their sickest patients, have caused frustration and confusion when recommendations seem to come from a black box of technological machinations with little context.

Do these black marks sound the death knell for AI in healthcare? They shouldn’t. AI has already shown tremendous promise for everything from automating patient communication and non-clinical administrative tasks to reducing physician burnout. But realizing that promise more broadly requires AI developers to establish trust with their healthcare users.

How do we do this? There are three key steps technologists need to take.

First and foremost, prioritize transparency. Health providers will be far less hesitant to adopt AI if they are confident in the results, so ensuring that the outputs are explainable—especially in the context of clinical decision-making—is vital for both reducing the clinician’s frustration and providing confidence to patients. When providers can understand why an AI tool is making a certain recommendation—even if they don’t understand the underlying algorithm—that tool can then be used as part of a holistic decision-making process; a recommendation without a rationale is just a headache.
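
To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical factor names, weights, and threshold, not drawn from any real product) of what a recommendation that carries its own rationale might look like—surfacing the top contributing factors alongside the score rather than the score alone:

```python
# Illustrative sketch only: a hypothetical "readmission risk" recommendation
# that returns its rationale, so the clinician sees why it was flagged.
# Factor names, weights, and the 0.5 threshold are made-up assumptions.

RISK_WEIGHTS = {
    "prior_admissions_12mo": 0.35,
    "missed_followups": 0.25,
    "polypharmacy_count": 0.20,
    "age_over_75": 0.20,
}

def recommend_with_rationale(patient_factors: dict) -> dict:
    """Return a risk flag plus the top contributing factors, not just a score."""
    contributions = {
        name: RISK_WEIGHTS[name] * patient_factors.get(name, 0.0)
        for name in RISK_WEIGHTS
    }
    score = sum(contributions.values())
    top_factors = sorted(contributions, key=contributions.get, reverse=True)[:2]
    return {
        "flag_for_outreach": score >= 0.5,  # hypothetical threshold
        "score": round(score, 2),
        "rationale": f"Driven mainly by: {', '.join(top_factors)}",
    }

print(recommend_with_rationale(
    {"prior_admissions_12mo": 1.0, "missed_followups": 1.0}
))
```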

Transparency also entails a clear explanation of the data sources used in training the system to address questions about bias and the treatment of private patient information. Having these conversations upfront ensures a better fit between AI solutions and health systems.
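As a rough illustration of what that upfront disclosure could look like in practice, the sketch below defines a hypothetical “datasheet” record a vendor might attach to a model; the field names and values are assumptions for illustration, not a standard schema:

```python
# Illustrative sketch only: a minimal training-data "datasheet" a vendor might
# ship with a model so provenance and bias questions can be answered upfront.
# All field names and example values are hypothetical.

from dataclasses import dataclass, field

@dataclass
class TrainingDataSheet:
    source_systems: list[str]
    collection_period: str
    deidentification_method: str
    known_gaps: list[str] = field(default_factory=list)

sheet = TrainingDataSheet(
    source_systems=["EHR encounter notes", "claims data"],
    collection_period="2015-2019",
    deidentification_method="HIPAA Safe Harbor",
    known_gaps=["under-representation of rural clinics"],
)
print(sheet)
```
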

Second, iterate your way forward—and bring the customer along with you. As we’ve seen with autonomous vehicles and voice technology, AI isn’t yet ready to fully function without the guidance of humans. After all, today a Tesla still wants the driver to put their hands on the wheel every so often when “self-driving,” and natural language processing tech hasn’t reached the Holy Grail of parsing out the relevant parts of a conversation through ambient listening. Meanwhile, providers are not ready to place their full faith in machine learning to handle every task; even if it could, most are not ready to take their hands off the wheel. Often, starting small and adding functionality to an AI solution is the best way to gain trust. By making smaller promises and being forthright about the state of the technology, AI tools can gain traction in the market.

Finally, learn from the customer. Iterative improvement allows AI to realize its loftier ambitions, but this requires deliberate collaboration with and feedback from healthcare providers. Clinicians need assistive technology that complements their workflow, and AI tools will be unable to meet the needs of the healthcare field if we don’t constantly seek out and implement the feedback of real-world users. As users see their feedback reflected in future iterations of the product, their trust grows.
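
One lightweight way to operationalize that feedback loop, sketched below with hypothetical names, is to log each piece of clinician feedback against the specific prediction and model version it refers to, so the next iteration can act on it:

```python
# Illustrative sketch only: structured clinician feedback tied to a prediction
# and model version, so it can inform the next iteration. Names are hypothetical.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ClinicianFeedback:
    prediction_id: str
    model_version: str
    was_helpful: bool
    comment: str
    recorded_at: datetime

feedback_log: list[ClinicianFeedback] = []

def record_feedback(prediction_id: str, model_version: str,
                    was_helpful: bool, comment: str) -> None:
    """Append feedback that the next training or review cycle can draw on."""
    feedback_log.append(ClinicianFeedback(
        prediction_id, model_version, was_helpful, comment,
        datetime.now(timezone.utc),
    ))

record_feedback("pred-123", "v0.4.1", False,
                "Recommendation ignored the patient's recent discharge.")
```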

When these principles are combined with a thoughtful, outcome-oriented health system that understands the mechanics of AI and what it takes to scale this technology, developers can tack into the wind and keep healthcare technology moving forward.
