*This article was originally published in KevinMD.*

Artificial intelligence in health care: What your patients want to know
Most patients feel it is important to be informed when artificial intelligence (AI) will play a role in their care. Although most anticipate that AI will make health care at least somewhat better over the next five years, many express concerns about misdiagnosis, privacy breaches, unintended consequences, and increased costs.
These mixed feelings highlight the importance of addressing patient concerns and providing clear, accessible information. To help patients better understand this technology, here are straightforward answers to common questions about the role and impact of AI in their health care journey.
What does AI mean in a health care setting?
AI is the use of computer systems to perform tasks that normally require human intelligence, such as decision-making, visual perception, speech recognition, and translation. The field of AI-enabled health care products is broad, spanning decision support models, surgical robotics, image analysis, detection of abnormal heart rhythms, and clinical note-taking (scribes).
Why do providers use AI?
It’s important to explain to your patients how the AI model’s insights compare to human performance alone. Does using the AI model help make their care more efficient? Get them the right diagnosis sooner? Reduce the cost of care?
Could I be misdiagnosed?
AI-enabled products can introduce bias. An AI model may perform poorly for patients whose characteristics differ from those of the people whose data were used to build it, and the same disease may present differently across diverse patient populations. For example, a heart attack risk model developed from symptoms experienced by men would provide less accurate risk predictions for women. It is important for health care providers to be aware of this risk, to understand the types of people whose data were used to create the model, and to confirm that the model’s recommendations are right for your care. Your health care provider should clearly explain the roles of both human caregivers and AI in your care.
How safe are AI products in health care?
The privacy of patients must be protected. Identifiable details about individual patients should be removed before their health data are included in a dataset used to train AI models. The FDA puts patient safety first: AI products regulated as medical devices must undergo a thorough evaluation before being cleared or approved for clinical use. Transparency is also essential: patients should be informed when AI is being used in their care. Furthermore, health care providers should share any potential unintended consequences that might result from using an AI tool.
Patients also have the right to see their data and to know where and how it is being used, whether for training AI models or for their own clinical care.
As AI takes on a larger role in health care, providers play a critical part in building patient trust. By explaining AI’s role and addressing concerns, you can help patients feel informed and confident. Taking the time to educate your patients on AI’s benefits, its safety measures, and how it complements your expertise to deliver timely, high-quality care can go a long way.
Cory Hayes and Sailee Bhambere are health care executives. Jeanna Blitz is an anesthesiologist and a physician executive.