Artificial intelligence & the future of healthcare

As artificial intelligence (AI) applications become integrated into a wide range of clinical tools, AI is reshaping the healthcare landscape. Along with the promise of improved care, it also creates incentives for companies to probe deeply into people’s private lives, often without consent, and to share that information for profit rather than for care.

As AI infiltrates the healthcare industry, how can we ensure it does no harm?

AI’s potential & overhype: An insider’s view

Jennifer Golbeck, Ph.D.

Jennifer Golbeck, Ph.D., is director of the Social Intelligence Lab and a professor in the College of Information at the University of Maryland in College Park, Maryland. With more than 20 years of AI experience, Golbeck not only studies the most cutting-edge AI developments up close, but she and her team are also creating the types of algorithms making these advancements possible. She gives talks across industry sectors to offer insight on where to look for opportunities as well as potential areas of concern. Golbeck was a featured speaker during the 2024 HealthTrust University Conference in Orlando.

“As a computer scientist, I am not surprised by anything about this technology,” says Golbeck. After all, she and her colleagues have been developing AI for decades. “From the outside, though, the public introduction of ChatGPT in late 2022 was a transformative moment because it is a powerful tool that was put into people’s hands, and it’s easy to use.”

What is AI, anyway?

AI is a computer making a best guess at solving a problem based on patterns it gleans from examples. “When we try to make a decision as humans, we can’t consider every possibility because there are too many, or they are unknown. AI is kind of a spicy auto-complete,” explains Golbeck. Large language models, the type of AI behind tools like ChatGPT, are trained on millions of documents from the internet and respond to queries by generating text that mimics human-created information.
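To make the “spicy auto-complete” idea concrete, here is a toy sketch in Python. It is not how ChatGPT is actually built; production large language models use neural networks trained on vastly more text. But the core move is the same: learn which words tend to follow which from examples, then generate by repeatedly guessing a plausible next word.

```python
import random
from collections import defaultdict

# Toy "spicy auto-complete": learn which word tends to follow which
# from example text, then generate new text by repeatedly sampling a
# likely next word. Real LLMs use neural networks trained on millions
# of documents, but the underlying idea is the same.
corpus = (
    "the patient reported mild pain . "
    "the patient reported no pain . "
    "the doctor reported mild swelling ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate: start from a word and keep picking a plausible next word.
word = "the"
output = [word]
while word != "." and len(output) < 12:
    word = random.choice(following[word])
    output.append(word)

print(" ".join(output))  # e.g. "the patient reported mild swelling ."
```

Because the model only echoes patterns in its examples, it can produce fluent sentences it never saw, and equally fluent nonsense, which is why its output always needs a human check.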

AI is transformative, but it’s not perfect. For example, anyone who has tried out the software knows that ChatGPT can produce an article, but it needs human effort to make it palatable. “Generative AI is a tool that we can use to make us more efficient,” says Golbeck. She recently wrote a letter to her state representatives and sped up the process by using ChatGPT for brainstorming and rough drafting. “It can give you a first, mediocre take on something and lift some of the burden, but for the analytical, factual parts—the real intellectual work—we still need humans to refine it.”

Natural language processing AI tools like ChatGPT work best for mechanical tasks such as writing computer code or reformatting medical journal references, work that doesn’t require nuance, interpretation or creativity. “ChatGPT is fast and will do a pretty good job for you, but you have to check everything because it makes mistakes all the time,” Golbeck explains.
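As a sketch of that “fast draft, human check” workflow, the snippet below uses the OpenAI Python SDK to reformat a messy citation. The model name and prompt are illustrative assumptions, not a recommendation, and the output still gets the human verification Golbeck insists on.

```python
# Illustrative sketch of the "fast draft, human check" workflow,
# using the OpenAI Python SDK. The model name and prompt are
# assumptions for illustration; any chat-completion API would work.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messy_reference = "golbeck j, analyzing the social web, morgan kaufmann 2013"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever you use
    messages=[
        {"role": "system",
         "content": "Reformat this reference into AMA citation style."},
        {"role": "user", "content": messy_reference},
    ],
)

draft = response.choices[0].message.content
print(draft)
# Per Golbeck's caution, a human still checks every detail: models
# routinely invent page numbers, years and publisher names.
```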

This is unlikely to change. She says that generative AI will never be 100% accurate because it is trained on human data, and humans can be unpredictable. “Having something that’s right 70% of the time is incredible, but that doesn’t mean we’ll get to the point where we will just let it go and it will replace humans.”

AI for diagnostic accuracy & detecting disease

AI is transforming healthcare in many ways, including how medical professionals diagnose, treat and manage diseases.

“We’ve seen impressive advances on diagnoses from AI through medical imaging,” says Golbeck. That doesn’t mean we won’t need radiologists in the future, but rather that AI can take some of the burden off them. “We know in certain kinds of imaging, AI can correctly identify a normal scan. But, it doesn’t know what to do with the abnormal scans. So, if you’re told by AI that the scan is normal, and it’s correct 100% of the time, then you might spend a little less time on that.” Just as we’ve seen generative AI organically show up in social media and word processing tools, it’s likely to be built into the imaging software medical professionals are already using.
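One way to picture that division of labor is as a simple triage rule. The sketch below is hypothetical, with an illustrative confidence threshold: the model’s only job is to fast-track confidently normal scans, and everything else still gets a full radiologist read.

```python
# Hypothetical triage logic for the workflow Golbeck describes:
# AI is trusted only to fast-track confidently normal scans; every
# abnormal or uncertain scan still gets a full radiologist read.
# The threshold and labels are illustrative assumptions.
NORMAL_CONFIDENCE_THRESHOLD = 0.99

def route_scan(model_label: str, confidence: float) -> str:
    """Decide the review queue for a scan given the model's output."""
    if model_label == "normal" and confidence >= NORMAL_CONFIDENCE_THRESHOLD:
        return "expedited human review"   # lighter-touch, not skipped
    return "full radiologist read"        # AI defers on everything else

print(route_scan("normal", 0.995))   # expedited human review
print(route_scan("abnormal", 0.97))  # full radiologist read
print(route_scan("normal", 0.80))    # full radiologist read
```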

In January 2024, the Food and Drug Administration (FDA) approved the first AI-powered medical device for detecting skin cancer. In April 2024, the first AI-powered early sepsis detection tool was approved; it can calculate the risk of a patient developing sepsis within 24 hours. “There are people working on AI that can detect and predict heart attacks and strokes,” explains Golbeck. She notes that this is of particular interest in the wearable tech space, where even now, an Apple Watch can perform an EKG.

The risky side of AI

A news story recently circulated about a woman whose son had a condition his doctors had been struggling to diagnose. She entered his medical records into an AI chatbot and got a diagnosis that turned out to be correct.

“That’s a great outcome, but we really don’t want people doing that,” says Golbeck. Those who are amazed by this technology may wonder if they can simply enter their symptoms into ChatGPT to learn what’s going on with their health. Meanwhile, some online pharmacies are asking whether, for low-risk prescriptions, they can simply rely on AI and remove physicians from the process.

“It’s the Wild West right now in healthcare,” explains Golbeck. “The idea of reducing the human [presence in favor of] AI is one that I hope we get past and instead move more toward asking how it can help us be more efficient.”

Privacy in AI is also a real concern, because these tools hold onto and learn from the information we input. “If physicians and other medical professionals use ChatGPT to write notes to patients, they need to be careful to not include patient data because it is likely a HIPAA violation,” says Golbeck. She notes that private, HIPAA-compliant enterprise versions of generative AI are available.
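As a minimal sketch of that precaution, the snippet below strips a few obvious identifiers from a note before it could be pasted into a consumer AI tool. The patterns are illustrative assumptions only; real HIPAA de-identification covers many more identifier types and requires vetted tooling, not a handful of regular expressions.

```python
import re

# Minimal sketch of scrubbing obvious identifiers from a note before
# it touches a consumer AI tool. Illustrative only: real HIPAA
# de-identification covers many more identifier types (names, dates,
# geographic detail, record numbers, etc.) and needs vetted tooling.
PATTERNS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",
    r"\b\d{3}[-.]\d{3}[-.]\d{4}\b": "[PHONE]",
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",
    r"\b(?:MRN|Record)\s*#?\s*\d+\b": "[RECORD_ID]",
}

def scrub(text: str) -> str:
    """Replace obviously identifying patterns with placeholders."""
    for pattern, placeholder in PATTERNS.items():
        text = re.sub(pattern, placeholder, text, flags=re.IGNORECASE)
    return text

note = "Follow up with patient, MRN 48213, at jane.doe@example.com."
print(scrub(note))
# Follow up with patient, [RECORD_ID], at [EMAIL].
```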

Case in point: “I have a rescued golden retriever named Chief Brody who is on Prozac to help with his anxiety,” shares Golbeck. After picking up his prescription recently, she received a text from her retail pharmacy that was a marketing message about Mental Health Awareness Month. “It was thrilling and terrifying at the same time. I had no idea that my sensitive prescription data was crossing over into the marketing space, and my trust dropped instantly as soon as that text message came in.”

What healthcare leaders should know

Golbeck suggests that healthcare professionals keep the following in mind while trying to navigate the AI waters:

AI will never truly replace humans. “That’s not what we build it for, and it’s not the goal,” Golbeck emphasizes. AI was built as a decision-support tool for humans, not as an autonomous system that runs itself. “Think more about how AI can help you gain efficiency.”

AI is inherently biased. Since every AI system is built by and learns from humans, there is bias that ends up in the system—racial, cultural, gender and more. “When you have biased text, you get biased answers,” says Golbeck.

The same is true of generative AI for images. Bias is a very difficult problem, and the healthcare industry must be especially mindful because it risks making the care delivered less equitable.

AI’s capabilities are leveling off. “If you’re an executive making decisions about AI, don’t let yourself be dazzled by the hype,” says Golbeck. “Look for the evidence and look to the skeptics. Yes, AI is going to change work and make us more efficient. But there is no evidence it will make us smarter.”


Read more insights from Jennifer Golbeck, Ph.D., in her books: Analyzing the Social Web; Online Harassment; Introduction to Social Media Investigation: A Hands-on Approach; and Computing With Social Trust.
