Pick one: non-human AI – or a real person?

by Carolyn Thomas     ♥    Heart Sisters (on Blue Sky)

In 2008, I joined INSPIRE’s WomenHeart Online Support Community for Women with Heart Disease.  In those early days as a new heart patient, I was fascinated by compelling narratives from other patients with lived experience. Our group was both a safe place to vent and an opportunity to lend comfort to others during scary setbacks. It wasn’t long before I made browsing group discussions a regular habit.  And one day I asked our group members  this basic question: “Has anybody here been misdiagnosed in mid-heart attack?”   

Within minutes of posting that question to my new support community, I received countless responses. I was gobsmacked to learn how many other women shared my own frightening experience of having serious cardiac symptoms dismissed. I did not yet know back then that cardiac researchers were already publishing evidence of a pervasive (and sadly, ongoing) gender gap in research, diagnostics, treatments and outcomes between male and female heart patients.(1)

What I did know then, however, was that the female members in my WomenHeart support group were real heart patients. Keep in mind that I’d discovered the wonderful world of online patient support groups over 17 years ago (during the good old days when we knew that the persons responding to our plea for understanding were actual living-breathing-human-beings like the rest of us). This was long before the Silicon Valley tech-bro-explosion of AI (Artificial Intelligence) had made its creepy way into patient support groups.

One support group member named Lisie raised her own concerns with one of the INSPIRE support community site administrators about the use of AI systems to respond to what patients share.

Here’s what Lisie wrote to express those concerns: 

“Dear INSPIRE Admins,

“AI responses are not useful for a site where community members are seeking engagement, understanding, and personal experience and recommendations with other community members –  in other words, REAL PEOPLE. 

“I personally don’t want to read AI responses when the point is to talk to and get support from others who have or had dealt with similar situations and worries.”

I’m with you 100%, Lisie. 

I’ve wondered, for example, why the Canadian Breast Cancer Network (CBCN, a non-profit organization) seems so gung-ho about its patient members communicating with AI instead of real people – justifying this “because AI can help you understand terms like ‘HER2-positive’ or ‘neoadjuvant therapy’.”

As it happens, my breast cancer diagnosis this past spring was also a “HER2-positive” tumor, which I understood in mere seconds just by asking Dr. Google. Same with the term “neoadjuvant therapy”: Dr. Google told me that this term means the treatment I was given before my mastectomy. Both are simple definitions – no machine consciousness required.

A computer screen, an internet connection, and a Mayo Clinic link can go a long and easy-peasy way – and without any of the niggly accuracy issues of current AI. For example, as the New York Times reported, AI models can sometimes give “dangerously incorrect advice, like suggesting fluid intake for a patient already experiencing fluid buildup.”

Whaaaaat?! 

Responses from other female heart patients on the WomenHeart site were unfailingly genuine and thoughtful. Even though we’d never met, these women were walking the talk. I learned to recognize certain names, and I looked forward to messages from my “favourites” arriving in my online inbox. There was something reassuring about realizing “We are NOT alone!”

Even AI fans like the CBCN cite current research findings published in Nature showing that general-purpose AI models achieve only 52% accuracy. That’s approximately as accurate as flipping a coin.

“Because AI models can act like they are fluent in language, they can erroneously be confused with humans. Consciousness is closely linked with language. Language enables humans to articulate and report their feelings and sensations.” (Kuhn, 2024)

But AI technology does neither. 

And when it comes to the human-to-human connections that Lisie (and I) are so fond of, I neither want nor expect empathy from a machine, because machines DO NOT HAVE EMPATHY.

Yet that’s precisely the kind of annoying hype I heard non-stop while attending the 2015 Medicine X conference at Stanford University – where every computer science undergrad seemed to own both a Tesla and at least one start-up company that was going to “change health care as we know it!” The trouble was that so many of those students bragging about the brilliant health care disruption they were planning left the unfortunate impression during our encounters that they’d much rather tell patients what we want or need than ask actual patients in the first place.

Artificial Intelligence systems learn by recognizing patterns of words in large datasets drawn from many sources – including your own social media posts, comments, likes and other online activity, all of which serve as training signals for AI.

I do believe there are people in healthcare who will benefit from Artificial Intelligence – and those people are physicians, nurses and other healthcare professionals, who can likely save significant charting time after a busy workday by using AI to summarize the key points discussed during our appointments. I like the idea of saving a busy doctor’s valuable time. Both my oncologist and my cardiologist use AI during our regular appointments – but only to create a brief summary of what our conversation covered. And I really appreciate that each of those docs always asks me first before turning the AI on. When I get home and read the AI-generated notes printed out for me, the basic content is passable at best compared to what I’ve just heard live and in person in their offices – and rarely as tailored to my actual questions or concerns.

Companies and organizations that keep trying to convince us that shiny new technology is always the answer to every issue in health care are missing the point. It is not.


  1. Al Hamid et al. “Gender Bias in Diagnosis, Prevention, and Treatment of Cardiovascular Diseases: A Systematic Review.” Cureus. 2024 Feb 15. 

Q:  What’s your take on the use of Artificial Intelligence in patient support groups? 

NOTE FROM CAROLYN:   I wrote much more about becoming a patient – no matter the diagnosis – in my book, A Woman’s Guide to Living with Heart Disease. You can ask for it at your local library or favourite bookshop, or order it online (paperback, hardcover or e-book) at Amazon –  or order it directly from my publisher, Johns Hopkins University Press (use their code HTWN to save 30% off the list price).

Your opinion matters. What do you think?