Pick one: non-human AI – or a real person?

by Carolyn Thomas     ♥    Heart Sisters (on Blue Sky)

In 2008, I joined INSPIRE’s WomenHeart Online Support Community for Women with Heart Disease. In those early days as a new survivor of what doctors call the “widow-maker” heart attack, I was drawn to compelling narratives from other patients with lived experience similar to mine. Our group was both a safe place to vent when feeling bad, and an opportunity to lend comfort to others during their scary setbacks. It wasn’t long before I made browsing group discussions a daily early-morning habit. And one day I asked our group members this basic question: “Has anybody here been misdiagnosed in mid-heart attack?”

Within minutes of posting that question to my new support community, I received countless responses. I was gobsmacked to learn how many other women shared my own frightening experience of having serious cardiac symptoms dismissed. I did not yet know back then that cardiac researchers were already publishing evidence of a shocking gender gap in research, diagnostics, treatments and outcomes between male and female heart patients.(1) Every WomenHeart group member who responded to my textbook cardiac symptoms (central chest pain, nausea, sweating and pain down my left arm) urged me to keep going back for medical care, no matter how many times the doctors might try to send me home.

What I did know then, however, was that the female group members in my WomenHeart support group were real people. Keep in mind that I’d discovered the wonderful world of online patient support groups over 17 years ago – back in the golden days when we all knew that the support persons responding to our pleas for understanding or empathy were actual living-breathing human beings like the rest of us. This was long before the Silicon Valley tech-bro explosion of AI (Artificial Intelligence) had made its creepy way into patient support groups.

One support group member named Lisie wrote to the INSPIRE site administrators to express her valid concerns about the increasing use of AI responses in patient support groups:

“Dear Inspire Admins,

“AI responses are not useful for a site where community members are seeking engagement, understanding, and personal experience and recommendations with other community members –  in other words, REAL PEOPLE. 

“I personally don’t want to read AI responses when the point is to talk to and get support from others who have or had dealt with similar situations and worries.”

I’m with you 100%, Lisie. For example, I wonder why the Canadian Breast Cancer Network (CBCN), a non-profit organization, seems so gung-ho on their patient members communicating with AI bots instead of real people – justified by adding that AI can help you understand terms like “HER2-positive” or “neoadjuvant therapy”.

Now, as it happens, my breast cancer diagnosis this past spring also involved a “HER2-positive” tumor, which I understood in mere seconds with one click to Dr. Google. Same with the term “neoadjuvant therapy”: Dr. Google told me that this is the treatment given before my November mastectomy. These are both simple examples of definitions, not machine consciousness.

A computer screen, an internet connection, and a Mayo Clinic link can get me there pretty fast – and without any of the niggling accuracy issues of current AI. 

Those responses from other female heart patients were unfailingly genuine and empathetic. Even though we’d never met, I quickly learned to recognize certain names, and I looked forward to messages from my “favourites” arriving in my online inbox. There was something powerful about hearing directly from heart patients who shared my diagnosis: “You are NOT alone!”

You may be thinking that it could be dangerous to have non-professional patients offering support and information to other patients. It turns out that the information found in online patient support groups is actually surprisingly reliable. For example, the British Medical Journal reported that false or misleading statements posted in online patient groups are almost always rapidly corrected by other group participants in subsequent postings. In fact, the BMJ found that only 10 of 4,600 online patient group postings studied (that’s just 0.22%) were false or misleading. What these online patient support communities do have (or should I add “used to have back in 2008”?) is the magic ingredient: human patients with lived experience.

Consider that even AI fans like the CBCN cite current research published in Nature showing that general-purpose AI models achieve only 52% accuracy. That’s about as accurate as flipping a coin. And as the New York Times reported, AI models can sometimes give “dangerously incorrect advice, like suggesting fluid intake for a patient already experiencing fluid buildup.”

“Because AI models can act like they are fluent in language, they can erroneously be associated with humans. Consciousness is closely linked with language. Language enables humans to articulate and report their feelings and sensations.” (Kuhn, 2024)

But AI technology does neither. 

So, thank you, no. . .  

And unlike the human-to-human connections that Lisie and I are so fond of, I neither want nor expect empathy from a machine, because machines DO NOT HAVE EMPATHY. Corporations that keep trying to convince us that somehow they do are missing the entire point, while insisting that shiny new tech is always the answer to every issue. It is not.

Yet that’s precisely the kind of annoying hype I heard non-stop while attending the 2015 Medicine X conference at Stanford University – where every computer science undergrad seemed to own both a Tesla and at least one start-up company. That was a full decade ago – plenty of time for those Stanford kids to now be the billionaires they had guaranteed us they’d become, with their lofty promise to “CHANGE HEALTH CARE AS WE KNOW IT!” (Those Stanford kids were very loud!)

The trouble was that many of those loud students bragging about their global disruption plans left the unfortunate impression during our Stanford encounters that they were far more interested in telling real patients what we need or want than in asking real patients about our actual needs and wants. (And no! Your flashing, beeping, glow-in-the-dark pill dispensers are NOT on my wish list.)

Artificial Intelligence systems learn by recognizing patterns of words in large datasets from many sources – including your own social media content. Your comments, your likes and your online activity serve as training signals for AI. 

Good luck with that. 

  1. Al Hamid et al. “Gender Bias in Diagnosis, Prevention, and Treatment of Cardiovascular Diseases: A Systematic Review.” Cureus. 2024 Feb 15. 

NOTE FROM CAROLYN: I wrote much more about becoming a patient – no matter the diagnosis – in my book, “A Woman’s Guide to Living with Heart Disease”. You can ask for it at your local library or favourite bookshop, or order it online (paperback, hardcover or e-book) at Amazon – or order it directly from my publisher, Johns Hopkins University Press (use their code HTWN to save 30% off the list price).

3 thoughts on “Pick one: non-human AI – or a real person?”

  1. Carolyn, this really struck a chord with me.

    You so clearly articulate the difference between information and true human understanding, the kind that comes only from lived experience and shared vulnerability.

    Marie Ennis-O’Connor

    1. Hello Marie – I don’t think I’m alone in being very leery of this coming tech avalanche that is AI. As a wise skeptic wrote recently:

      “In classic Silicon Valley tech-bro style, they want to develop (or more accurately to fund the development of) AI with no regulatory oversight, no constraints on their ability to get richer, and the right to steamroll anybody who has concerns about the whole matter…”

      Thank you as always for your kind words. Happy and Healthy New Year to you. . .❤️
