Over the years, I’ve had to teach myself the bare-bones basics of interpreting cardiac studies. I’m certainly no research scientist (although I did spend 20 years of my life with one – does that count at all?), but I can tell you that one good place I like to start is the methodology section of any study.
Wait! Don’t leave yet! I know, I know, this may seem crushingly dull. But the methods info is how I learned, for example, that out of over 5,000 participants recruited for the $100 million ISCHEMIA study in 2019, only 23 per cent were women. At the time, I offered a helpful editing suggestion to the Washington Post about their sensational coverage of ISCHEMIA (“Stents and Bypass Surgery are No More Effective Than Drugs!!”) by requesting that this important clarifier be added to the end of that headline: “FOR MEN!”
The reality is that you simply cannot offer up credible scientific conclusions about how research results will affect women’s health care when you are deliberately under-representing over half the population in your study.
And yet this gender gap keeps happening – especially in heart research.
Despite growing awareness (and even funding requirements) that women must be represented equitably across all areas of medical research, we still see that women’s participation “has been the lowest in cardiovascular trials”, as McGill University researchers Drs. Louise Pilote and Valeria Raparelli wrote in 2018.
I often feel the urge to offer my own helpful clarifiers about many studies out there after reading their methodology sections. These typically include information about:
- the participants (most researchers now prefer that word over the previously used “subjects”) such as age, race, gender, and the number of participants
- the materials used by researchers to gather and analyze data from the study (plus a list of sources used and why these sources were essential for this study)
- the procedures used (such as collection of data or the instructions given to participants)
Consider, for example, a study with a small number of participants. Ten people is often cited as the bare minimum. But honestly, can a group of ten people really call itself a scientific study? Ten people is not a study – it’s a (pre-COVID) family birthday dinner at my house, which similarly has no business posing as research. The health journalism watchdog Health News Review shares this warning about small medical studies:
“Be vigilant when writing about them, and skeptical when reading about them.”
I feel a similar urge to offer helpful edits when reading about animal or cell studies in the research laboratory (with an apologetic nod to my friends working in these labs). Whenever I see flashy headlines announcing yet another miracle medical breakthrough, but based only on furry subjects living in little metal cages, I try to correct this intentional obfuscation with a disclaimer, like:
“GOOD NEWS IF YOU’RE A MOUSE!”
Call me when you move into human trials.
There are a number of specific issues surrounding the experiences of laboratory animals vs. humans, but none are more frightening than this: animal models are poor predictors of treatment safety in humans.
A review of 221 animal experiments, for example, found that animal results agreed with later human studies only about 50 per cent of the time.1 In other words, you could have flipped a coin with equal predictive accuracy.
Thalidomide was one of the most dramatic examples of this problem during my lifetime – a drug approved in Canada and other countries in the 1950s to treat morning sickness in pregnant women. Thalidomide didn’t cause limb deformity birth defects in baby animals in the lab, but those deformities did happen to over 10,000 babies of pregnant humans.
And don’t even get me started on the wholesale hyping of unpublished, non-peer reviewed medical study results, often distributed in a flood of press releases created by Communications Department staff, only to be picked up verbatim by understaffed and overworked media newsrooms. The trouble with most press releases is the tendency toward over-hyped benefits coupled with under-reporting of potential problems. Again, check out Health News Review for the dangers of questionable science delivered by press release.
In a study published in the British Medical Journal (BMJ), we learned that exaggerated reporting of medical studies can be traced back to the press releases submitted by 20 leading U.K. universities where the study authors work. Forty per cent of the press releases they investigated included health advice that was not actually found in the original paper. And 36 per cent over-inflated the relevance of animal or cell studies to humans.
Here’s a cardiac study (submitted to, accepted by and published in a real cardiology journal) that I like to mention, mostly because I was so intrigued by its findings a few years ago. It was reported by senior researchers at Harvard and Massachusetts General Hospital in Boston, who called it the Gratitude Research in Acute Coronary Events (GRACE) study.2
Acute Coronary Syndrome (ACS) is a term used to describe any dangerous drop in blood flow feeding the heart muscle. If you’ve been diagnosed with a heart attack, it’s likely that the first potential diagnosis entered into your medical chart in the Emergency Department was ACS. Cardiac symptoms are assumed to be ACS until proven otherwise.
The Boston team knew that within the first year post-ACS diagnosis, about 20 per cent of ACS patients will either be re-admitted to the hospital for cardiac emergencies – or they’ll be dead.
They also knew from previous studies that positive psychological well-being is associated with reduced patient mortality.3 So they decided to focus on positive factors like optimism and gratitude for their GRACE study.
Here’s what the Boston researchers learned: the trait of optimism was independently associated with greater physical activity and reduced rates of cardiac re-admissions to the hospital after six months of follow-up. Gratitude was not associated with improved outcomes.
I was about to write a blog post about the GRACE study findings at the time, but a couple of niggling limitations of this study were bothering me. For example, the researchers followed participants for only six months, despite current stats that suggest the first year is the time period we worry about.
Why weren’t the participants followed up for at least 12 months to address this limitation? Astonishingly, this disconnect between medical interventions approved for long-term use in chronic illness and the short-term studies their approvals were based on is common (e.g. the Swedish drug study in which “long-term” meant just one year of follow-up). See also: Our Cardiac Meds – in Real Life, Not Just in Studies
The most glaring limitation, however, was that 84 per cent of the GRACE study participants were white males. The researchers admitted that “the lack of significant differences in outcomes by sex and race may have been due to the relatively small number of women and non-White participants.”
Duh. . .
They further explained that, by the middle of study recruitment, they had even considered limiting the enrollment of white men, but decided that “having a greater number of total participants was the greatest priority.”
Oh. Well then, okay. . .
Does this excuse simply let researchers off the hook when they continue to submit papers for publication based on lopsided participant populations in which women and/or racial minorities are deliberately under-represented?
I understand the pressure on medical researchers to “publish or perish”. I really do. And I have long suspected that this relentless stress might help to explain why all the really good research topics have already been taken.
As my irreverent Mayo Clinic heart sister Laura Haywood-Cory (who survived a heart attack at age 40 caused by Spontaneous Coronary Artery Dissection) once asked in response to a 2011 Heart Sisters post:
“We really don’t need yet another study that basically comes down to: ‘Sucks to be female. Better luck next life!’, do we?”
Well, Laura – apparently we do. Because those studies just keep on coming.
Like most published medical research, the GRACE study includes the standard CYA clause at the end: “future studies are required.” Maybe those future studies will include an accurate acknowledgement of the difference between long-term and short-term, and more attention to recruiting an adequately representative gender and racial balance.
But it’s not just me expecting this improvement. For all researchers counting on National Institutes of Health funding, it’s the law now, as the NIH/U.S. Department of Health and Human Services website warns:
“All NIH-funded clinical research must address plans for the inclusion of women and minorities within the application or proposal. The primary goal of this law is to ensure that research findings can be generalizable to the entire population.”
The entire population. Is it too much to expect that women and minorities must be considered part of the “entire population”?
While you’re pondering that trick question, consider educating yourself. Check out Health News Review’s excellent user-friendly toolkit called Tips for Analyzing Studies, Medical Evidence and Health Care Claims.
And until you do that, please don’t forward to your friends the latest medical miracle breakthrough that’s just been “published” on Facebook – because the fact is that it’s likely neither a miracle, nor a breakthrough.
1. Perel P, Roberts I, Sena E, et al. “Comparison of treatment effects between animal experiments and clinical trials: systematic review.” BMJ. 2007;334:197-203.
2. Huffman JC, Beale EE, Celano CM, et al. “Effects of Optimism and Gratitude on Physical Activity, Biomarkers, and Readmissions After an Acute Coronary Syndrome: The Gratitude Research in Acute Coronary Events Study.” Circ Cardiovasc Qual Outcomes. 2016;9(1):55-63.
3. Chida Y, Steptoe A. “Positive psychological well-being and mortality: a quantitative review of prospective observational studies.” Psychosom Med. 2008 Sep; 70(7):741-56.
Image: Enrique Meseguer, Pixabay
Q: What would you like to see future medical research focus on?
NOTE FROM CAROLYN: I wrote more about what to look for in cardiac research in my book, “A Woman’s Guide to Living with Heart Disease”, published by Johns Hopkins University Press. You can ask for it at your local library or favourite bookshop, or order it online (paperback, hardcover or e-book) at Amazon, or order it directly from my publisher (use their code HTWN to save 20% off the list price of my book).
– Educate yourself! Health News Review’s excellent user-friendly toolkit called Tips for Analyzing Studies, Medical Evidence and Health Care Claims
– Understanding Clinical Trials: A Jargon Busting Guide – a useful non-scientist’s glossary of medical research terminology from Marie Ennis- O’Connor