Better Health Care Newsletter – April 2024

Readers of this newsletter know we’re big fans of high-quality medical research. Well-designed, statistically robust studies have saved lives, changed medical practice, and made the world a better place.

But the corollary is that low-quality research, weakly designed and built on appealing anecdotes rather than solid evidence, is bad for all of us.

Yet telling the difference between great and not-so-great research is hard, even for doctors. And when researchers actually lie, and publish fake data for personal gain — well, a special spot in hell awaits these individuals.

This month, we take a deeper dive into the underside of medical research, focusing on the fakes, scandals, and other bad stuff that sullies the whole field.

Misinformation is a threat to many aspects of democratic life, none more so than medical care. Our goal this month is to help savvy patients safeguard themselves from shoddy medical-scientific studies and their potential health harms. Knowing something about the skeevy steps that twist up published papers can save invaluable time, money, pain, and grief.

Top photo credit: National Cancer Institute via Unsplash

The sadly prominent problem of shoddy research

Consider some recent high-profile cases of retracted research studies from prominent places:

§ The Dana-Farber Cancer Institute in Boston, an affiliate of the Harvard Medical School and preeminent in its field, has retracted six studies and told medical journals to correct dozens of others. As the New York Times reported: “In many cases … images in the papers had been stretched, obscured, or spliced together in a way that suggested deliberate attempts to mislead readers. The studies … included some published by Dana-Farber’s chief executive, Dr. Laurie Glimcher, and its chief operating officer, Dr. William Hahn.”

§ The president of Stanford University stepped down from his elite post after months of scrutiny of medical-scientific research he led over years. As the New York Times reported of the scholarly-professional rebuke by an independent oversight panel that led Marc Tessier-Lavigne to leave his presidency but to stay as a tenured biology professor: “The review, conducted by an outside panel of scientists, refuted the most serious claim involving Dr. Tessier-Lavigne’s work — that an important 2009 Alzheimer’s study was the subject of an investigation that found falsified data and that Dr. Tessier-Lavigne had covered it up. The panel concluded that the claims ‘appear to be mistaken’ and that there was no evidence of falsified data or that Dr. Tessier-Lavigne had otherwise engaged in fraud.

“But the review also stated that the 2009 study, conducted while he was an executive at the biotech company Genentech, had ‘multiple problems’ and ‘fell below customary standards of scientific rigor and process,’ especially for such a potentially important paper. As a result of the review, Dr. Tessier-Lavigne was expected to request substantial corrections in the 2009 paper, published in Nature, as well as in [?] another Nature study. He also said he would request retraction of a 1999 paper that appeared in the journal Cell and two others that appeared in Science in 2001.”

§ Memorial Sloan Kettering Cancer Center announced in February that it is investigating a Columbia University cancer surgery research chief and a junior cancer biologist over more than two dozen studies they have published. Dr. Sam S. Yoon, a lead investigator on the works, has asserted they would lead to improved cancer care. But as the New York Times reported, the duo’s work — with many other researchers globally — “is shot through with suspicious data.” Four suspect studies have just been retracted, and a fifth has a “stern” warning note attached to it. In one study already retracted, the newspaper reported: “Identical constellations of cells were said to depict separate experiments on wholly different biological lineages. Photos of tumor-stricken mice, used to show that a drug reduced cancer growth, had been featured in two previous papers describing other treatments. Problems with the study were severe enough that its publisher, after finding that the paper violated ethics guidelines, formally withdrew it within a few months of its publication in 2021. The study was then wiped from the internet, leaving behind a barren web page that said nothing about the reasons for its removal. As it turned out, the flawed study was part of a pattern.”

The list runs on, including controversies surrounding: the call by Harvard Medical School and Brigham and Women’s Hospital to retract 31 studies with falsified or fraudulent data published by a prominent cardiac researcher regarding stem cells and heart muscle regeneration … the pulled and questionable studies by a researcher at the renowned M.D. Anderson Cancer Center in Houston on spices, plants, and their purported cancer-fighting properties … the saga over the “dark [cancer] matter” studies of Dr. Carlo Croce at Ohio State University and the subsequent retractions, corrections, and concerns over research management by this member of the august National Academy of Sciences.

During the pandemic, regular folks got a ringside seat to the academic pugilism as researchers raced to expand the scientific knowledge about a deadly, novel, and global infection. With lives, reputations, and big money at stake, researchers and institutions hurried publication along to an eager, often-confused public.

This resulted in bad research getting published about medications that were supposed to “cure” Covid-19 but were harmful instead. One “study” involved just three dozen patients for a matter of days, and another involved a purported big-data collection of patient information.

The firestorm that erupted over medical-scientific research and published studies during the pandemic was compounded by the bungled handling of an infection that eventually claimed more than 1 million American lives. Misinformation, including unfounded utterances by the then-leader of the free world, abounded, and it has done serious damage to the credibility of medicine and science, experts say.

That harm lives on and is not easy to combat. Let’s not forget that key aspects of it — denial about the proven effectiveness and value of vaccination against infectious diseases — trace to grievous falsity in published research by a disgraced British doctor about the supposed connection between childhood vaccinations and autism.

Big forces can warp research

Greed. Ego. Hubris. Mendacity. Sadly, these all-too-human traits pop up in every field. With the stakes so high in medical research, it’s unsurprising that misbehavior occurs.

But critics say major forces play an increasing role in unmooring the credibility and responsibility that professionals and the public alike should expect in a life-saving field like research. These factors include:

Darwinian publishing demands

The pressure has grown crushing on medical-scientific researchers to publish or perish. Their careers and livelihoods force them to scrap for what critics say is too small a pool of grants for serious studies. Much of the medical dollar in this country, of course, goes to direct and indirect patient care or, to a lesser degree, to clinical trials, especially those backed by Big Pharma.

The most lucrative parts of the financial pie devoted to research get dominated by big-name principals — critics say these are disproportionately older white men. A hierarchical system is built on their wringing cheap labor from “colleagues” who are younger, female, or members of minority groups. Still, to attract needed attention, funding, and other resources — and perhaps to meet requirements in their own workplaces — researchers produce a firehose of publications.

The volume of this material has become breathtaking. It also raises questions about whether such vast numbers of studies, especially those in medicine, benefit a field that is legendary for being conservative, slow to change, and pokey at best in absorbing and applying knowledge and innovation.

A ‘publication’ bonanza

The very way that studies get disseminated has changed dramatically — and not in positive ways, critics say. Publishing enterprises for medical-scientific studies have popped up around the planet. Many of these spurious journals care not a whit for the established practices that helped to build the credibility of rigorous research. New journals — dubbed by some in the field as “paper mills” — may not ask established peers to carefully review researchers’ work. They may not set or hold to their own publication standards, ethics, or disclosure requirements (such as having researchers show their underlying data completely or give information about potential conflicts of interest). Indeed, they may be soliciting submissions, arranging favorable reviewers for them, and perhaps pocketing payments from those who would benefit from research publication (including authors, funders, or those with products or practices mentioned in studies).

With the rise of the internet and the right-now timeliness that online audiences can demand, even more traditional medical-scientific journals and their editors have pushed toward faster, intermediate web posting of research. Researchers and the publications promise to update works with additions, corrections, and fuller information as works in progress advance. This doesn’t always occur. Critics also note that the global reach of the internet has led some journal publishers to just leave blank web pages online with no explanation after studies get challenged and retracted. In the meantime, of course, others may have relied on such works. The bad information spreads.

Image tweaks made easy

Technology has revolutionized how the world takes photographs and videos, as anyone who owns a smartphone knows. But in the editing of those images lies a lot of potential mischief. Objects can be made to disappear or move in pictures. Parts of them can be zoomed in on, enlarged, and made central. Users can manipulate pictures at will, both their own and others’. These capabilities were prevalent even before artificial intelligence promised to allow far greater image manipulation.

But too few users of these increasingly powerful imaging tools learn about truthfulness and integrity along with the technology. When researchers clean up, clarify, or “pretty up” images in their studies, they can create nightmares for the credibility of those studies, critics say.

As watchdogs try to sniff out falsity or fraudulence, the images in submissions can be especially tough to vet. Doing so requires not only expertise in the subject matter but also a trained eye and formidable recall for details. And as more works come into responsible journals, the pool of reviewers who can also scrutinize and evaluate images remains small.

Foibles of peer review

A word is in order, too, about the purported safeguard of peer review, which is often held up as an important bar to shoddy studies. Critics long have called on the medical-scientific establishment to fess up about the shortcomings in peer review.

Too often, especially internally in institutions, already beleaguered investigators are required to volunteer — with minimal or no extra pay — to read over others’ work, particularly that of junior personnel. The enlightened do so with diligence, earnestness, and the desire to improve a study or help a colleague. But reviewers also can fulfill their duty by nitpicking, focusing on issues like formatting or citation form, rather than examining the data, analysis, calculation, methods, conclusions — and biases — in studies.

Professional journals may provide minimal honoraria for peer reviewers, effectively telling elite experts that they’re commanding their time — work that can take hours or even days — for what a neighborhood kid makes for flipping burgers. Reviewers also must keep in mind their reputational stake when undertaking critical scrutiny of others’ efforts: Is it worth tangling with a peer if a study is not up to par? If an eminence in the field has cranked out a clunker, what’s the price for calling her out? How deep a dig is really called for, especially if a work has big-data or extensive calculation and statistical analysis attached to it?

How patients can safeguard themselves

Go figure: More than 3 million articles get published annually in medical-scientific journals, experts estimate.

For patients, though, the crucial question might be: What am I doing with this study in my hands now? What might it mean to me, my loved ones, and my health and medical care?

There’s some good news about helping with critical queries like these. Popular media have profiled prominent watchdogs of the field, Good Samaritans who have the expertise and resources to raise alarms about skeevy studies (see below).

But, ultimately, patients themselves can play a crucial role in examining research and helping to decide if studies have salience for them.

Let’s start at the top: This study you’re looking at right now — where did it come from? Did the doctor treating you give it to you? Did it come from a reputable source, like one of the handful of well-known and respected medical journals? Did you use the federal government database called PubMed, which gives patients and experts a head start on tens of millions of biomedical and life science articles and studies (with a high degree of caution still needed!)?
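
For readers comfortable with a bit of scripting, PubMed can also be queried programmatically through the National Library of Medicine’s public E-utilities interface. Below is a minimal sketch in Python; the search term and result count are illustrative choices, and heavy users should consult NLM’s posted usage limits and API-key guidance:

```python
# A minimal sketch of a programmatic PubMed search via NLM's public
# E-utilities API. The query term and result count are illustrative.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def pubmed_search(term: str, max_results: int = 5) -> list[str]:
    """Return PubMed IDs (PMIDs) matching a search term."""
    resp = requests.get(f"{EUTILS}/esearch.fcgi", params={
        "db": "pubmed", "term": term, "retmax": max_results, "retmode": "json",
    })
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

def pubmed_titles(pmids: list[str]) -> list[str]:
    """Fetch article titles for a list of PMIDs."""
    resp = requests.get(f"{EUTILS}/esummary.fcgi", params={
        "db": "pubmed", "id": ",".join(pmids), "retmode": "json",
    })
    resp.raise_for_status()
    summaries = resp.json()["result"]
    return [summaries[pmid]["title"] for pmid in pmids]

if __name__ == "__main__":
    # "Retraction of Publication" is a real PubMed publication-type filter.
    pmids = pubmed_search('"retraction of publication"[Publication Type]')
    for pmid, title in zip(pmids, pubmed_titles(pmids)):
        print(pmid, title)
```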

What are the authors’ credentials? Does their training match the topic on which they are writing? If not, is there a clear reason why they might be considered to have expertise in another field?

What does an online search tell you about the investigators’ institutions — are they well-known and respected academic medical centers or teaching hospitals? Research facilities or specialized think tanks? Do you find citations for other works of theirs in the appropriate area? Do your queries show that the researchers are cited often by others? Look around a little at the study to see if it has a so-called COI section — in other words, have the researchers publicly disclosed if they have potential conflicts of interest, perhaps in their funding? Do they disclose financial support from Big Pharma, device makers, or other special interests, especially in the medical-scientific field?

The American Academy of Family Physicians has put online a helpful guide to breaking down typical published research, explaining its various parts and what readers should expect to find in them. The federal government, through the National Library of Medicine (which also runs the PubMed database), provides a robust online guide to breaking down medical-scientific studies and grasping their parts and their prospective significance.

It may be tough for regular folks to vet the voluminous data or complex analyses, including major computational work, contained in many research papers. But do the researchers at least provide this material someplace, in a meaningful effort at transparency about their work?

For regular folks, a few fast tests of the relevance of published studies to their lives can be revelatory:

§ Does the research focus on humans or animals, particularly lab rats? It can be a long stretch for findings in animals to be tested and proven to affect humans in useful ways.

§ What is the N, or sample size, involved in the study? In his book Thinking, Fast and Slow, Daniel Kahneman (a Nobel Prize winner who died a few weeks ago) details problems everyone experiences with cognitive biases. He explains what he drolly calls “the law of small numbers”: how researchers can be bedeviled by erroneous conclusions derived from studying groups that are too small or too few in number. (A short simulation after this list shows how easily that happens.) It’s an all-too-common human failure. We look, for example, at nine U.S. Supreme Court justices and decide that only a handful of Ivy League law schools must be the best places to study law. Or we see a trio of sculpted dudes at the gym and start taking the same supplements they do. Or we buy the kids certain computer software because it’s what a few A+ fellow students use.

§ Look closely: Does the study actually say its findings directly affect an illness or condition? Or does the work address only one key, potentially important aspect or process of it? Sure, real and dramatic breakthroughs occur. More likely is slow progress before everything falls together for “Aha!” moments. Critics have raised concern about studies that examine “surrogate endpoints,” which a federal government website helpfully defines as “an indicator or sign used in place of another to tell if a treatment works.” Patients can quickly find concrete reasons to wonder about approaches or therapies based on these indicators. They can help to shrink tumors, for example. But they also may not extend patients’ lives. Is it worth mortgaging loved ones’ lives for weeks or months of treatment with incremental outcomes? But, researchers and institutions ask in response, without giving patients, donors, and funders some indication that progress is occurring, how else will money keep flowing for studies?
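
To make Kahneman’s point concrete, here is a minimal simulation in Python. All of the numbers (a treatment that truly helps exactly half of patients, the trial sizes, and the cutoffs for a “dramatic” result) are assumptions chosen for illustration; the takeaway is how often pure chance makes small trials look remarkable:

```python
# Illustrative simulation of the "law of small numbers": a hypothetical
# treatment truly helps 50% of patients, yet small trials often make it
# look dramatically better or worse than that by luck alone.
import random

TRUE_RESPONSE_RATE = 0.5
SIMULATED_TRIALS = 10_000

def extreme_fraction(n_patients: int) -> float:
    """Fraction of simulated trials whose observed response rate lands
    far from the true 50% (here, at or beyond 70% or 30%)."""
    extreme = 0
    for _ in range(SIMULATED_TRIALS):
        responders = sum(random.random() < TRUE_RESPONSE_RATE
                         for _ in range(n_patients))
        rate = responders / n_patients
        if rate >= 0.7 or rate <= 0.3:
            extreme += 1
    return extreme / SIMULATED_TRIALS

for n in (10, 30, 100, 1000):
    print(f"N = {n:4d}: {extreme_fraction(n):6.1%} of trials look dramatic by chance")
```

With N = 10, roughly a third of these imaginary trials land at 70% or better (or 30% or worse) purely by chance; with N = 1,000, essentially none do. That is the whole case for asking about sample size.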

Watchdogs worth watching

With the millions of medical and scientific articles flooding out each year, experts estimate that doctors — a vital audience — would need to put in 27 hours daily just to stay current.

The good news is that a handful of watchdogs have emerged as credible overseers of problematic studies, especially those coming from what should be gold-standard places. Experts have tried to set rigorous standards to advance the collective knowledge, protect subjects and users (other researchers, patients, clinicians, and institutions), and propel beneficial work.

The federal government has its Office of Research Integrity to work with academic and other research institutions to police the proper use of billions of dollars in U.S. research funding annually. The agency website lists the 10 or so cases annually in which researchers have been sanctioned for misbehavior. The Wall Street Journal reports that ORI is pushing the research community to toughen up its oversight and punishment for misconduct.

Ivan Oransky, who teaches medical journalism at NYU’s Arthur L. Carter Journalism Institute, estimates that many more studies should be “skinned back” — that is, retracted, corrected, or pulled from circulation. Retraction Watch, the site he runs with Adam Marcus, says that the steeply rising number of studies handled this way — roughly 5,500 in 2022 — represents a “vast undercount of how much misconduct and fraud exists” in medical-scientific research.

He offers some hope that increasing vigilance will quash the spread of more shoddy studies, writing in an Op-Ed in a British newspaper:

“Retractions have risen sharply in recent years for two main reasons: first, sleuthing, largely by volunteers who comb academic literature for anomalies, and, second, major publishers’ (belated) recognition that their business models have made them susceptible to paper mills – scientific chop shops that sell everything from authorships to entire manuscripts to researchers who need to publish lest they perish.”

The New Yorker magazine profiled microbiologist Elisabeth Bik and her sharp-eyed, mostly voluntary efforts to sleuth out problematic images in thousands of sketchy studies.

A new group of watchdogs, the New Yorker and other media have reported, has been empowered in part by technology’s rise. Researchers have anonymously raised red flags about suspect studies on a moderated website called PubPeer. This site, and social media posts referring to it, have provided early alarms and growing momentum for needed examinations of iffy studies.

The speed and reach of the internet also have given new force to the efforts of Samaritans like Sholto David, 32, who holds a Ph.D. in cellular and molecular biology from Newcastle University in England and lives in Pontypridd, Wales. He told the New York Times that he is a scientist who wants to see better, more credible work thrive in his field. That goal has led him to become a sleuth of fraudulent images, including those he found in since-retracted studies from the prestigious Dana-Farber Cancer Institute.

By the way, research watchdogs and professional journals alike are getting help from the latest buzz-worthy technology: artificial intelligence, the tech publication Ars Technica has reported. The evolving software is far from perfect. But it can scan huge volumes of images and help humans discover those that show signs of manipulation.
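
The reporting does not spell out how any particular journal’s software works, but the core idea behind automated image screening can be sketched simply: reduce each figure to a compact “perceptual hash” and flag pairs of figures whose hashes nearly match, a common approach to duplicate detection. Here is a minimal illustration in Python, assuming the third-party Pillow and imagehash packages; the folder name and distance threshold are placeholders:

```python
# A minimal sketch of duplicate-figure screening via perceptual hashing.
# Assumes the third-party "Pillow" and "imagehash" packages; the folder
# name and Hamming-distance threshold below are illustrative placeholders.
from itertools import combinations
from pathlib import Path

import imagehash
from PIL import Image

HAMMING_THRESHOLD = 6  # assumed cutoff; tune on known duplicate pairs

def screen_figures(folder: str) -> list[tuple[str, str, int]]:
    """Flag image pairs whose perceptual hashes are suspiciously close."""
    hashes = {p.name: imagehash.phash(Image.open(p))
              for p in Path(folder).glob("*.png")}
    flagged = []
    for (name_a, hash_a), (name_b, hash_b) in combinations(hashes.items(), 2):
        distance = hash_a - hash_b  # imagehash defines "-" as Hamming distance
        if distance <= HAMMING_THRESHOLD:
            flagged.append((name_a, name_b, distance))
    return flagged

if __name__ == "__main__":
    for a, b, d in screen_figures("extracted_figures"):
        print(f"Possible duplicate: {a} vs. {b} (distance {d})")
```

A tool like this cannot prove misconduct; it only surfaces candidates that trained human eyes, like Bik’s or David’s, must then judge.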

Integrity diminished by delayed fixes and failed reporting

Two bad, less publicized practices have eroded the integrity of medical-scientific research.

Confronted with challenges to published work, researchers and institutions have slow-walked responses, only grudgingly correcting or retracting studies.

Too many researchers take another problematic route with studies that fail or that don’t show results as expected: They simply do not publish this equally important information.

This is detrimental to advancements in areas of concern — and it may violate federal laws affecting grants awarded for clinical trials, which are heavily regulated studies involving humans.

Charles Piller reported extensively about this problem in his investigation for Stat, a medical and science news site, writing almost a decade ago:

“Stanford University, Memorial Sloan Kettering Cancer Center, and other prestigious medical research institutions have flagrantly violated a federal law requiring public reporting of study results, depriving patients and doctors of complete data to gauge the safety and benefits of treatments, a STAT investigation has found. The violations have left gaping holes in a federal database used by millions of patients, their relatives, and medical professionals, often to compare the effectiveness and side effects of treatments for deadly diseases such as advanced breast cancer.

“The worst offenders included four of the top 10 recipients of federal medical research funding from the National Institutes of Health: Stanford, the University of Pennsylvania, the University of Pittsburgh, and the University of California, San Diego. All disclosed research results late or not at all at least 95% of the time since reporting became mandatory in 2008.”

That investigation prompted a flurry of discussion and promised remedies. But an article published in a medical journal in May 2021 found laggard reporting still too prevalent, despite federal authorities having the power to levy $10,000-a-day fines:

“Although compliance with reporting requirements has improved, there have been numerous reports that many sponsors have continued to violate the law. A 2020 report in Science that involved scrutiny of more than 4,700 clinical trials found that although most large drug companies and some universities had markedly improved compliance, less than 45% had their results reported early or on time to ClinicalTrials.gov. Of 184 sponsor organizations with reporting for at least 5 trials due as of September 2019, 30 companies, universities, or medical centers never met a single deadline.

“Another analysis of more than 4,200 trials published by The Lancet last year described overall compliance with the law as poor and showed no improvement since 2018. Noting that findings ‘raise important questions around lack of enforcement and the need for public accountability,’ the authors said that they would maintain updated compliance data for trials and sponsors at FDAAA TrialsTracker, a website created by the Evidence-Based Medicine DataLab at the University of Oxford in the United Kingdom. FDAAA TrialsTracker currently says that about 28% of 9,937 clinical trials have not reported their findings to ClinicalTrials.gov, and that at this level of noncompliance, the FDA could have imposed penalties exceeding $19 billion.”
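
A quick back-of-the-envelope check, using only the figures quoted above, shows how an estimate on the order of $19 billion hangs together (the days-in-violation figure is implied rather than stated, so treat this as a rough consistency check, not the trackers’ actual method):

```python
# Rough consistency check of the FDAAA TrialsTracker figures quoted above.
unreported_trials = 0.28 * 9_937    # ~2,782 trials without reported results
max_fine_per_day = 10_000           # statutory maximum cited above, in dollars
total_penalty = 19_000_000_000      # "exceeding $19 billion"

implied_days = total_penalty / (unreported_trials * max_fine_per_day)
print(f"~{unreported_trials:,.0f} trials x ~{implied_days:,.0f} days "
      f"x ${max_fine_per_day:,}/day is about ${total_penalty:,}")
```

In other words, the quoted total implies roughly two years of maximum daily fines per unreported trial.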

By 2023, the federal Food and Drug Administration’s own website about complying with U.S. laws requiring reporting of clinical trial results showed that fewer than a half dozen individuals or institutions had been formally warned of problems.

The same pokey pace displayed in dealing with research reporting failures has plagued efforts to correct or retract problem studies, Science magazine has reported in an article detailing a decade-long drive by critics to remedy 300 studies by a pair of Japanese doctors published in 78 journals:

“The critics’ efforts to correct the record, which they detail in a [published paper] in Accountability in Research, offer a high-profile example of familiar problems in scientific publishing. Retractions come slowly — often years after complaints arise, if at all — in part because journals may defer to institutional investigations, which can be slow, unreliable, or absent. Journals’ decisions also lack transparency. As such, efforts to track the fate of suspect papers are vital to ‘ensure that journal articles represent a robust and dependable body of evidence,’ says Ursula McHugh, an anesthesiologist at St James’s Hospital in Dublin who has studied retractions.”

To underscore the point for patients made in the quote from McHugh: junky studies hang around for too long, relied on by others and polluting potential advancement. As Piller’s work and the follow-ups by others also make clear, researchers must report their findings — including their failures — so others don’t keep going down unhelpful investigative paths. The stall in reporting clinical trial data is agonizing because, as Piller pointed out, the missing information included data from studies of potentially life-or-death treatments.