
Fake scientific papers push research credibility to crisis point


According to theguardian.com/science/202...

Last year the annual number of papers retracted by research journals topped 10,000 for the first time. Most analysts believe the figure is only the tip of an iceberg of scientific fraud...

Shadow organisations – known as "paper mills" – began to supply fabricated work for publication in journals there [China]. The practice has since spread to India, Iran, Russia, former Soviet Union states and eastern Europe, with paper mills supplying fabricated studies to more and more journals as increasing numbers of young scientists try to boost their careers by claiming false research experience...

The harm done by publishing poor or fabricated research is demonstrated by the anti-parasite drug ivermectin. Early laboratory studies indicated it could be used to treat Covid-19 and it was hailed as a miracle drug. However, it was later found these studies showed clear evidence of fraud, and medical authorities have refused to back it as a treatment for Covid.

“The trouble was, ivermectin was used by anti-vaxxers to say: ‘We don’t need vaccination because we have this wonder drug,’” said Jack Wilkinson at Manchester University. “But many of the trials that underpinned those claims were not authentic.”

Wilkinson added that he and his colleagues were trying to develop protocols that researchers could apply to reveal the authenticity of studies that they might include in their own work. “Some great science came out during the pandemic, but there was an ocean of rubbish research too. We need ways to pinpoint poor data right from the start.”

Written by bennevisplace
26 Replies
AussieNeil (Administrator)

The increased use of preprint servers during the pandemic (sharing papers before they were peer reviewed) in order to quickly disseminate potentially life-saving research certainly exacerbated the issue. Now, unfortunately, this issue of fake scientific papers is only going to get more difficult to manage, with AI being used to generate them!

Researchers now need to openly publish their methodology tools online (GitHub is commonly used), and be prepared to share their patient data with other researchers. That's challenging with regard to patient medical data, because even anonymised data can be de-anonymised to identify individuals in some circumstances.
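To illustrate that de-anonymisation risk, here's a minimal k-anonymity check in Python. The records and column names are invented for illustration; real datasets have their own schemas. If any combination of seemingly harmless "quasi-identifiers" is unique to one person, that person can potentially be re-identified:

```python
# Minimal k-anonymity check: how re-identifiable is an "anonymised" dataset?
# The records and column names below are invented for illustration.
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Return the size of the smallest group sharing the same quasi-identifier
    values. k == 1 means at least one person is uniquely identifiable."""
    return int(df.groupby(quasi_identifiers).size().min())

records = pd.DataFrame({
    "age_band":  ["60-64", "60-64", "65-69", "65-69"],
    "postcode":  ["AB1",   "AB1",   "AB2",   "AB3"],
    "diagnosis": ["CLL",   "CLL",   "CLL",   "CLL"],
})

k = k_anonymity(records, ["age_band", "postcode"])
print(f"k = {k}")  # k = 1: two of the four patients are unique on these columns
```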

The ivermectin example is an excellent one. From my decade of experience reviewing the evidence for unproven (alternative) CLL treatments that members asked about, I was doubtful that it would work, given the well-known difficulty, after promising in vitro research (observing under a microscope what happens to body cells when exposed to the investigational treatment), of achieving blood serum levels that are high enough to be effective yet still safe. I was beginning to think I was wrong about ivermectin when news broke that the largest study, reporting the greatest improvement, had used faked patient data! When that paper was removed from meta-analyses, ivermectin showed no appreciable benefit against COVID-19. You can still find sites online that cherry-pick papers so that a meta-analysis supports the purchase of ivermectin and ivermectin-containing "cures".
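As a rough illustration of how a single fraudulent study can swing a pooled result, here's a minimal fixed-effect (inverse-variance) meta-analysis sketch in Python. The effect sizes and standard errors are invented purely for illustration, not real ivermectin trial data:

```python
# Fixed-effect inverse-variance meta-analysis: one fraudulent study with a
# large "effect" and high precision can dominate the pooled estimate.
# All numbers are invented for illustration, not real trial data.
import math

# Each study contributes a (log risk ratio, standard error) pair.
honest_studies = [(-0.05, 0.20), (0.10, 0.25), (-0.02, 0.30)]
fraudulent = (-0.90, 0.10)  # big apparent benefit, suspiciously precise

def pooled_log_rr(studies):
    weights = [1 / se ** 2 for _, se in studies]  # inverse-variance weights
    est = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return est, se

for label, studies in [("with fraudulent study", honest_studies + [fraudulent]),
                       ("honest studies only  ", honest_studies)]:
    est, se = pooled_log_rr(studies)
    lo, hi = math.exp(est - 1.96 * se), math.exp(est + 1.96 * se)
    print(f"{label}: pooled RR = {math.exp(est):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With these invented numbers the pooled risk ratio shows a large apparent benefit when the fraudulent study is included, and essentially no effect without it, which mirrors what happened when the suspect ivermectin trials were excluded.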

I've documented the unfolding waste of resources on randomised controlled trials (RCTs) conducted to evaluate the claimed effectiveness of ivermectin here:

healthunlocked.com/cllsuppo...

It's essential to use randomisation to protect against bias in trial research, ideally with blinded, preferably double-blinded, assessment (neither the researcher nor the patient knows whether the patient is receiving a placebo, a comparison treatment, or the treatment being assessed).

healthunlocked.com/cllsuppo...
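For a sense of what randomisation looks like in practice, here's a minimal sketch of permuted-block randomisation in Python. It's a simplified illustration, not any particular trial's procedure:

```python
# Sketch of permuted-block randomisation: the allocation schedule a trial
# statistician might prepare in advance. Blocks keep the arms balanced;
# shuffling within each block keeps the next assignment unpredictable.
import random

def block_randomisation(n_blocks: int, block_size: int = 4, seed: int = 42) -> list[str]:
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        block = ["treatment"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        rng.shuffle(block)
        schedule.extend(block)
    return schedule

# In a double-blinded trial this schedule stays with the pharmacy;
# patients and assessors only ever see coded labels.
print(block_randomisation(n_blocks=2))
```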

Thanks for sharing this issue, so that we are aware of the increased importance of assessing credibility.

Thankfully, when it comes to approved CLL treatments, we regularly see reports of independently validated research posted about in our community. We also have members who kindly share changes in their blood counts, spleen and node sizes, etc., along with their good and bad treatment experiences.

Neil

bennevisplace in reply to AussieNeil

Excellent points Neil.

The potential for AI to pollute go-to information sources online with plausible but error-strewn content should not be underestimated. Fake science is seen by Nature as a growing problem, but what are they doing about it? Before it gets completely out of hand, perhaps counter-AI can be deployed by publishers and platforms, training LLMs to recognise and weed out fake content.
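As a toy illustration of what such counter-AI screening might look like, here's a minimal text classifier in Python using scikit-learn. The training texts and labels are invented, and a real system would need far richer features and a large labelled corpus:

```python
# Toy "counter-AI" screen: TF-IDF features plus logistic regression to
# flag paper-mill-like text. Training texts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "randomised double-blind trial of venetoclax in relapsed CLL",
    "prospective cohort study of minimal residual disease kinetics",
    "counterfeit consciousness based bosom peril conclusion framework",
    "novel profound learning model accomplishes cutting edge exactness",
]
train_labels = [0, 0, 1, 1]  # 0 = plausible, 1 = paper-mill-like

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

suspect = ["bosom peril detection with counterfeit consciousness"]
print(model.predict_proba(suspect)[0][1])  # estimated probability of being suspect
```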

neurodervish in reply to bennevisplace

I think about this too, wondering if the thing that could destroy us could also save us (from it). It supposedly comes down to good stewardship, but I haven't seen enough of that yet. The old line "A lie is halfway round the world before the truth has got its boots on" was around before the interwebs (it goes back to Virgil in The Aeneid).

I keep reading interviews with Geoffrey Hinton, the computer scientist who is often called the godfather of ai (in this sans serif online world, I've decided to stop capitalizing ai in deference to all my friends named Al). Anyway, here's a doozy from a couple months ago, which even has an audio version: newyorker.com/magazine/2023...

It all reminds me of something my mother used to say, when things went terribly wrong, "This is why we can't have nice things."

Skyshark in reply to neurodervish

There are a lot of people who seem to lack the reasoning to see when they are wrong. But knowledge is needed too, or we would still have only the four elements that reasoning alone produced. Without the age of reason we would still have bad air, black bile and phlegm.

From the New Yorker:

Hinton believes that we are more intuitive than we acknowledge. “For years, symbolic-A.I. people said our true nature is, we’re reasoning machines,” he told me. “I think that’s just nonsense. Our true nature is, we’re analogy machines, with a little bit of reasoning built on top, to notice when the analogies are giving us the wrong answers, and correct them.”

scryer99 in reply to bennevisplace

I'd worry more about the scientists and less about the "deplorables" lacking reasoning. By one estimate, a third of all published scientific research is either tinkered with or flat-out wrong. Many, many studies do not hold up under scrutiny.

Having the publishers weed out supposedly bad content is what got us to the point where trust in institutional reporting is at record lows. They've been proven wrong in many cases. Look at the equivocating going on with Dana-Farber leadership as one example: statnews.com/2024/01/22/dan... Or the disastrous Wakefield study at The Lancet. I don't think more screening is the answer.

And AI definitely is not the answer, at least in its current state. AI veracity and governance are both embryonic at best.

Science is undergoing a crisis of replication, exacerbated by conditions that reward professors for shoddy work. The only real solution is to invest in the less glamorous but important work of replicating studies.

Eventually we will get to the point where studies without independent replication are treated with a grain of salt. Which in many cases is what they deserve.

A few folks are starting to make that investment in reproducibility. Stanford runs one example: reproducibility.stanford.ed...

bennevisplace in reply to scryer99

Unconfirmed findings, publishers' neglect of papers reporting "no result", etc. - biased or suspect science is a big issue, granted. This has been touched on a few times on this forum, but why not write us an up-to-date post on the subject?

I see the issue of quality control in real science as distinct from the issue of filtering out the relatively new phenomenon of fake science, which falls under the umbrella of online misinformation. What's the difference? In a fake medical study that appears online, for example, the subjects, the study methods, parameters and outcomes, the discussion, conclusions, references and the authors are all liable to be a tangle of fact and fiction.

Spotting the fingerprints of a "paper mill" or AI originator may take an individual sleuth with particular skills, e.g. "Some, like Nick Brown, John Carlisle, and James Heathers focus on statistical issues, while others, including Michael Dougherty, look for plagiarism. Elisabeth Bik has become quite well known for her work on image manipulation, and others including Guillaume Cabanac, Cyril Labbé, and Alexander Magazinov, have found hundreds of cases of "tortured phrases" in the literature that strongly suggest the use of random paper generators. Jennifer Byrne, working with Labbé and others, has discovered hundreds of papers with genetic "typos" that can have serious effects on the conclusions." committees.parliament.uk/wr...
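To make the "tortured phrases" fingerprint concrete, here's a minimal Python sketch that scans text for known machine-paraphrased substitutions of standard terms. The phrase list is a small sample of the kind of documented examples the sleuths above look for:

```python
# Crude "tortured phrases" screen: look for known machine-paraphrased
# substitutions of standard scientific terms.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "bosom peril": "breast cancer",
    "irregular esteem": "random value",
    "colossal information": "big data",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely original term) pairs found in text."""
    lowered = text.lower()
    return [(bad, good) for bad, good in TORTURED_PHRASES.items() if bad in lowered]

abstract = ("We propose a profound learning approach to bosom peril "
            "diagnosis using colossal information.")
for bad, good in flag_tortured_phrases(abstract):
    print(f"'{bad}' -> probably '{good}'")
```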

scryer99 in reply to bennevisplace

Good find and interesting reading.

It's a topic you and I could have a few beers over, but I should probably circle back to this forum's intent... how do you sort out "good science" from "fake science" when researching cancer treatments and medical approaches? I've certainly seen that debate come up here over nutritional supplements.

"an individual sleuth with particular skills" is one method, but not a scalable one. Some people might have the time and educational background in statistics, AI, etc.. to sort through that. But how does a general reader sort out which people have that background and are free from bias? And the sheer volume of diploma-mill nonsense, let alone the snake oil salesmen, makes it a tough problem.

I suppose the answer is something like this forum - a well-moderated group. Some people have expertise to contribute, but all opinions are respected and in general, if something can't be explained in a way where it makes sense to a majority of readers, then it's given less weight.

I'm not sure the popular press, or social media arenas like Facebook/Meta and Twitter/X, meet that standard. In a great many cases, the cure ends up being worse than the disease. I perhaps unfairly lump proposals like "counter-AI" into that bucket. But in that particular case it's colored by my own understanding of what's possible in that field... and we're a long way from an AI that can be any more impartial than Breitbart or HuffPost, let alone AllSidesMedia.

lexie

I ran across this site a few months ago. It has a regularly updated public database of retracted scientific papers, plus a hijacked-journal checker.

retractionwatch.com/

The data are available from The Center For Scientific Integrity, the parent nonprofit organization of Retraction Watch. The Retraction Watch database includes the reason for each retraction and is believed to be the largest of its kind.

science.org/content/article...
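As a sketch of how the database could be put to work, here's a minimal Python check of a reference list against a downloaded copy of the Retraction Watch data. The filename and column name are assumptions to verify against whatever snapshot you download:

```python
# Check a list of DOIs against a local copy of the Retraction Watch database.
# "retraction_watch.csv" and the "OriginalPaperDOI" column are assumptions;
# verify them against the snapshot you actually download.
import csv

def load_retracted_dois(csv_path: str) -> set[str]:
    with open(csv_path, newline="", encoding="utf-8", errors="replace") as f:
        return {row["OriginalPaperDOI"].strip().lower()
                for row in csv.DictReader(f)
                if row.get("OriginalPaperDOI")}

def check_references(my_dois: list[str], retracted: set[str]) -> list[str]:
    """Return the DOIs in my reference list that appear as retracted."""
    return [doi for doi in my_dois if doi.strip().lower() in retracted]

retracted = load_retracted_dois("retraction_watch.csv")
print(check_references(["10.1000/example.doi"], retracted))
```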

bennevisplace in reply to lexie

Thanks lexie. I was encouraged to see that someone is trying to lead the fightback, but I can't help feeling that it needs a well-funded, coordinated approach by every information producer and distributor with a commercial interest in preserving the integrity of its output.

Springer Nature pulls a paper on the subject of countering fake news retractionwatch.com/2024/01... - it would be funny if it weren't so serious 😕

lexie in reply to bennevisplace

Agree that it is going to end up being very expensive to protect credibility. Your link showing retractions of 34 papers is a significant amount of damage control.

bennevisplace in reply to lexie

Retraction Watch, PubPeer, sleuths: these seem to be doing the dirty work at the moment committees.parliament.uk/wr... but retraction, according to the parliamentarians, may not happen until long after publication.

Meanwhile, government institutions are preoccupied with maintaining good research practice domestically, rather than stamping out fake science and bogus publications ukri.org/what-we-do/support...

TartanAlum

I guess I have a prejudice against the term AI in the first place. AI is merely a very complicated computer program that digests enormous amounts of data looking for parallels and statistical correlations. It is not intelligent in the sense of being self-aware, or stepping back from a conclusion to use common sense to determine if it passes the "smell test".

bennevisplace in reply to TartanAlum

Yes, the term AI may not represent how these systems measure up to human neural networks and human intelligence, but AI is a modern reality we are having to deal with on many fronts. The internet amplified information wars and AI can turbocharge them. Not something the world needs.

scarletnoir

I just read that report a few minutes ago - it is very worrying. I think that some sort of international agency might be needed (and funded) to sort out this mess and prevent things getting even worse. It's concerning that a subsidiary of the very well known major scientific publisher Wiley was responsible for putting many of these fake papers into the public domain. Not sure we can trust the private sector to mark their own homework - it doesn't seem to work in other domains.

cujoe

Stanford's John Ioannidis has been warning about the validity of research for almost 20 years.

en.wikipedia.org/wiki/Why_M...

Confirmation bias and cognitive dissonance are two demons we all have to battle on a daily basis. Daniel Kahneman and Amos Tversky's work, profiled in Kahneman's book Thinking, Fast and Slow, is also enlightening in revealing how the human mind makes its decisions.

en.wikipedia.org/wiki/Think...

Stay S&W, Ciao - cujoe

bennevisplace in reply to cujoe

Thanks, I've recently seen references to Ioannidis's paper - a classic, even if he overestimated the extent of the problem.

Kvb-texas

This is worrisome, but so are the enormous conflicts of interest that exist. We had a former head of the FDA, Scott Gottlieb, become a board member of Pfizer two months after leaving the FDA. I didn't believe it the first time I heard that. That destroys trust. He's just one of many; it is rampant in our institutions. These are the kinds of things that bother me more, because our institutions need to be above the fray. It is so hard to have faith and trust in our medical institutions when they themselves start behaving politically.

bennevisplace in reply to Kvb-texas

I know what you mean. But that appointment makes more sense than the EPA being run by the VP of the Washington Coal Club.

Kvb-texas

Great example. Very similar situation. Neither makes ethical sense. Both erode trust.

bennevisplace in reply to Kvb-texas

The UK features similar cosy relationships between industry and regulators: "Two-thirds of England's biggest water companies employ key executives who had previously worked at the watchdog tasked with regulating them", as reported 18 months ago theguardian.com/environment... Meanwhile another regulator, the Environment Agency, is notoriously blind to the serial offences committed by the water companies, e.g. 300,000+ unauthorised releases of untreated sewage into rivers and coastal waters last year.

Skyshark

It's things like this that should really worry US!

Reporting results before doing the follow-up testing that finds all the subjects who progressed in the reporting period.

What's wrong here?
bennevisplace in reply to Skyshark

Do you see this as a general issue?

Skyshark in reply to bennevisplace

I've only looked at CLL reports, and it's just this one trial. Other trials seem to have more frequent or randomised follow-ups to detect progression, and they don't exhibit this annual stepwise progression.
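To see why a sparse assessment schedule produces that annual staircase, here's a minimal Python simulation with invented numbers (no trial data). Progression can only be detected at a scheduled visit, so observed event times snap to the next assessment date:

```python
# Why infrequent follow-up makes Kaplan-Meier curves step annually:
# progression is only *detected* at scheduled visits. Invented numbers.
import random

random.seed(1)
n = 300
true_progression = [random.expovariate(1 / 40) for _ in range(n)]  # months

# Detected event time = next scheduled visit after true progression.
annual = [min(v for v in (12, 24, 36, 48, 60) if v >= e)
          for e in true_progression if e <= 60]
monthly = [max(1, round(e)) for e in true_progression if e <= 60]

def km_survival(event_times, horizon=60):
    """Simple KM estimate, assuming no censoring before the horizon."""
    at_risk, s, curve = n, 1.0, {}
    for t in range(1, horizon + 1):
        d = event_times.count(t)
        if d:
            s *= 1 - d / at_risk
            at_risk -= d
        curve[t] = s
    return curve

# Between annual visits the curve is flat, then drops in one big step,
# so the "apparent" PFS between visits is inflated.
print("PFS at 43 months, annual visits: ", round(km_survival(annual)[43], 2))
print("PFS at 43 months, monthly visits:", round(km_survival(monthly)[43], 2))
```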

That was the 4-year report, and this trial has previous "form". The 3-year report "acquired" an additional 43 subjects who had achieved uMRD4 from the MRD-directed arm of the trial. The addition was half of the two-thirds that had reached uMRD4; the other half, and the one-third that didn't reach uMRD4, had more than 15 cycles of fixed-duration treatment. It presented KM charts without properly acknowledging the bias that this uMRD4 set would introduce to the results. Unfortunately both of the uMRD4 subjects (2 = 27 in 4yr, 29 in 3yr) who had del(17p)/TP53mut progressed at 36 months, an unintended negative bias.

The 4-year report has removed the additional subjects. Some of the subjects who were censored prior to reaching follow-up were found to be progressing at 37-38 months. The precipitous drop at 37-38 months remains, but now without the note that it's "perceived", as it clearly hasn't "gone away". If the additional uMRD4 subjects that improved the overall results had been retained, they would be reporting that the median was reached at 37-38 months for del(17p)/TP53mut.

That notwithstanding, fixed-duration 15-cycle V+I (venetoclax plus ibrutinib) for m-CLL (PFS 94% at 48 months) appears to have a better PFS at 48 months than MRD-directed FLAIR (PFS 89% at 48 months). It seems that extending the time on V+I in pursuit of uMRD has an adverse result for m-CLL. For all others, fixed-duration V+I is on par with 12-cycle V+O (venetoclax plus obinutuzumab). Both V+O and V+I need to be MRD-directed for u-CLL patients (u-CLL PFS at 48 months: V+O 12 cycles 75%, V+I 15 cycles 74%, V+I MRD-directed 95%).

3 year addition of a selected arm that reached uMRD4.
bennevisplace in reply to Skyshark

Thanks for the detailed explanation, which I haven't yet had time to go through (I will). This paper indeed seems to be an example of data misrepresentation, as well as substandard peer review. The far from ideal reality of the peer review process is the subject of a recent article in The Conversation theconversation.com/peer-re...

From what you've read, are there any tell-tale signs of an underlying issue that the average punter might look out for when confronted with Kaplan-Meier curves, which we see a lot of in clinical studies?

Skyshark in reply to bennevisplace

All current reporting standards for CLL trials produce Kaplan-Meier curves that have little or no significance to real-world CLL patients. No one on earth matches the "overall" data: 53.1% u-CLL without TP53 aberrations, 36.2% m-CLL without TP53 aberrations, 8.2% u-CLL with TP53 aberrations and 2.5% m-CLL with TP53 aberrations. It would be a very unlucky person who had 4 CLL clones active and expressed in those ratios.

Then there are KM curves for subsets: m-CLL against u-CLL, and with versus without TP53 aberrations. No one is IgHV-mutated and simultaneously 7% with TP53 aberrations AND 93% without. No one has TP53 aberrations and is simultaneously 23.8% IgHV-mutated AND 76.2% unmutated. This hides small subsets with a poor response inside larger sets with a better response, or dilutes a poor response with a subset that has a much better response. It makes the curves practically worthless for the CLL patient, and even for doctors.
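As a simple illustration of why an "overall" curve describes no actual patient, here's a minimal Python sketch mixing two hypothetical prognosis subgroups; the medians and the 80/20 mix are invented:

```python
# A pooled survival curve from two different subgroups sits between them
# and matches neither. Medians and mix proportions are invented.
import math

def exp_survival(t_months: float, median: float) -> float:
    """Exponential PFS curve with the given median, in months."""
    return math.exp(-math.log(2) * t_months / median)

share_good = 0.8  # hypothetical: 80% better-prognosis, 20% worse-prognosis
for t in (12, 24, 36, 48):
    good = exp_survival(t, median=72)
    poor = exp_survival(t, median=24)
    pooled = share_good * good + (1 - share_good) * poor
    print(f"{t:>2} mo: good {good:.2f}  poor {poor:.2f}  pooled {pooled:.2f}")
```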

So far, other than CLL14 (shown), no other trial has reported KM curves that unequivocally show PFS for the key real-world genetic pairs.

There are studies with limited cohorts, such as SEQUOIA and FLAIR, whose main arms only had subjects without TP53 aberrations, so their u-CLL and m-CLL KM curves are of use to real patients. In both of these trials, TP53 aberrations form an additional, separate arm. SEQUOIA has persisted in producing just an overall KM curve for that arm, mixing subjects with AND without IgHV mutation. The FLAIR TP53 arm is yet to be reported; it remains to be seen whether this will yield separate KM curves for u-CLL and m-CLL, or yet again hide the u-CLL patients, who don't do well, in with the easier-to-treat m-CLL patients.

CLL14 KM curves that relate directly to patients.
bennevisplace in reply to Skyshark

I think I get your point: lumping together heterogeneous subsets departs from real life and makes the curves less prognostic for patients. But that's not their purpose, is it?
