Artificial intelligence tools are being blamed for fabricating references to medical studies that never existed, according to new research published in The Lancet.
A recent audit uncovered thousands of fake references hiding among millions of biomedical papers, raising concerns about the integrity of research that guides medical decisions.
The review found that more than 4,000 citations in the biomedical literature led to nonexistent studies.
Researchers warned that these fabricated sources could distort the clinical guidelines doctors depend on when treating patients.
Maxim Topaz, an associate professor at the Columbia School of Nursing and the study's lead author, said the growing wave of phony citations is alarming for clinical care.
"When those fake references are making it into the literature, they will end up in those guidelines, and that's how doctors decide how to provide care for you," he told CBS News.
"Your doctor could be making decisions around treatment based on studies that never existed," he added, calling the findings deeply troubling.
Equally concerning, Topaz said, is the fact that none of the errors his team identified have been retracted or corrected. That means many could still be quietly influencing real-world patient care.
Topaz explained that the problem is accelerating quickly.
"The rate of fake references showing up in published medical literature is growing," he said, noting that the number of false citations has climbed 12-fold over the past three years. The team traced fabricated references across nearly 3,000 academic papers.
The investigation began after Topaz personally encountered the issue.
While using an AI program to improve one of his own scientific manuscripts, the system inserted a completely made-up citation. The fake reference went unnoticed through several rounds of peer review before an editor caught the error.
"I was mortified, because I've been studying AI for the past 15 years," Topaz said.
"So if it can happen to me, it can happen to anyone."
He described how these mishaps occur: when researchers write a statement of fact and ask an AI system to supply supporting citations, a fabricated reference can slip into the manuscript. "In some cases, AI would slip those in, inadvertently," he said.
"You would hope the facts are accurate, but if they are supported by fabricated citations, you don't know if the 'facts' are accurate."
Sometimes, the AI tool cites a real author while inventing a study attributed to that person. In other instances, the referenced work is entirely imaginary.
Topaz believes such cases represent only a fraction of a broader issue. "This is just the tip of the iceberg," he said, suggesting that similar problems may exist in other academic fields as well.
He also warned that these fabricated references can "look perfectly real," complicating efforts to detect them.
The findings underscore what he described as an urgent need for researchers, editors, and reviewers to rigorously fact-check every citation that AI tools generate.