Outsourcing Bad Science: The Risks of Trusting AI for Scientific Research

Aiden Starling

Updated Thursday, August 15, 2024 at 12:00 AM CDT

A recent tweet by Carl T. Bergstrom has sparked a heated discussion about the reliability of AI in scientific research. The tweet features an image of a scientific paper titled "The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery" and challenges the assumption that producing more scientific papers equates to better science. Bergstrom references Taylorism, a management philosophy focused on optimizing labor productivity through scientific methods, and argues that maximizing paper production reflects the same flawed logic.

The paper, produced by researchers at Sakana AI in collaboration with the University of Oxford and the University of British Columbia, explores the concept of fully automated scientific discovery using large language models. Despite the impressive credentials of its authors, who include Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha, the paper has raised concerns about the efficacy and safety of relying on AI for complex scientific tasks.

One commenter, a professional in the AI field, warns against trusting large language models (LLMs) with critical tasks, citing firsthand safety testing that suggests these models are not yet ready for such responsibilities. Another user argues that the scientific community must improve its peer-review system to prevent the proliferation of low-quality, fraudulent research.

The skepticism is echoed by an engineer who laments the growing volume of superficial, misleading scientific literature that already gets in the way of genuine innovation. Another commenter's analogy of a dog paying for food with green leaves captures the naivety of AI systems that mimic human actions without understanding the underlying principles.

The broader issue of academic pressure to publish frequently and accumulate citations is also discussed. This pressure can encourage unethical practices such as data fabrication and rigged reviews, as evidenced by PLOS ONE's recent retraction of over 100 papers due to manipulated peer reviews.

Bergstrom's tweet and the ensuing discussion underscore the need for caution in the application of AI in scientific research. While AI has shown promise in areas like data analysis and predictive modeling, its role in generating and validating new scientific knowledge remains contentious. The scientific community must address these challenges to ensure that AI enhances rather than undermines the integrity of research.

For those interested, additional resources and discussions on this topic can be found on Carl T. Bergstrom's social media platforms and the website Retraction Watch, which monitors academic fraud and retractions.

As the debate continues, one thing is clear: the integration of AI into scientific research must be approached with rigorous scrutiny and ethical considerations to prevent the outsourcing of bad science.


View source: Imgur

Top Comments from Imgur

DrWhy9

Oh great, another thing AI f***s up. Other than helping scam artists and some very specific scientific research (which most of the time they have to be specifically designed for), is it doing any good?

BobTheWeak

The scientific community needs to fix its peer review system or all of this will become normal. Fake papers undermine the idea that new ideas are layered on top of older, verified ones.

KidneyBetrayal

Hi, this is from someone who is actually working in the AI field: QUIT TRUSTING LLMs WITH IMPORTANT S***. Anyone who says otherwise has never done safety testing on them. I have. They are not ready yet.

ZebAsiz

Anyone else see the first sentence and think this was some more craziness about Taylor Swift? [This was the first time I've seen the term Taylorism.]

AnythingMuchShorter

You think we engineers like this? I use scientific papers when looking for discoveries that could be used to build solutions to real problems. The mass of garbage that's fluffed up to look like it means something but gives no real substance already gets in my way. The last thing I need is 100 times the volume of c***.

lightbulbsgrowintotulips

This reminds me of a dog paying for food with green leaves… it doesn’t understand the underlying system, but it gets the motions and wants a hot dog please.

flatbanana

The western uni ranking system and academic KPIs based on number of papers published & citations partially have a hand in this.

DeadWhiteMale

Fabricating methods, data, and references is fraud.

DisgruntledFerret

Google has gotten less accurate in recent years for similar reasons. If you find what you want on the first try, you're using Google less.

MarkRavingMad

AI is great at drafting memos, screening phone calls, and other clerical stuff you might want as features in your Google or MS Office suite. The fact that people are taking it and trying to use it for complex technical work is stupid to a degree I struggle to comprehend.
