Could an AI chatbot talk you out of believing a conspiracy theory?

Technology vs. cat-eating myths.
By Rebecca Ruiz
A new study suggests that an AI chatbot can talk people out of their conspiracy theory beliefs. Credit: Bob Al-Greene / Mashable

If you tuned into the presidential debate this week, you probably heard plenty of misinformation and conspiracy theories.

Indeed, reporters and fact-checkers were working overtime to determine whether Haitian immigrants in Ohio were eating domestic pets, as grotesquely alleged by Republican presidential contender Donald Trump and his vice presidential running mate, Ohio Senator J.D. Vance. Neither has produced evidence supporting the claim, and local officials say it's untrue. Still, the false allegation is all over the internet.

Experts have long worried about how rapidly conspiracy theories can spread, and some research suggests that people can't be persuaded by facts that contradict those beliefs.

But a new study published today in Science offers hope that many people can and will abandon conspiracy theories under the right circumstances. 

In this case, researchers tested whether conversations with a chatbot powered by generative artificial intelligence could successfully engage people who believed popular conspiracy theories, such as the claims that the Sept. 11 attacks were orchestrated by the American government and that the COVID-19 virus was a man-made attempt by "global elites" to "control the masses."

The study's 2,190 participants had tailored back-and-forth conversations about a single conspiracy theory of their choice with OpenAI's GPT-4 Turbo. The model had been trained on a large amount of data from the internet and licensed sources.
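For readers curious about the mechanics, here is a minimal sketch of what a single turn of such a tailored conversation might look like using OpenAI's Python client. The system prompt and the participant's claim are illustrative assumptions, not the researchers' actual setup.

```python
# A minimal sketch of one turn in a tailored debunking conversation with
# GPT-4 Turbo, via OpenAI's Python client. The system prompt and the
# participant's claim below are illustrative assumptions, not the
# researchers' actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The participant states the conspiracy theory in their own words first,
# so the model's counter-evidence can address that exact claim.
participant_claim = (
    "The Twin Towers couldn't have collapsed from fires alone; "
    "it had to be a controlled demolition."
)

messages = [
    {
        "role": "system",
        "content": (
            "You are a polite, evidence-focused assistant. Respond to the "
            "user's specific claims with accurate facts, and validate their "
            "curiosity without endorsing false information."
        ),
    },
    {"role": "user", "content": participant_claim},
]

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=messages,
)
print(response.choices[0].message.content)
```

In the study itself, this back-and-forth continued over multiple rounds, with each of the model's replies tailored to the participant's latest message.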

After the participants' discussions, the researchers found an average 20 percent reduction in belief in the chosen conspiracy theory. In addition, about a quarter of participants stopped adhering to the conspiracy theory they'd discussed altogether. That decrease persisted two months after their interaction with the chatbot.

David Rand, a co-author of the study, said the findings indicate people's minds can be changed with facts, despite pessimism about that prospect.

"Facts and evidence do matter to a substantial degree to a lot of people."
- David Rand, MIT professor

"Evidence isn't dead," Rand told Mashable. "Facts and evidence do matter to a substantial degree to a lot of people."  

Rand, who is a professor of management science and brain and cognitive sciences at MIT, and his co-authors didn't test whether the study participants were more likely to change their minds after talking to a chatbot versus someone they know in real life, like a best friend or sibling. But they suspect the chatbot's success has to do with how quickly it can marshal accurate facts and evidence in response. 

In a sample conversation included in the study, a participant who believes the Sept. 11 attacks were staged receives an exhaustive scientific explanation from the chatbot of how the Twin Towers collapsed without the aid of explosive detonations, along with rebuttals of other related conspiracy claims. At the outset, the participant felt 100 percent confident in the conspiracy theory; by the end, their confidence had dropped to 40 percent.

Anyone who's ever tried to discuss a conspiracy theory with someone who believes it may have experienced rapid-fire exchanges filled with what Rand described as "weird esoteric facts and links" that are incredibly difficult to disprove. A generative AI chatbot doesn't have that problem, because it can instantaneously respond with fact-based information.

Nor is an AI chatbot hampered by personal relationship dynamics, like a long-running sibling rivalry or a dysfunctional friendship that colors how a conspiracy theorist views the person offering counter-evidence. In general, the chatbot was trained to be polite to participants, building rapport with them by validating their curiosity or confusion.

The researchers also asked participants about their trust in artificial intelligence. They found that the more a participant trusted AI, the more likely they were to abandon their conspiracy belief in response to the conversation. But even those skeptical of AI were capable of changing their minds.

Importantly, the researchers hired a professional fact-checker to evaluate the claims made by the chatbot and ensure it wasn't sharing false information or making things up. The fact-checker rated nearly all of the claims as true and none as false.

For now, people who are curious about the researchers' work can try it out for themselves by using their DebunkBot, which allows users to test their beliefs against an AI. 

Rand and his co-authors imagine a future in which a chatbot might be connected to social media accounts as a way to counter conspiracy theories circulating on a platform. Or people searching online for information about viral rumors or hoaxes might encounter the chatbot through keyword ads tied to conspiracy-related search terms.

Rand said the study's success, which he and his co-authors have replicated, offers an example of how AI can be used for good. 

Still, he's not naive about the potential for bad actors to use the technology to build a chatbot that confirms certain conspiracy theories. Imagine, for example, a chatbot that's been trained on social media posts that contain false claims.

"It remains to be seen, essentially, how all of this shakes out," Rand said. "If people are mostly using these foundation models from companies that are putting a lot of effort into really trying to make them accurate, we have a reasonable shot at this becoming a tool that's widely useful and trusted."

Rebecca Ruiz
Senior Reporter

Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Rebecca's experience prior to Mashable includes working as a staff writer, reporter, and editor at NBC News Digital and as a staff writer at Forbes. Rebecca has a B.A. from Sarah Lawrence College and a master's degree from U.C. Berkeley's Graduate School of Journalism.

