NEW DELHI, Sept 14: A ‘conversation’ with a chatbot lasting under 10 minutes can soften the views of people who believe some of the most deeply entrenched conspiracy theories, such as those involving the COVID-19 pandemic, a new study has found.
A team, including researchers from the Massachusetts Institute of Technology, US, designed the AI model ChatGPT to be highly persuasive and to engage participants in tailored dialogues in which their conspiratorial claims were factually rebutted by the chatbot.
For this, the researchers leveraged advancements in GPT-4, the large language model powering ChatGPT, which “has access to vast amounts of information and the ability to generate bespoke arguments.” A form of artificial intelligence, a large language model is trained on massive amounts of textual data and can therefore respond to users’ requests in natural language.
The team found that among over 2,000 self-identified conspiracy believers, conversations with the AI model ChatGPT lowered the average participant’s belief by about 20 per cent.
Further, about a fourth of the participants, each of whom had previously believed a particular conspiracy, “disavowed” it after chatting with the AI-powered bot, the researchers said.
“Large language models can thereby directly refute particular evidence each individual cites as supporting their conspiratorial beliefs,” the authors wrote in the study published in the journal Science.
In two separate experiments, the participants were asked to describe a conspiracy theory they believed in and provide supporting evidence, following which they engaged in a conversation with the chatbot.
“The conversation lasted 8.4 min on average and comprised three rounds of back-and-forth interaction (not counting the initial elicitation of reasons for belief from the participant), …” the authors wrote. A control group of participants discussed an unrelated topic with the AI.
“The treatment reduced participants’ belief in their chosen conspiracy theory by 20 per cent on average. This effect persisted undiminished for at least 2 months…” the authors wrote.
Lead author Thomas Costello, an assistant professor of psychology at American University, US, said, “Many conspiracy believers were indeed willing to update their views when presented with compelling counter evidence.”
Costello added, “The AI provided page-long, highly detailed accounts of why the given conspiracy was false in each round of conversation — and was also adept at being amiable and building rapport with the participants.”
The researchers explained that, until now, delivering persuasive, factual messages to a large sample of conspiracy theorists in a lab experiment had proved challenging.
For one, conspiracy theorists are often well-informed about the conspiracy in question, sometimes more so than skeptics, they said. They added that conspiracies also vary widely, so the evidence cited in support of a given theory can differ from one believer to another.
“Previous efforts to debunk dubious beliefs have a major limitation: One needs to guess what people’s actual beliefs are in order to debunk them — not a simple task,” said co-author Gordon Pennycook, an associate professor of psychology at Cornell University, US.
“In contrast, the AI can respond directly to people’s specific arguments using strong counter evidence. This provides a unique opportunity to test just how responsive people are to counter evidence,” Pennycook said. As society debates the pros and cons of AI, the authors said the technology’s ability to connect information across diverse topics within seconds can help tailor counterarguments to a believer’s specific conspiracies in ways that are not possible for a human. (PTI)