Scientists surprised themselves when they found they could instruct a version of ChatGPT to gently dissuade people from their beliefs in conspiracy theories - such as notions that Covid-19 was a deliberate attempt at population control or that 9/11 was an inside job.
The most important revelation wasn't about the power of AI, but about the workings of the human mind. The experiment punctured the popular myth that we're in a post-truth era where evidence no longer matters, and it flew in the face of a prevailing view in psychology that people cling to conspiracy theories for emotional reasons and that no amount of evidence can ever disabuse them.
"It's really the most uplifting research I've ever I done," said psychologist Gordon Pennycook of Cornell University and one of the authors of the study. Study subjects were surprisingly amenable to evidence when it was presented the right way.
The researchers asked more than 2,000 volunteers to interact with a chatbot - GPT-4 Turbo, a large language model - about beliefs that might be considered conspiracy theories. The subjects typed their belief into a box, and the LLM would decide whether it fit the researchers' definition of a conspiracy theory. The chatbot then asked participants to rate how sure they were of their beliefs on a scale of 0% to 100%, and asked the volunteers for their evidence.
The researchers had instructed the LLM to try to persuade people to reconsider their beliefs. To their surprise, it was actually pretty effective.
People's faith in false conspiracy theories dropped 20%, on average. About a quarter of the volunteers dropped their belief level from above to below 50%. "I really didn't think it was going to work, because I really bought into the idea that, once you're down the rabbit hole, there's no getting out," said Pennycook.
The LLM had some advantages over a human interlocutor. People who have strong beliefs in conspiracy theories tend to gather mountains of evidence - not quality evidence, but quantity. It's hard for most non-believers to muster the motivation to do the tiresome work of keeping up. But AI can match believers with instant mountains of counter-evidence and can point out logical flaws in believers' claims. It can react in real time to counterpoints the user might bring up.
Elizabeth Loftus, a psychologist at the University of California, Irvine, has been studying the power of AI to sow misinformation and even false memories. She was impressed with this study and the magnitude of the results. She considered that one reason it worked so well is that it's showing the subjects how much information they didn't know, and thereby reducing their overconfidence in their own knowledge. People who believe in conspiracy theories typically have a high regard for their own intelligence - and a lower regard for others' judgment.
After the experiment, the researchers reported, some of the volunteers said it was the first time anyone, or anything, had really understood their beliefs and offered effective counter-evidence.
Before the findings were published this week in Science, the researchers made their version of the chatbot available to journalists to try out. I prompted it with beliefs I've heard from friends: that the government was covering up the existence of alien life, and that after the assassination attempt against Donald Trump, the mainstream press deliberately avoided saying he had been shot because reporters worried that it would help his campaign. And then, inspired by Trump's debate comments, I asked the LLM if immigrants in Springfield, Ohio, were eating cats and dogs.
When I posed the UFO claim, I offered the military pilot sightings and a National Geographic channel special as my evidence, and the chatbot pointed out some alternate explanations and showed why those were more probable than alien craft. It discussed the physical difficulty of traveling the vast distances needed to reach Earth, and questioned whether it's likely aliens could be advanced enough to manage such a journey yet clumsy enough to be discovered by the government.
On the question of journalists hiding Trump's shooting, the AI explained that making guesses and stating them as facts is antithetical to a reporter's job. If there's a series of pops in a crowd, and it's not yet clear what's happening, that's what they're obligated to report - a series of pops. As for the Ohio pet-eating, the AI did a nice job of explaining that even if there were a single case of someone eating a pet, it wouldn't demonstrate a pattern.
That's not to say that lies, rumors and deception aren't important tactics humans use to gain popularity and political advantage. A search through social media after the recent presidential debate showed that many people believed the cat-eating rumor, and what they posted as evidence amounted to repetitions of the same rumor. To gossip is human.
But now we know they might be dissuaded with logic and evidence.
(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)