After social media and targeted advertising nudged people into impulse decisions about what to buy and how to vote, the future may have just become a little more dystopian. Researchers at the University of Cambridge claim that artificial intelligence (AI) tools could soon be used to manipulate the masses into making decisions that they otherwise would not. The study introduces the concept of an "intention economy," a marketplace where AI can predict, understand, and manipulate human intentions for profit.
Powered by large language models (LLMs), AI tools such as ChatGPT, Gemini and other chatbots will "anticipate and steer" users based on "intentional, behavioural and psychological data". The study claims that this new economy will succeed the current "attention economy," in which platforms vie for user attention in order to serve advertisements.
"Anthropomorphic AI agents, from chatbot assistants to digital tutors and girlfriends, will have access to vast quantities of intimate psychological and behavioural data, often gleaned via informal, conversational spoken dialogue," the research stated.
The study cited the example of an AI model created by Meta, called Cicero, which has achieved a human-like ability to play the board game Diplomacy, a game that requires participants to infer and predict the intent of opponents. Cicero's success suggests that AI may have already learned to "nudge" conversational partners towards specific objectives, an ability that could translate into steering users online towards whatever product an advertiser wants to sell.
Selling the right to influence?
The dystopia does not stop there. The research claims that this level of personalisation would allow companies such as Meta to auction a user's intent to advertisers, who would effectively buy the right to influence that user's decisions.
Dr. Yaqub Chaudhary from Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) emphasised the need to question whose interests these AI assistants serve, especially as they gather intimate conversational data.
"What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions," said Dr Chaudhary.
Internet spooked
Safe to say, the findings have spooked the internet, with users worried about what they have been sharing with the new-age chatbots.
"People are sharing much more personal info with AI than regular google search. The better it understands you, the easier you will be manipulated," said one user, while another added: "Now in other news, the Sun rises in the East and sets in the West."
A third commented: "This level of persuasiveness would be dangerous in the hands of the best government, and it's going to be in the hands of the worst."
The study calls for immediate consideration of these implications so that users do not become unsuspecting targets of AI-driven manipulation.