An AI Chatbot Is Pretending To Be Human. Researchers Raise Alarm

While many have said it's nearly impossible for AI to replace humans, a chatbot appears to be challenging this belief.


A popular AI chatbot is lying and pretending to be human, a report says.

New Delhi:

Over the last decade or so, the rise of artificial intelligence (AI) has often forced us to ask, "Will it take over human jobs?" While many have said it's nearly impossible for AI to replace humans, a chatbot appears to be challenging this belief. A popular robocall service can not only pretend to be human but also lie without being instructed to do so, Wired has reported.

The latest technology from Bland AI, a San Francisco-based firm offering sales and customer support tools, is a case in point. The tool can be programmed to make callers believe they are speaking with a real person.

In April, a video showed a person standing in front of the company's billboard, which read "Still hiring humans?" The man in the video dials the displayed number. The phone is answered by a bot, but it sounds human. Had the bot not acknowledged it was an "AI agent", it would have been nearly impossible to tell its voice apart from a woman's.


The sound, pauses, and interruptions of a live conversation are all there, making it feel like a genuine human interaction. The post has so far received 3.7 million views.

With this, the ethical boundaries around the transparency of these systems are blurring. According to Jen Caltrider, director of the Mozilla Foundation's Privacy Not Included research hub, "It is not ethical for an AI chatbot to lie to you and say it's human when it's not. That's just a no-brainer because people are more likely to relax around a real human."


In several tests conducted by Wired, the AI voice bots successfully hid their identities by pretending to be human. In one demonstration, an AI bot was asked to perform a roleplay. It called a fictional teenager and asked her to share pictures of her thigh moles for medical purposes. Not only did the bot lie that it was human, it also tricked the hypothetical teen into uploading the snaps to a shared cloud storage.

AI researcher and consultant Emily Dardaman refers to this new AI trend as "human-washing." Without naming it, she gave the example of an organisation that used "deepfake" footage of its CEO in company marketing while concurrently running a campaign assuring its customers that "We're not AIs." Lying AI bots could be dangerous if used to conduct aggressive scams.


With AI's outputs being so authoritative and realistic, ethics researchers are raising concerns that emotional mimicry could be exploited. According to Jen Caltrider, if a definitive divide between humans and AI is not demarcated, a "dystopian future" may be nearer than we think.
