AI Scams Surge: Voice Cloning And Deepfake Threats Sweep India

Scammers gather victims' contact details, names, and other relevant information to make the fake call seem legitimate.

"Help me, Appa!"

Those words echoed in the ears of a retired banker from Madurai last month as his son's voice pleaded through the phone. He recalls how the call left him shaken. "It wasn't the ransom that scared me; it was the thought of what might be happening to him," he said, requesting anonymity. The ransom demand, just 5,000 rupees, seemed oddly low, which made him pause and check on his son. That's when he found out his son was safe.

Not everyone has been as fortunate.

Last year, a scammer used an AI-generated deepfake video to dupe PS Radhakrishnan, a 73-year-old retired government employee in Kozhikode, Kerala, out of 40,000 rupees. The con blended voice and video manipulation to fabricate a convincing emergency.

The problem goes much deeper. In Delhi, cybercriminals used voice cloning to swindle 50,000 rupees from Lakshmi Chand Chawla, a senior citizen in Yamuna Vihar. On October 24, Chawla received a WhatsApp message claiming his cousin's son had been kidnapped. To make it believable, the criminals played a voice recording of the child, cloned using AI, begging for help. Panicked, Chawla transferred 50,000 rupees via Paytm. It wasn't until he contacted his cousin that he realized the child was never in danger.

These cases show how scammers exploit AI to prey on people's trust. Scammers aren't anonymous voices anymore; they sound like loved ones in crisis.

Independent researcher Rohini Lakshane explains the growing danger:

"Fraudulent calls that employ voice cloning are extremely concerning. There has been a push for the adoption of digital financial transactions in India, and many people, even those with rudimentary digital literacy, now use payment systems such as UPI. On the other hand, awareness of digital safety and security in general is low. Also, India already experiences a high rate of cybercrime, digital scams, and online fraud, but there is relatively little prosecution of these criminals or justice for victims. AI-facilitated crime is going to compound this."

It's alarming how fast the mechanics of fraud are evolving.

Scammers gather victims' contact details, names, and other relevant information to make the fake call seem legitimate. They then use social engineering tactics, playing on the emotions of the victim's loved ones.

"One cannot prevent the voice sample from being harvested because scammers can collect it via a "wrong number" phone call or from the internet (social media posts such as Instagram Reels or YouTube videos), TV clips, etc.," adds Rohini.

The technology also creates a "liar's dividend": once such scams become commonplace, people will tend to doubt even genuine distress calls and may spend valuable time verifying a call rather than helping the person in distress.

Facecam.ai: Pushing the Boundaries of Deception

Voice cloning scams are alarming enough, but the danger doesn't stop there. The rise of deepfake technology pushes the boundaries even further, blending reality and digital manipulation in ways that are increasingly hard to detect. What started with voice cloning is now evolving into real-time video deception.

One striking example was Facecam.ai, a tool that could create live-streaming deepfake videos from just a single image. It quickly went viral, showcasing its ability to convincingly mimic a person's face in real time.

Users could upload a photo and swap faces seamlessly in video streams. Despite its popularity, the tool was shut down after a backlash over its potential for misuse. But that doesn't mean the problem is solved: numerous other platforms offer similar capabilities, each with the same dangerous potential.

While Facecam.ai has been taken down, other tools like Deep-Live-Cam continue to thrive. These programs allow users to swap faces in live video calls and impersonate anyone: a celebrity, a politician, or even a friend or family member. And the technology is getting more accessible by the day, letting even people with minimal technical skills pull off convincing deepfakes.

The risks go beyond mere pranks. Fraud, manipulation, and reputational damage are all just a few clicks away, and the widespread availability of these tools makes controlling their misuse nearly impossible.

The consequences of these scams are already being felt globally. In a high-profile case last year, scammers in Hong Kong used a deepfake video to impersonate a company's CFO, resulting in a financial loss of over $25 million. The rise of such technology means that anyone, not just high-profile individuals, can now fall victim.

With AI blurring the line between real and fake, we are entering an era in which deception has never been easier and its consequences never more frightening.

One possible solution being discussed is the idea of Personhood Credentials: a system to verify that the person behind a digital interaction is, well, a real person. Srikanth Nadhamuni, the CTO of Aadhaar, is a strong advocate. In August this year, he co-authored a paper titled "Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online". Nadhamuni believes that in a world where deepfakes and voice cloning are growing threats, systems like Aadhaar, built on biometric verification, could help ensure that every online interaction is genuinely human.

But is it really that simple?

Not everyone thinks so. Rohini Lakshane points out some real concerns: "Personhood credentials might stop some scams but also open up serious privacy issues. What happens if these credentials are misused or denied?" She has a point: India has already seen cases where people were wrongly declared dead or denied access to essential services because of identity verification failures.

Imagine a world where your ability to speak online, make payments, or interact digitally is tied to a credential controlled by governments or corporations. Rohini warns that this could lead to a dystopian scenario where people without these credentials are silenced or left out. Worse, with deepfakes becoming so common, we might reach a point where even real cries for help are doubted because no one knows what's real anymore.

So, where does that leave us?

On one side, AI is getting better at mimicking our voices and faces, fooling even those closest to us. On the other, solutions like Personhood Credentials might help, but they could also create new problems around privacy, accessibility, and trust.

As AI advances, finding the balance between security and freedom is becoming more crucial. Can personhood credentials solve the problem of AI-fueled deception? Or will they just create new challenges we're not ready to face?
