Made In China AI Is Getting More Persuasive

Recent advances in this nascent technology from China are raising fresh national security concerns.

Do you think engaging with an emerging tech tool can change your firmly held beliefs? Or sway you toward a decision you wouldn't have otherwise made? Most of us humans think we're too smart for that, but mounting evidence suggests otherwise.

When it comes to a new crop of generative artificial intelligence technology, the power of "persuasion" has been identified as a potentially catastrophic risk, right alongside fears that models could gain autonomy or help build a nuclear weapon. Separately, lower-stakes designs meant to influence behavior are already ubiquitous in the products many of us use every day, nudging us to endlessly scroll on social platforms or to open Snapchat or Duolingo to continue a "streak."

But recent advances in this nascent technology from China are raising fresh national security concerns. New research funded by the US State Department and released by an Australian think tank found that Chinese tech companies are on the cusp of creating and deploying technologies with "unprecedented persuasive capabilities."

From a security perspective, this could be abused by Beijing or other actors to sway political opinions or sow social unrest and division. In other words, a weapon to subdue enemies without any fighting, the tactic heralded by the ancient Chinese military strategist Sun Tzu.

The report from the Australian Strategic Policy Institute, published last week, identified China's commercial sector as "already a global leader" in the development and adoption of products designed to change attitudes or behaviors by exploiting physiological or cognitive vulnerabilities. To accomplish this, these tools rely heavily on analyzing the personal data they collect and then tailoring interactions to individual users. The paper identified a handful of Chinese firms, working across generative AI, virtual reality, and the still-emerging field of neurotechnology, that it says are already using such technology to support Beijing's propaganda and military goals.

But this is also very much a global issue. China's private sector may be racing ahead to develop persuasive methods, but it is following playbooks developed by US Big Tech firms to better understand their users and keep them engaged. Addressing the Beijing risk will require us to properly unpack how we let tech products influence our lives. But fresh national security risks, combined with how quickly AI and other new innovations can scale up these tools' effectiveness, should be a wake-up call at a time when persuasion is already so entrenched in Silicon Valley product design.

Part of what makes addressing this issue so difficult is that it can be a double-edged sword. A Science study published earlier this year found that chatting with AI models could reduce conspiracy theorists' belief in those theories, even among people who said the beliefs were important to their identity. This highlighted the positive "persuasive powers" of large language models and their ability to engage in personalized dialogue, according to the researchers.

How to prevent these powers from being employed by Beijing, or other bad actors, for nefarious campaigns will be an increasing challenge for policymakers, one that goes beyond cutting off access to advanced semiconductors.

Demanding far more transparency would be one way to start: requiring tech companies to provide clear disclosures when content is tailored in a way that could influence behavior. Expanding data protection laws, or giving users clearer ways to opt out of having their information collected, would also limit the ability of these tools to target users individually.

Prioritizing digital literacy and education is also imperative to raise awareness of persuasive technologies: how algorithms and personalized content work, how to recognize these tactics, and how to avoid being manipulated by such systems.

Ultimately, a lot more research is needed on how to protect people from the risks of persuasive technology, and it would be wise for the companies behind these tools to lead the charge, as firms such as OpenAI and Anthropic have begun doing with AI. Policymakers should also demand that firms share their findings with regulators and relevant stakeholders to build a global understanding of how these techniques could be exploited by adversaries. That knowledge could then inform clear standards or targeted regulation.

The risk of technology so sophisticated that it allows Beijing to pull the strings to change what you believe or who you are may still seem like a far-off, sci-fi concern. But the stakes are too high for global policymakers to respond only after this has been unleashed. Now is the time for a global reckoning on how much personal information and influence we give tech companies over our lives.
