During the Taiwanese election, a Beijing-backed group was notably active, Microsoft said.
New Delhi: Microsoft has warned that China is gearing up to disrupt the upcoming elections in India, the United States and South Korea by using artificial intelligence-generated content. The warning comes after China conducted a trial run during Taiwan's presidential election, employing AI to influence the outcome.
Last month, Microsoft's co-founder Bill Gates met Prime Minister Narendra Modi in New Delhi and discussed the use of AI for social causes, women-led development and innovation in health and agriculture.
Across the world, at least 64 countries, in addition to the European Union, are expected to hold national elections this year. These countries collectively account for approximately 49 per cent of the global population.
According to Microsoft's threat intelligence team, Chinese state-backed cyber groups, with some involvement from North Korea, are expected to target several elections scheduled for 2024. Microsoft said China will likely deploy AI-generated content via social media to sway public opinion in its favour during these elections.
"With major elections taking place around the world this year, particularly in India, South Korea and the United States, we assess that China will, at a minimum, create and amplify AI-generated content to benefit its interests," Microsoft said in its statement.
Threat Of AI In Elections
The threat posed by political advertisements that use AI to produce deceptive and false content, including "deepfakes" and fabricated depictions of events that never took place, is significant in a crucial poll year. Such tactics aim to mislead the public about candidates' statements, their stances on various issues, and even the authenticity of certain events. If left unchecked, these manipulative attempts could undermine voters' ability to make well-informed decisions.
While the immediate impact of AI-generated content remains relatively low, Microsoft warned that China's increasing experimentation with the technology could become more effective over time. The tech giant noted that China's earlier attempt to influence Taiwan's election involved the dissemination of AI-generated disinformation, the first instance of a state-backed entity using such tactics in a foreign election.
During the Taiwanese election, a Beijing-backed group known as Storm-1376, or Spamouflage, was notably active, Microsoft said. The group circulated AI-generated content, including fake audio endorsements and memes, aimed at discrediting certain candidates and influencing voter perceptions. The use of AI-generated TV news anchors is a tactic also employed by Iran.
"Storm-1376 has promoted a series of AI-generated memes of Taiwan's then-Democratic Progressive Party (DPP) presidential candidate William Lai, and other Taiwanese officials as well as Chinese dissidents around the world. These have included an increasing use of AI-generated TV news anchors that Storm-1376 has deployed since at least February 2023," Microsoft said.
AI Influence In US Affairs
Microsoft pointed out that Chinese groups continue to conduct influence campaigns in the United States, leveraging social media platforms to pose divisive questions and gather intelligence on key voting demographics.
"There has been an increased use of Chinese AI-generated content in recent months, attempting to influence and sow division in the US and elsewhere on a range of topics including the train derailment in Kentucky in November 2023, the Maui wildfires in August 2023, the disposal of Japanese nuclear wastewater, drug use in the US as well as immigration policies and racial tensions in the country. There is little evidence these efforts have been successful in swaying opinion," Microsoft stated.
The use of AI in US election campaigns is not new. In the lead-up to the 2024 New Hampshire Democratic primary, an AI-generated phone call mimicking President Joe Biden's voice urged voters not to take part in the poll.
The call falsely suggested that voters should instead save their votes for the general election in November. On hearing the message, the average voter could easily have believed that President Biden himself had issued this directive, potentially leading to their disenfranchisement.
Although there is no evidence of Chinese involvement in the New Hampshire episode, the incident is one of many in which AI has posed a direct threat to democratic processes.
Road Ahead For India
India's general elections are scheduled to begin on April 19, with the results set to be declared on June 4. Voting will unfold across seven phases: April 19, April 26, May 7, May 13, May 20, May 25 and June 1.
The term of the current 17th Lok Sabha is set to conclude on June 16.
The Election Commission of India (ECI) has already provided guidelines and protocols for promptly identifying and responding to false information and misinformation.
Last month, representatives from OpenAI, the developer of ChatGPT, met members of the ECI and delivered a presentation to the commission outlining the measures being taken to prevent the misuse of AI in the upcoming elections.