Last week, the United States introduced a bill aimed at tackling AI deepfakes and protecting original content from being used for AI training.
The bill, called the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), has strong support across the US political spectrum.
Last month, another Senate bill, dubbed the "Take It Down Act", was tabled, advocating the removal of AI deepfakes involving non-consensual intimate imagery.
Notably, AI-generated deepfake nude images of Taylor Swift went viral on X (formerly Twitter), Facebook, and Instagram in January, sparking a nationwide debate on the ills of AI technology.
Apart from addressing deepfakes, the COPIED Act will also address the concerns of content creators, journalists, artists, and musicians that AI companies have been profiting off their work without acknowledgement or deserved compensation.
Last month, a report from Forbes accused Perplexity AI, an AI-enabled search engine, of stealing its content. This was followed by an investigation by Wired, a New York-based technology magazine, which found that Perplexity was summarising its articles despite the Robots Exclusion Protocol being in place, trespassing into areas of its website designated as off-limits to search bots.
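The Robots Exclusion Protocol works through a plain-text robots.txt file served from a website's root, telling crawlers which paths they may not fetch. A minimal illustrative example (the paths and crawler name are hypothetical, not Wired's actual configuration):

```
# robots.txt — placed at https://example.com/robots.txt
User-agent: ExampleAIBot
Disallow: /            # block this bot from the entire site

User-agent: *
Disallow: /subscribers/   # all other bots: keep out of paywalled pages
```

Compliance with robots.txt is voluntary by convention, which is why a crawler ignoring it raises the kind of controversy described above.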
How will the COPIED Act work?
The COPIED Act will enable a mechanism whereby provisions will be made for a digital document called "content provenance information", akin to a logbook for all content, whether news articles, artistic expressions, images, or videos, which will ensure the authentication and detection of AI-generated content.
It also seeks provisions making it illegal to tamper with this information, helping journalists and creative artists safeguard their work from AI. Additionally, the bill would empower state officials to enforce it, creating an avenue to sue AI companies for removing watermarks or using content without consent and compensation.
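The bill leaves the technical standards for provenance to be developed, but the underlying idea of a tamper-evident "logbook" can be sketched with standard cryptographic tools. The sketch below is purely illustrative and is not the Act's actual mechanism: the field names and the use of an HMAC secret key are assumptions for this example (real provenance schemes typically use public-key signatures so anyone can verify a record).

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the publisher; real schemes
# would use an asymmetric key pair rather than a shared secret.
SECRET_KEY = b"publisher-signing-key"

def attach_provenance(content: bytes, creator: str, ai_generated: bool) -> dict:
    """Build a tamper-evident provenance record for a piece of content."""
    record = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "ai_generated": ai_generated,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Detect tampering with either the content or its provenance record."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    # Content must match the hash recorded at creation time.
    if unsigned.get("content_hash") != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Because the signature covers every field, altering the content, the creator credit, or the AI-generated flag after the fact would make verification fail, which is the property the bill's anti-tampering provision aims to protect in law.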
How are other countries regulating AI?
The European Union (EU) has some of the most comprehensive legislation in place for regulating AI. In contrast to the US approach, the EU Artificial Intelligence Act requires European states to take a proactive role and classify each AI system into one of four categories based on risk level: unacceptable risk, high risk, limited risk, and minimal risk.
AI systems like those used in China to ascribe a social score to each citizen have been classified as posing an unacceptable risk and are prohibited under the Act.
In India, although a specific AI regulatory law has not yet been enacted, a Ministry of Electronics and Information Technology directive in March mandated that AI systems labelled "under-tested" or "unreliable" seek government approval before deployment. The directive was later superseded by another that reversed the mandate, in a move signalling caution against stifling innovation.