Top AI scientists, researchers and others issued a new warning
Concerns about the dangers artificial intelligence poses to the very existence of civilization are mounting. On Tuesday, top AI scientists, researchers and other experts issued a new warning about the perils the technology poses to humankind.
Hundreds of leading figures have signed the Statement on AI Risk, which was posted on the Center for AI Safety's website.
"Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," reads a one-sentence statement. The open letter was signed by more than 350 executives, researchers and engineers working in A.I. including top executives from three of the leading AI companies: Sam Altman, chief executive of OpenAI, Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.
The statement comes at a time when concerns about the potential harms of artificial intelligence are on the rise.
The Center for AI Safety's website outlines several potential disaster scenarios:
- Weaponization: Malicious actors could repurpose AI to be highly destructive, presenting an existential risk in itself and increasing the probability of political destabilization.
- Misinformation: A deluge of AI-generated misinformation and persuasive content could make society less equipped to handle the important challenges of our time.
- Faulty objectives: Trained with flawed goals, AI systems could find novel ways to pursue their objectives at the expense of individual and societal values.
- Enfeeblement: If important tasks are increasingly delegated to machines, humanity could lose the ability to self-govern and become completely dependent on them, similar to the scenario portrayed in the film WALL-E.
Dan Hendrycks, the executive director of the Center for AI Safety, called the situation "reminiscent of atomic scientists issuing warnings about the very technologies they've created. As Robert Oppenheimer noted, 'We knew the world would not be the same,'" CNN reported.
"There are many 'important and urgent risks from AI,' not just the risk of extinction; for example, systemic bias, misinformation, malicious use, cyberattacks, and weaponization," Hendrycks continued. "These are all important risks that need to be addressed."