Sam Altman, the chief executive of OpenAI, the maker of ChatGPT
OpenAI chief executive Sam Altman has said an entirely separate entity should be established to license AI companies.
Here is your 10-point guide to this big story
- ChatGPT, a text-generating AI chatbot from OpenAI, was released in November of last year. Given brief text prompts, it can write essays, generate code, and do much more, boosting productivity. But it also has a darker side that is worrying for a variety of reasons.
- ChatGPT has been put to many unethical uses, including completing students' assignments and examinations and creating phoney images, videos, and voices. There are also reports of hackers using ChatGPT-themed lures to spread malware on social media. These misuses have accompanied ChatGPT's rockstar-like rise, which has surprised even its creators at OpenAI.
- On the one hand, this AI technology has panicked educational institutions and made Big Tech envious. On the other, everyone, from attorneys and speechwriters to programmers and journalists, is waiting impatiently to experience the disruption ChatGPT will bring.
- According to CNN, OpenAI CEO Sam Altman, citing such harmful and potentially unlawful uses, urged lawmakers to regulate artificial intelligence at a Senate panel hearing on Tuesday, describing the technology's current boom as a potential "printing press moment" that nonetheless requires safeguards.
- In his statement before a Senate Judiciary subcommittee, Mr Altman said, "OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks. We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models. If this technology goes wrong, it can go quite wrong."
- He insisted that, in time, generative AI developed by OpenAI will "address some of humanity's biggest challenges, like climate change and curing cancer", though he reiterated that government regulation would be critical given concerns about disinformation, job security, and other hazards.
- The lawmakers also voiced their fears about AI's rapid development, with a leading senator opening the hearing on Capitol Hill by playing a computer-generated voice, remarkably similar to his own, reading a text written by the bot. "If you were listening from home, you might have thought that voice was mine and the words were from me, but in fact, that voice was not mine," said Senator Richard Blumenthal, a Democrat. Artificial intelligence technologies "are more than just research experiments. They are no longer fantasies of science fiction; they are real and present," he said.
- Altman suggested the US government might consider a combination of licensing and testing requirements before the release of powerful AI models, with the power to revoke permits if rules were broken.
- He recommended labelling requirements and greater global coordination in setting rules for the technology, as well as the creation of a dedicated US agency to handle artificial intelligence. "I think the US should lead here and do things first, but to be effective, we do need something global," he added.
- Mr Altman said the potential for AI to be used to manipulate voters and target disinformation is among "my areas of greatest concern," especially because "we're going to face an election next year and these models are getting better."