This Article is From Jul 09, 2023

Paytm CEO Raises Concern After OpenAI's Blog On 'Human Extinction'. Read His Post

The blog post acknowledged that OpenAI does not yet have a solution for controlling a superintelligent AI and preventing it from going rogue.


Vijay Shekhar Sharma expressed concern after OpenAI's statement

Paytm CEO Vijay Shekhar Sharma has raised concerns after a blog post by ChatGPT creator OpenAI. The blog post acknowledged that the company does not yet have a solution for controlling a superintelligent AI and preventing it from going rogue. It also said that superintelligent AI could lead to the disempowerment of humanity, or even human extinction.

Vijay Shekhar Sharma tweeted, "Here is the OpenAI blog post done this week: In less than 7 years we have a system that may lead to the disempowerment of humanity and even human extinction. I am genuinely concerned with the power some set of people and select countries have accumulated - already."

The OpenAI blog post said: "Superintelligence will be the most impactful technology humanity has ever invented and could help us solve many of the world's most important problems. But the vast power of superintelligence could also be very dangerous and could lead to the disempowerment of humanity or even human extinction. While superintelligence seems far off now, we believe it could arrive this decade."



Superintelligent AI, meaning systems more intelligent than humans, could arrive this decade, the blog post's authors predicted. Controlling such systems will require techniques better than those available today, hence the need for breakthroughs in so-called "alignment research," which focuses on ensuring AI remains beneficial to humans, according to the authors.


OpenAI, backed by Microsoft, is dedicating 20 per cent of the computing power it has secured over the next four years to solving this problem, they wrote. The company is also forming a new team, called the Superalignment team, to organise around this effort.

The team's goal is to create a "human-level" AI alignment researcher, and then scale it through vast amounts of compute. OpenAI says this means it will train AI systems using human feedback, train AI systems to assist human evaluation, and finally train AI systems to actually do alignment research.
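To make the first step in that pipeline concrete, the sketch below shows, in rough outline, how a reward model can be trained from human preference comparisons, the core of reinforcement learning from human feedback (RLHF). This is not OpenAI's code; the model, data and hyperparameters are illustrative assumptions only.

```python
# Minimal, illustrative sketch (not OpenAI's implementation) of learning a
# reward model from human preference pairs, the "train AI systems using
# human feedback" step described above. All names and data are assumptions.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Scores a response embedding; a real system would score (prompt, response) text."""
    def __init__(self, dim=16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.score(x).squeeze(-1)

def preference_loss(reward_chosen, reward_rejected):
    # Bradley-Terry style objective: the human-preferred response should score higher.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyRewardModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Stand-in "human feedback": pairs of response embeddings where the first
    # element of each pair was marked better by an annotator (synthetic data).
    chosen = torch.randn(256, 16) + 0.5
    rejected = torch.randn(256, 16) - 0.5

    for step in range(200):
        loss = preference_loss(model(chosen), model(rejected))
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(f"final preference loss: {loss.item():.4f}")
```

In a full RLHF pipeline, a reward model like this would then be used to steer a language model via reinforcement learning; the later stages OpenAI describes (AI-assisted evaluation and automated alignment research) build on top of that loop.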


AI safety advocate Connor Leahy said the plan was fundamentally flawed because the initial human-level AI could run amok and wreak havoc before it could be compelled to solve AI safety problems.

"You have to solve alignment before you build human-level intelligence, otherwise by default, you won't control it," he said in an interview. "I personally do not think this is a particularly good or safe plan."


The potential dangers of AI have been top of mind for both AI researchers and the general public. In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential risks to society. A May Reuters/Ipsos poll found that more than two-thirds of Americans are concerned about the possible negative effects of AI and 61 per cent believe it could threaten civilisation.
