Remember when OpenAI's nonprofit board unceremoniously fired Sam Altman? It was a four-day spell in the wilderness for the chief executive officer, sparked by the claim he hadn't been "consistently candid" with the directors. A year later, Altman isn't being very consistent about the future of artificial intelligence. In an interview with Bloomberg Businessweek published on Monday, Altman admitted that he'd once conjured a "totally random" date for when OpenAI would build artificial general intelligence (AGI), a theoretical threshold at which AI surpasses human intelligence. It would be 2025, a decade out from the company's founding.
Altman's candour about that mistake was momentarily refreshing until he breezily made another prediction in the same interview: "I think AGI will probably get developed during this president's term," he said. He made a bigger claim in a personal blog post on Monday that we would see AI "agents" join the workforce this year that "materially change the output of companies."
Altman has become a master of modulating between humility and hype. He'll admit to his past guesswork while making equally speculative new predictions about the future, a confusing cocktail that deflects attention from thornier current issues. Take all his pronouncements with a large pinch of salt.
Tech company leaders have long tried to sell us a mirage of the future. Elon Musk claimed he'd put self-driving taxis on the road by 2020 and Steve Jobs was famously mocked for his reality distortion field. But Altman's strategic ambiguity is more sophisticated because he mixes his claims with apparent forthrightness, tweeting Monday for instance that OpenAI was losing money because its premium service was too popular, or admitting to his previous guesswork on AGI. That can make other predictions and claims sound more credible.
The stakes are also different from those facing Musk, who sells cars and rockets, and Jobs, who sold consumer products. Altman is marketing software that could transform education and employment for millions of people, in much the same way the internet itself changed just about everything, and his predictions can help steer the decisions of businesses and governments that fear being left behind.
One risk, for instance, is a potential weakening of regulation. While AI safety institutes popped up in several countries in 2024, including the US, the UK, Japan, Canada and Singapore, there's a chance that global oversight will pull back this year. Policy research firm Eurasia Group, founded by American political scientist Ian Bremmer, cites a loosening of AI regulation as one of its top risks for 2025. Bremmer points out President-elect Donald Trump is likely to rescind President Joe Biden's executive order on AI and that the international AI Safety Summit series, instigated by the UK, will be renamed "AI Action Summit" when it's held this year in Paris (where promising startups like Mistral AI also happen to be based).
In one way, Altman's comments about AGI's imminent arrival help justify this pivot to "action" from "safety" in those summits, because meaningful oversight looks more challenging to set up when things are moving so quickly. The message becomes: "This is happening so fast, traditional regulatory frameworks won't work." And Altman has been inconsistent in how he talks about AI safety too. In his Monday blog post he talked up its importance, but in an interview with New York Times journalist Andrew Ross Sorkin at the Dealbook Summit in December, he downplayed it, saying: "A lot of the safety concerns that we and others expressed actually don't come at the AGI moment. It's like, AGI can get built, the world goes on mostly the same way. The economy moves faster, things grow faster."
That's a persuasive narrative for political leaders already inclined toward light-touch regulation, such as Trump, to whom Altman is providing a $1 million inaugural fund donation. The problem is that the promises of a bright future serve as a constant distraction from near-term issues, like the looming disruption AI poses to labor, education and the creative arts, and the bias and security issues generative AI still suffers from.
When Altman was asked by Bloomberg about the energy consumption of AI, he immediately brought up an untested new technology as the answer. "Fusion is gonna work," he replied, referring to the still-unproven process of deriving power at scale from nuclear fusion. "Soon," he added. "Well, soon there will be a demonstration of net-gain fusion." As it happens, fusion has been the subject of overly optimistic projections for decades, and in this case, Altman was once again using it as a means to deflect an issue that threatens to rein in his ambitions.
Altman seems to be operating a more sophisticated, iterative version of the Silicon Valley hype machine. That matters because he isn't just selling a service but shaping how businesses and policymakers view AI at a critical moment, especially about regulation. AGI will arrive during Trump's presidency, according to him, but the world will go on. No need for too many checks and balances. That is far from the truth.
(Parmy Olson is the Bloomberg Opinion Columnist who writes on technology and AI)