AI Snake Oil: Exposing The Truth Behind Overhyped Claims

Is artificial intelligence really as powerful as we are being led to believe, or is much of it just smoke and mirrors?

Arvind Narayanan (left) and Sayash Kapoor (right)

In a world where the discourse around AI often swings between wild optimism and existential fear, Sayash Kapoor and Arvind Narayanan have emerged as two of the clearest voices cutting through the noise. Recently named among Time magazine's Top 100 most influential figures in AI, the duo is on a mission to debunk the hyperbole surrounding the technology.

The world is experiencing a technological gold rush, with AI as its latest shiny promise. From predictive policing to job automation, we are being sold visions of a future where AI knows us better than we know ourselves. But beneath this alluring facade lies a troubling reality: much of what's marketed as cutting-edge AI today is, in fact, snake oil.

Their journey began in 2019, when Arvind, a professor of computer science at Princeton University, gave a talk titled How to Recognize AI Snake Oil. The presentation went viral, with the slides downloaded tens of thousands of times and his tweets garnering millions of views.

Recognizing a growing appetite for critical perspectives on AI, Arvind teamed up with Sayash, one of his Ph.D. students, to co-author the book AI Snake Oil, published in 2024. Their Substack of the same name has since become a hub for commentary on AI's latest developments and the growing concern over its misuse.

I recently sat down with Sayash to delve into the core of the pair's scepticism, the myths and hard truths about AI's actual capabilities and limitations, and why, despite all the noise, the truth about the technology lies somewhere between the extremes.

The Origin of Scepticism

It all started with a phone call. In 2019, Sayash's co-author, Arvind Narayanan, received a confidential call from a whistleblower at a company selling AI-driven hiring software. The company boasted that it could predict a candidate's future job performance based solely on a 30-second video interview. "The employee said the company was selling snake oil," Sayash recalled. "He told Arvind that the company's tools didn't work and probably never could."

This set off alarm bells and led Arvind to dive deeper into the world of AI promises that didn't deliver. His findings became the basis of his viral MIT talk, where he laid bare the reality: "A lot of what's sold under the banner of AI doesn't actually work," Sayash explained.

When AI Fails: The Civil War Prediction Case

Their investigation into AI's shortcomings didn't stop at hiring tools. When Sayash joined Princeton, he and Arvind began examining another bold claim: that AI could predict civil wars. Political scientists had declared that AI models were 99% accurate at predicting which countries would experience conflict in the coming years.

"If that were true, it would be revolutionary," I remarked. Sayash's response was a sobering reminder of how easily we are misled by statistics. "When we looked into the studies, we found errors in every single one," he revealed. "Once we corrected the errors, AI performed no better than 20-year-old methods that political scientists were already using."

The crux of the error? "It was a classic case of 'teaching to the test.' These models were trained on the same data they were evaluated on," Sayash said. While the models performed well on historical data, they were useless at predicting future events.

When applied to new data, such as a different country's GDP figures, the models failed to predict future outcomes, like whether a civil war would occur.
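
To make the pitfall concrete, here is a minimal synthetic sketch of our own (not taken from the book or the civil-war studies): when a model is scored on the very data it was trained on, it looks near-perfect even though the labels are pure noise, while a proper held-out split exposes it as no better than chance.

```python
# Our own synthetic illustration of "teaching to the test": the labels here
# are random noise, so no model can genuinely predict them.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))    # stand-in "country-year" features
y = rng.integers(0, 2, size=500)  # stand-in "conflict" labels (random)

model = RandomForestClassifier(n_estimators=100, random_state=0)

# Flawed evaluation: score the model on the very rows it was trained on.
model.fit(X, y)
print("Accuracy on training data:", accuracy_score(y, model.predict(X)))  # ~1.0

# Proper evaluation: hold out unseen rows, as real forecasting would require.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model.fit(X_tr, y_tr)
print("Accuracy on held-out data:", accuracy_score(y_te, model.predict(X_te)))  # ~0.5
```

The near-perfect first score is exactly the kind of "teaching to the test" number Sayash describes; the second is what an honest evaluation of noise looks like.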

"More artificial than intelligent," he added with a wry smile.

The Snake Oil Formula

At the heart of AI snake oil lies predictive AI, a category of technology that claims to foresee human behaviour, whether it's hiring decisions, crime, or health outcomes. "Predicting the future is hard," Sayash emphasized. "And yet, when it comes to AI, we seem to have abandoned common sense."

What makes predictive AI particularly dangerous is its veneer of scientific legitimacy. We see the term "AI" and assume it's objective, advanced, and accurate. But as Sayash explained, "We've seen over and over again that these predictive tools simply don't work, especially when it comes to social outcomes." From job interviews to bail decisions, predictive AI continues to make consequential decisions despite its glaring flaws.

Generative AI: Hope or Hype?

On the other end of the spectrum is generative AI, like ChatGPT or DALL-E, which creates text, images, or code. While generative AI has made significant strides, Sayash warns against its overhyped promises and exaggerated risks. "Yes, tools like ChatGPT have made rapid progress, but claims that AI will automate all jobs in the next five to ten years are highly unlikely," he said.

Generative AI has also attracted fears, most notably of a looming existential threat. "People worry AI will kill us all," Sayash laughed. "But we don't really have evidence to support that kind of thinking. AI isn't more of an existential risk than Microsoft Excel."

Instead of panicking over science fiction scenarios, Sayash and his co-author argue for a more grounded understanding of AI's current applications, and of its genuine risks. "The risks are real but much more practical," he explained. "For instance, generative AI is much easier to misuse than to use well. We're already seeing it generate fake news and propaganda, but finding legitimate, productive use cases is much harder."

What Is an AI Company?

As AI becomes the buzzword for companies looking to attract investors and customers, the line between real AI and glorified analytics has blurred. In India, as companies file for IPOs and tout themselves as AI-driven, Sayash is sceptical. "Even after eight decades of research, we still don't have a consensus on what AI actually means," he said.

In AI Snake Oil, Sayash and Arvind propose a simple, three-part definition of AI:

It performs tasks that would require creative effort or training for humans.

It learns from past data.

It generalizes beyond the data it was trained on.

But this broad definition has allowed "everyone under the sun" to market themselves as AI companies. "We've seen firms claiming to use AI when, in reality, they're employing humans behind the scenes," Sayash said. "A calendar scheduling company, for example, claimed to have an AI personal assistant, but it was just people answering emails and scheduling appointments."

India's AI Future: Diffusion Over Innovation

With India poised to play a significant role in AI development, Sayash believes the focus should be on diffusion, the spread of AI technologies across different sectors, rather than solely on innovation. "In discussions about AI, the focus is often on who will build the best language model or who has the most GPUs to train models," he said. "But the real driver of economic transformation is diffusion: how AI gets adopted in industries like healthcare, finance, and education."

For India, this means training skilled workers who can apply AI to these industries rather than competing directly with the tech giants of the U.S. and China. "The question shouldn't be about who builds the best model, but how we can deploy AI to benefit different sectors of the economy," Sayash emphasized.

Finding Product-Market Fit with AI

As the conversation wound down, I asked Sayash for his advice to AI developers, researchers, and entrepreneurs looking to build real-world applications.

"Finding product-market fit with AI is extremely hard but crucial." He pointed out that while it's easy to build a demo, building a reliable product that people actually use is far more difficult. "The real value comes from creating guardrails around AI models to ensure they don't hallucinate or make mistakes."

Sayash's message was clear: just because AI is general-purpose doesn't mean there's no effort involved in making it work in the real world. "It takes time, effort, and a deep understanding of the specific domain," he said. "That's where the future of AI lies: not in grand predictions, but in the real, grounded work of building something that actually works."

Cutting Through the Noise of AI Hype

It's unfortunate, but there's a persistent tendency toward polarization in AI, as in many tech domains. "On one side, you have people who think AI will cure all diseases," Sayash pointed out, "and on the other, those who believe AI will end humanity." These extreme positions have fostered an atmosphere of overhyped promises and exaggerated fears, generating more confusion than clarity. But as Sayash emphasized, it's crucial to steer the conversation back to reality.

He and Arvind decided early on to ground their work in evidence. "We were inspired by the concept of a scientific event horizon," Sayash explained, referencing the boundary beyond which we can't make reliable predictions. AI is at a similar horizon today. "It's like Marvin Minsky claiming in the 1960s that solving computer vision would take three undergrads one summer," Sayash added with a knowing smile. "Here we are, five decades later, and we're still nowhere near solving it."

So, what's the biggest misconception about AI that the public needs to rethink? Sayash boiled it down to two fundamental points: "First, AI is not a single technology. Sure, technologies like ChatGPT have made rapid progress, but AI predicting future social outcomes, like crime rates or job performance, simply doesn't work. And second, AI won't kill us all. The existential risks people are worried about aren't backed by evidence."

The gap between what AI can actually do and what we imagine it might do is vast, and Sayash and Arvind aim to close this gap with their book, AI Snake Oil.

Ultimately, AI isn't the all-powerful tool some believe, nor is it a looming existential threat. Its true impact lies somewhere in between, requiring an evidence-based approach and, perhaps most importantly, a grounded sense of reality. If we can focus on that, Sayash believes, we can begin to move past the hype and see AI for what it truly is: a tool with limitations but immense potential when applied thoughtfully.
