March 5, 2024

Opinion: Gemini Row - AI Can't Serve India's Needs Without Understanding It First


India finds itself at a crossroads in a world increasingly guided by AI. The recent flubs of Google's Gemini AI - whether in misrepresenting historical facts or in mishandling cultural sensitivities - are a wake-up call. In its current form, AI is a mirror reflecting our fragmented, diverse, and often contradictory world. But when this mirror distorts more than it reveals, it's time to pause and ponder.

Government's Reaction And The Need for Nuance

When it comes to handling the slip-ups of AI, governments often react like the very algorithms they seek to regulate - quickly and without the depth required. Such an approach misses the mark in addressing the complexities of AI's role in society.

The recent uproar over Gemini's blunders serves as a case in point. The tool's missteps in presenting historical facts reflected deeper, systemic issues with how AI interacts with our cultural and historical narratives. Government responses to such incidents tend to be rapid and decisive, but they often lack the subtlety and informed perspective that such issues demand.

Regulating AI is a continuous struggle for governments worldwide. Various US states have enacted laws against deepfakes, focusing on the end products rather than the broader ethical issues in AI development. In the European Union, the proposed Artificial Intelligence Act classifies AI systems into risk tiers, from high to low, a move criticised by some for oversimplifying AI's nuanced impact. China's rules on deep synthesis technology, which cover deepfakes and voice cloning, mandate user consent and the clear labelling of synthetic content.

These examples underscore a pattern of immediate regulatory responses that may not fully engage with the deeper aspects of AI's integration into society. The Gemini blunder underlines that these issues are more than mere bugs in a system; they point to a fundamental challenge in the AI world.

A Human Touch In The Age Of Algorithms

The recent directives from India's Ministry of Electronics and Information Technology (MeitY) on AI regulation, fraught with uncertainties, highlight an essential aspect of governance in the digital age. The government's approach to AI, particularly in these nascent stages, shouldn't be black-and-white, devoid of the subtleties that characterise human judgment.

Though well-intentioned in its aim to regulate AI and deepfakes, the advisory leaves several questions unanswered. This ambiguity, especially around its impact on startups vis-à-vis larger platforms, suggests a one-size-fits-all approach reminiscent of the rigid logic of algorithms.

The Practicalities of AI Regulation

The government's decision to mandate approval for AI models adds a complex layer to the tech ecosystem. It is, above all, an operational challenge. A few critical questions emerge: who in the government will be responsible for reviewing and approving these AI models? What criteria will be used to judge the reliability and safety of these technologies?

The advisory implies that there will be a rigorous vetting process, but the details of its execution remain unclear. Will a dedicated committee or task force within MeitY be set up for this purpose? And, importantly, how will they ensure that this process is swift enough to keep up with the fast-paced nature of AI development?

Another concern is how transparent this approval process will be. In the tech world, where innovation moves at lightning speed, any delay in approvals could mean lost opportunities. Startups and smaller companies might be at a disadvantage if this process isn't efficient and transparent.

The MVP Excuse: Not Good Enough

The recent trend of releasing minimum viable products (MVPs) in AI is risky, especially in a market as complex and diverse as India. MVPs have merit in terms of rapid development cycles, allowing companies to test ideas quickly and gather user feedback. However, this approach can backfire in sensitive fields such as AI, where the stakes involve users' trust, cultural sensitivities, and ethical considerations.

With India's rich mosaic of cultures, languages, and traditions, an AI tool released as an MVP can do more harm than good. The problem lies in positioning an AI tool as 'just an experiment': when these tools interact with real people in real time, their impact is immediate and tangible.

The MVP approach in AI overlooks the need for thorough vetting of the data these tools are trained on and the contexts in which they are designed to operate. 

(Not) Learning From Experience

Google and Microsoft aren't learning from their past misadventures either. In 2016, Microsoft launched Tay, an AI chatbot on Twitter that was meant to learn from user interactions. Tay quickly began mirroring offensive and racist language after users deliberately fed it inflammatory content. The incident showcased the risks of letting an AI engage with diverse online behaviour without proper safeguards.

Years later, the tech world has yet to fully learn from Tay's downfall. It's time to question the rationale behind releasing MVPs in AI, considering the broader implications of these tools in diverse societies. The goal should be to develop AI tools that are not just technologically advanced but also culturally informed and ethically sound right from their initial release.

The Data Question: Garbage In, Garbage Out

Gemini's errors aren't just random mistakes. They're reflections of the deeper biases and inaccuracies ingrained in the data it's been trained on. This is a critical point to understand: AI systems, at their core, are learning machines. They absorb, process, and regurgitate the information fed to them. If this information is skewed, incomplete, or biased, the AI's understanding and output will be too.

It's like teaching a child. If a child is only exposed to a narrow view of the world, their understanding and perceptions will be limited by that narrowness. The same goes for AI systems like Gemini. 
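To make this concrete, here is a deliberately tiny, purely illustrative sketch in Python - a toy word-counting 'classifier', nothing like Gemini's actual architecture, with made-up training examples. The point is that a model's blind spots are exactly the blind spots of its data:

```python
# A toy "model" that learns word-label associations by counting.
# Everything here is hypothetical; the point is the data, not the
# algorithm: a corpus that covers only some of India's festivals
# produces a model that simply cannot recognise the rest.
from collections import Counter, defaultdict

training_data = [
    ("diwali festival lights", "festival"),
    ("holi festival colours", "festival"),
    ("quarterly report numbers", "business"),
    ("market share numbers", "business"),
    # No examples mentioning Bihu, Onam, Pongal, and so on.
]

word_label_counts = defaultdict(Counter)
for text, label in training_data:
    for word in text.split():
        word_label_counts[word][label] += 1

def predict(text):
    """Vote using per-word label counts; unseen words contribute nothing."""
    votes = Counter()
    for word in text.split():
        votes.update(word_label_counts[word])
    return votes.most_common(1)[0][0] if votes else "unknown"

print(predict("holi celebration"))  # "festival" - the data covered Holi
print(predict("onam celebration"))  # "unknown" - the data never saw Onam
```

The toy model isn't malfunctioning; it is faithfully reproducing the limits of what it was shown. That, scaled up by orders of magnitude, is the failure mode the Gemini episode exposed.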

This brings us to a crucial point: the challenge with AI isn't just technical; it's also about the data it learns from. It's about ensuring this data is as diverse, accurate, and comprehensive as possible. The Gemini incident isn't just a wake-up call to programmers and developers; it's a call to all of us to think about the kind of information we're feeding into these potentially revolutionary systems.

In the age of AI, the old adage "garbage in, garbage out" holds more truth than ever: an AI is only as effective as the data it's trained on. For India, this points to a significant task: creating and curating open, diverse, and accurate data repositories. It's not just about gathering data; it's about collecting the right kind of data. We must ensure that the information feeding into these AI systems reflects our vast cultural and social diversity. This is not just a technical task but a cultural one.
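What curating 'the right kind of data' might look like in practice is easier to picture with a sketch. The following Python snippet - its records, metadata fields, and threshold are all hypothetical assumptions, not any real pipeline's requirements - shows one possible first step: auditing a corpus's language coverage before training, rather than discovering the gaps after a model misbehaves:

```python
# A minimal, hypothetical "representation audit" run before training.
# The corpus, its metadata fields, and the 10% floor are illustrative
# assumptions chosen to make the idea concrete.
from collections import Counter

corpus = (
    [{"text": "...", "language": "hi"}] * 12
    + [{"text": "...", "language": "en"}] * 7
    + [{"text": "...", "language": "ta"}] * 1
)  # in reality: millions of records carrying language/region metadata

counts = Counter(record["language"] for record in corpus)
total = sum(counts.values())

FLOOR = 0.10  # hypothetical minimum share before a language is flagged
for language, n in counts.most_common():
    share = n / total
    flag = "  <-- under-represented, needs sourcing" if share < FLOOR else ""
    print(f"{language}: {share:.0%}{flag}")
```

An audit like this fixes nothing by itself, but it turns 'reflect our diversity' from a slogan into a measurable requirement - which is exactly where the collaborative efforts discussed next can be pointed.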

How To Seek Rich Data

Hackathons and collaborative open-source projects could prove instrumental. These platforms encourage the sharing and development of rich datasets and provide opportunities for bright minds across the country to contribute, ensuring that the data feeding into our AI systems is rich in quantity, quality, and perspective.

This can pave the way for AI systems that are technically proficient, culturally aware, and socially sensitive. After all, for AI to truly serve India's needs, it must understand India in all its complexity.

Going Beyond The Surface

As we look at the future of AI in India, we're left with more questions than answers. MeitY's latest advisory is a step forward, but how it will play out is still uncertain. Are we creating rules that truly understand the depth of AI's impact on society, or are we just skimming the surface? 

How will we ensure that AI, in its rapid growth, doesn't stray from the values and ethics integral to our society? We're at a point where we need to think about AI beyond technology - as a part of our social fabric that requires careful handling. Will India be able to lead the way in showing how technology can align with human needs and values?

(Pankaj Mishra has been a journalist for over two decades and is the co-founder of FactorDaily.)

Disclaimer: These are the personal opinions of the author.
