Opinion: The Fragile Trust We Place In AI Safety


In 1984, the quiet city of Bhopal, India, suffered a tragedy that the world would never forget. A lethal gas leak from the Union Carbide pesticide plant killed thousands within days and left many more injured or permanently disabled. The Bhopal disaster wasn't just a sudden mishap; it was the culmination of ignored warnings and neglected safety measures—a catastrophe that could have been prevented if accountability had been prioritised over profit.

Today, as artificial intelligence rapidly integrates into every aspect of our lives, I can't help but feel a familiar sense of unease. AI might seem worlds apart from a pesticide plant, but its risks are mounting before our eyes. From deciding who gets a job or a loan to influencing public opinion during elections, AI has immense power to shape human lives. The pressing question is: will we act on the warning signs, or are we heading towards another preventable crisis where ordinary people bear the cost of corporate choices?

Just A Corporate Buzzword?

“Responsible AI” has become the tech industry's latest catchphrase. Much like “eco-friendly” or “corporate social responsibility”, it's a term companies use to assure us they're committed to ethical practices. But what does “Responsible AI” really mean, and does it amount to genuine accountability?

In many cases, “Responsible AI” ends up being more of a marketing slogan than a commitment to action. Companies publish ethical guidelines, but without mechanisms to enforce them, these guidelines often remain just words on paper. Similar to how “greenwashing” can mask the true environmental impact of corporate activities, “Responsible AI” allows companies to self-regulate while projecting an image of ethical responsibility. This reliance on voluntary standards sidesteps real accountability and leaves the public vulnerable to unchecked risks.

Two Views 

In the debate over AI safety, two main viewpoints have emerged. The first sees AI as just another technology that should evolve naturally, with minimal regulation to encourage innovation. Advocates of this view believe that companies, guided by “Responsible AI” principles, will self-regulate effectively. They focus on addressing immediate issues like bias and data privacy but resist strict regulatory oversight.


The second viewpoint warns that AI could eventually pose a fundamental threat to humanity. As AI systems become more advanced, they could operate autonomously and make decisions beyond our control. This group calls for strict, preemptive regulation, akin to the rules governing nuclear energy or biological weapons. They caution that without proper safeguards, AI might act on unintended objectives or evolve in ways that harm us on a massive scale.

As someone deeply interested in AI but cautious about its unchecked growth, I see merit in both perspectives. AI holds incredible promise for advancing society, but history shows that leaving corporations to regulate themselves often leads to disaster. We need a balanced approach that ensures AI develops safely, guided by thoughtful oversight rather than reactive measures.

Why AI Risks Escalate Quickly

AI isn't like traditional technologies because it can learn and make decisions on its own. Unlike machines that only do what they're programmed to do, AI systems can adapt and change their behaviour based on new data. This capacity for self-learning means that small issues can quickly become big problems if not properly managed.


Without proper oversight, AI can amplify existing biases, ignore ethical considerations, or prioritise efficiency over safety. For example, if an AI system learns from biased data, it might make unfair decisions that discriminate against certain groups of people. In sensitive areas like finance, healthcare, or law enforcement, such outcomes can have serious, far-reaching consequences.

The Story Of Williams, Oliver And Parks

These risks aren't just theoretical; they're unfolding right now. Consider the cases of Robert Williams, Michael Oliver, and Nijeer Parks in the United States. All three men were wrongfully arrested after being misidentified by facial recognition technology, suffering significant emotional and economic distress. Nijeer Parks spent 10 days in jail, while Michael Oliver lost his job. Though the charges were eventually dropped, the arrests left lasting marks on their relationships and personal lives.

These incidents highlight how flawed AI systems can have devastating personal impacts. Facial recognition technology has been shown to be less accurate in identifying people with darker skin tones, leading to wrongful arrests and unjust targeting of minorities. When technology intended to enhance security ends up violating individual rights, it underscores the urgent need for accountability.


In the financial sector, AI algorithms used to determine creditworthiness have been found to replicate racial and gender biases. This means qualified applicants might be denied loans simply because the AI has learned to associate certain demographic factors with risk.

OpenAI's Whisper, a tool designed to transcribe speech, sometimes “hallucinates” by inventing words or phrases that were never spoken. In critical fields like medicine or law, such errors could lead to serious misunderstandings or wrongful actions.


These examples underscore a vital question: who is responsible when an AI system causes harm? When algorithms make decisions that affect real lives, accountability cannot be an afterthought.

Citizens First

In India, AI is increasingly being adopted across sectors, from agricultural applications to AI-driven education platforms. While these technologies offer significant benefits, they also raise concerns about privacy, data security, and potential misuse.

For instance, the use of AI in predictive policing has sparked debates about surveillance and civil liberties. In education, AI tools aimed at personalising learning have faced criticism for reinforcing existing inequalities due to uneven access to technology.


These local examples illustrate that AI's impact is not just a global issue but one that affects communities and individuals across India. Addressing these challenges requires not only technological solutions but also thoughtful policies that protect citizens' rights.

Moving Beyond Buzzwords

If “Responsible AI” is to be more than a buzzword, we need concrete actions:

  1. Built-in Safety Mechanisms: AI systems should be designed with safety features that prevent them from taking unethical actions or acting autonomously without human oversight. Just as cars come with brakes and seat belts, AI should have built-in safeguards.
  2. Regular Monitoring and Audits: Continuous monitoring is essential, especially for AI used in high-stakes situations. Independent audits can identify issues before they escalate, assessing both technical performance and ethical compliance. 
  3. Tiered Regulation: Not all AI applications carry the same level of risk. We should have a tiered regulatory framework where AI used in critical sectors like healthcare or finance undergoes rigorous testing and certification. 
  4. Independent Oversight and Enforcement: Relying on companies to police themselves is insufficient. Independent regulatory bodies with the authority to enforce standards are crucial. 
  5. Transparency and Explainability Requirements: One of the biggest challenges with AI is its “black box” nature, where even experts can't always explain how it reaches certain decisions. Demanding transparency and explainability ensures that AI systems can be scrutinised and held accountable. If an AI system denies someone a loan or recommends a medical treatment, we should be able to understand why.

Human Rights In The Age of AI

AI safety isn't just about preventing technical errors; it's about safeguarding fundamental human rights like privacy, fairness, and equality. Algorithms can entrench discrimination, often without anyone realising it until harm has been done.

For example, if an AI hiring tool is biased against women, it can systematically exclude qualified candidates, perpetuating gender inequality in the workplace. Protecting human rights means designing AI systems that respect these rights from the ground up—a concept known as “ethical by design”.

A World Economic Forum white paper emphasises aligning AI with values like justice and privacy, but these principles need to be enforced, not just espoused. It's not enough to aim for ethical AI; we must ensure that ethical considerations are embedded in every stage of AI development and deployment.

Accountability Is Shared 

Ensuring AI accountability isn't just the job of governments or corporations; it's a collective effort. International cooperation can help establish common standards, much like global agreements on environmental protection or nuclear non-proliferation. Moreover, public awareness and engagement are crucial. By staying informed and voicing concerns, individuals can influence policies and practices.

The tragedy of Bhopal teaches us that neglecting accountability can lead to catastrophic outcomes. It wasn't a lack of technology that caused the disaster; it was a failure to prioritise human lives over corporate interests. With AI, we're at a similar crossroads.

[Jibu Elias is an AI ethicist, activist, researcher, and currently the Country Lead (India) for the Responsible Computing Challenge at Mozilla Foundation]

Disclaimer: These are the personal opinions of the author
