
Opinion | Let 'Thermodynamics' Teach You Something About AI

Arindam Goswami
  • Opinion
  • Published: Mar 05, 2025 19:01 IST | Updated: Mar 05, 2025 19:04 IST

As the Paris AI Action Summit 2025 descended into a cacophony of competing national interests and corporate agendas, it offered a perfect illustration of entropy in action: a system of global governance dissolving into disorder without sufficient organising energy to maintain coherence. This collapse of international cooperation on artificial intelligence (AI) standards reveals a fundamental truth applicable far beyond the diplomatic sphere: without intentional intervention, our AI ecosystems will naturally trend toward maximum chaos.

In thermodynamics, entropy represents an inexorable march toward disorder. Left to its own devices, an isolated system inevitably slides toward maximum disorder: its energy dispersing, its useful work diminishing, its structure dissolving into chaos. This fundamental principle may offer us a powerful lens through which to view our emerging AI governance challenges.
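
In Boltzmann's statistical formulation, this tendency has a precise measure: entropy counts the microscopic arrangements consistent with a system's macroscopic state,

S = k ln W

where k is Boltzmann's constant and W is the number of possible microstates. Disorder wins by default simply because disordered configurations vastly outnumber ordered ones.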

Entropy In Action

Consider our current AI ecosystem as a system subject to entropy. Without external intervention, without the application of energy, structure, and order, it naturally tends toward a state of maximum disorder. We're witnessing this entropy in real-time: unregulated data harvesting, model releases without safety audits, widening capability gaps between open and closed systems, proliferating synthetic content without provenance, and market consolidation that threatens to concentrate AI power in fewer hands with each passing quarter.

The "spontaneous evolution" of AI systems is already revealing concerning patterns. Large models released without adequate safeguards quickly find themselves exploited for harassment or misinformation. Effective techniques for jailbreaking guardrails spread rapidly across internet forums. Competitive pressures push companies to release capabilities before they can be properly secured. Each represents entropy in action: the natural tendency toward disorder playing out in predictable ways.

The Gemini Controversy

Take the controversy surrounding Google's Gemini Advanced, which generated historically inaccurate images, including depictions of Black Nazi soldiers and Asian Vikings, when prompted for historical figures. This wasn't malicious design but entropy at work: the natural tendency towards disorder as complex systems interact with human inputs in unpredictable ways. The public backlash demanded significant corporate energy to address, resulting in model retraining and the restriction of the model's image generation capabilities, but only after public trust had eroded. Had proper governance frameworks been in place beforehand, had ordered energy been applied proactively rather than reactively, this entropy spike might have been avoided.

Similarly, consider the recent controversy over OpenAI's temporary removal of election-related content policies, followed by their rapid reinstatement after public backlash. The brief policy vacuum created a disorder spike that required significant organisational energy to correct. Had proper governance been consistently maintained, this entropy surge would not have occurred.

DeepSeek Disruption

We saw entropy principles at work again when DeepSeek released their powerful open-source models in January 2025, challenging the market dominance of closed AI systems. While democratising access, the release unleashed new potential for misuse without corresponding safety mechanisms, sending ripples of disorder through the AI ecosystem. The AI community has since scrambled to develop post-hoc safeguards: a classic example of reactive rather than proactive entropy management.

Just as we cannot defy thermodynamics, we cannot expect the AI ecosystem to self-organise into a state that maximises human welfare. The second law of policy, if you will, suggests that beneficial order in complex systems requires intentional energy input.

Course-Correction Should Be Intentional

Consider the European Union's AI Act, finally implemented in its first phase in early 2025. It was an attempt to inject ordering energy into the system to prevent chaos: classifying AI applications by risk level, mandating transparency for high-risk systems, and requiring human oversight. Industry complained about compliance costs and other burdens, but the framework constitutes precisely the kind of structured intervention needed to counteract technological entropy.

China's approach offers another example of entropy management: real-name verification requirements for generative AI users, implemented in late 2024, and mandatory content filtering systems. These measures raise legitimate concerns about surveillance and privacy invasion, even as they aim to curb disorder within the ecosystem.

In the United States, the Biden administration's executive orders on AI safety were largely rescinded by President Trump in January 2025, removing the requirement for companies to share safety test results with the government before model releases. Some stakeholders had deemed the regulations too onerous, but the rollback effectively removed an entropy-fighting mechanism from the system. The market's response has been telling: more models released more quickly, but with less coordination on safety standards, creating exactly the kind of disorder that governance is meant to prevent. The recently concluded Paris AI Action Summit also portends a recalibration of global AI governance towards lighter regulation, which could generate more chaos than the ecosystem can tolerate.

Let Order Prevail

The most promising entropy-fighting initiatives may be coming from multi-stakeholder coalitions. The Frontier Model Forum, involving OpenAI, Anthropic, Google, and Microsoft, announced expanded safety collaboration protocols, creating shared standards for model evaluation and risk assessment. This amounts to companies voluntarily imposing order on themselves, an acknowledgement that even competitive markets benefit from certain entropy-reducing guardrails. The invisible hand of the market, as espoused by free-market economists, works through chaos, but it often benefits from rules and regulations that enforce governance principles and create a level playing field. Such rules create boundaries, establish responsibilities, require documentation, and attempt to align market incentives with broader societal values.

Yet not all energy inputs are equal. Poorly designed regulations can create their own forms of disorder: bureaucratic inefficiencies, innovation bottlenecks, or perverse incentives. The art of governance lies in applying the right kind of ordering energy at the right points in the system.

The Road Ahead

What might an entropy-aware approach to AI governance look like going forward? The question matters especially for India: as host of the next AI Summit and a technological powerhouse among developing countries, it should lead the effort to shape global AI governance.

First, it would recognise that maintaining beneficial order requires continuous energy input. One-time regulations will inevitably degrade as technology evolves, because new entropy sources emerge over time and infuse fresh chaos into the system. Instead, we need governance mechanisms that can adapt and respond to changing dynamics. The EU's tiered implementation, with regular review periods, acknowledges this reality by building in feedback loops.

Second, it would focus on establishing boundary conditions rather than micromanaging internal processes. Just as living organisms maintain low internal entropy by exchanging energy with their environment across carefully regulated boundaries, AI governance should set system-level parameters while allowing flexibility within them. The Global Partnership on AI's "Minimum Viable Governance" framework follows this maxim, establishing core requirements for safety, transparency, and accountability while leaving implementation flexible.

Distribute Responsibility

Third, it would distribute responsibility for order maintenance across the system. Stanford's newly launched AI governance institute, bringing together technologists, ethicists, and policymakers, represents a promising model of shared entropy management. The energy required to maintain order must come from multiple sources: not solely government mandates or corporate policies, but many stakeholders contributing to a shared governance ecosystem.

Finally, it would prioritise transparency. In thermodynamic systems, entropy is fundamentally about information: the number of possible states a system might occupy. In AI governance, information asymmetries represent a form of entropy that hampers effective oversight. Requiring a measure of transparency around training data, model architectures, and system limitations would improve the explainability and accountability of AI systems, reducing the disorder inherent in information gaps. The UK Competition and Markets Authority's recent investigation into AI partnership agreements addresses exactly this issue, demanding greater transparency around data sharing and model development to reduce the information asymmetries that create market disorder.
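
Claude Shannon made this link precise: the entropy of a system whose states occur with probabilities p_i is

H = - Σ p_i log p_i

and it peaks when every state is equally likely, which is exactly when an observer knows the least. Mandated disclosure shrinks the space of unknowns, lowering the informational entropy that regulators must contend with.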

The partnership announcements between major AI labs like OpenAI, Anthropic, and Google to share safety research represent one such entropy-fighting mechanism—a recognition that collaborative order produces better outcomes than competitive chaos.

We cannot let the AI ecosystem evolve in a vacuum any more than we can expect a room to spontaneously tidy itself. The laws of entropy remind us that beneficial order requires work—sustained, thoughtful intervention that respects the underlying dynamics of complex systems while guiding them toward states that benefit humanity. We would do well to remember this fundamental truth: governance is the energy we invest to fight the entropy that would otherwise consume our technological future. The question is not whether we should apply this energy, but how to apply it most effectively.

(Arindam Goswami is a Research Scholar at Takshashila Institution, working in the High-Tech Geopolitics Programme)

Disclaimer: These are the personal opinions of the author
