The controversy generated by the ouster, near return and then triumphant homecoming of Sam Altman as the CEO of OpenAI has made explicit many threads that had so far remained implicit or unspoken. These points have significant consequences for the global politics of knowledge raised by research around generative AI. That the launch of ChatGPT was nothing short of a tectonic shift in the field of computation is now beyond doubt. With over 100 million weekly users and climbing, the technology will leave no field of human life untouched in the days ahead. That it has hastened a race for competing products among other tech giants (such as Google, Meta, Amazon and Apple) as well as independent start-ups points to how consequential these companies consider this technology to be for their survival. While all details around the controversy are still emerging, the statements and actions of key players, the dissolution of the earlier board and the constitution of a new one clearly point to a schism between two visions of the technology's future.
The circumspect vision (affiliated with the Effective Altruism movement), which sought to put caution and safety ahead of monetisation and innovation, competed with a seemingly centrist one that sought to moderate fears around safety while accelerating innovation and placing safeguards against risk and harm. That its founders decided to launch it as a non-profit in 2015, shielding it from profit-seeking structures and the pressures of monetisation, shows they knew this was no ordinary technology. But even as its initial stages could be funded through non-profit means, as the promise of the innovation became clearer and the computing power needed to scale it up mounted, corporate funding became indispensable, leading Microsoft to commit a staggering 10 billion dollars to the company in January. That this early bet has paid off is evident in the whole suite of programs and applications that Microsoft has already enhanced or launched using OpenAI's generative AI technology. Primary among them are Copilot, an AI assistant now embedded across Microsoft products (including its browser Edge), and the chatbot function in its search engine Bing. These innovations have placed the company ahead of its traditional rivals in the AI race, including Google, which had long seemed to be far outpacing Microsoft given its global dominance in search, online advertising and Android (among others).
The ensuing panic has led to a scramble for a share of the emerging generative AI start-up space, including Anthropic, founded by former OpenAI employees, in which both Google and Amazon now have significant investments. These changes have also included internal restructuring within these companies to prioritise generative AI as the next digital frontier or, as Satya Nadella put it, "the next major wave of computing". What does this next wave mean for global knowledge production and for the key questions that scholars in disciplines ranging from media studies and international relations to the humanities have long been concerned with? How must cultures outside and away from these centres of innovation, but likely to be significantly affected by them, prepare themselves for these oncoming changes?
While all aspects of our digital experience will be reshaped by this technology, perhaps the most consequential of them all is the ability of generative AI based on large language models to function as the oracle of our times. Its prophecies on questions of fact, from pizza recipes to the molecular structure of a chemical compound, could very well transform the nature of knowledge. But when engaging with issues imbricated within complexities of ideology, politics, culture and history, it may stir up more storms than it satisfies curious minds. The many caveats it must currently give before answering any such questions point to the minefields that lie ahead as it is adopted more widely. If the current goals to achieve Artificial General Intelligence (AGI) pan out, GPTs could very well replace search engines as the primary means of sorting through the web's firehose of knowledge. The consequences of such a transition will be far reaching.
By responding with a list of relevant websites that best answer our query, Google search results maintain the illusion of user agency in shaping our digital trajectory. In doing so they also eschew any liability for the new and uncharted pathways we may embark upon. Generative AI, with its monovocal responses that provide a final synthesis of knowledge instead of the chaotic plurality of Google search results, must invariably make choices about what content to include and what to exclude in framing its responses. And despite all its caveats and equivocations, it will also have to take positions on the burning questions of our age and take sides in the most pressing debates of our times. A fractious polity such as India's, where ongoing debates about the vision of the future and contestations about the past are enmeshed within the ever-unresolved questions of culture, identity and politics, is an example where a singular authoritative account of most things (despite the caveats and qualifiers) will only erase much needed complexity.
When taken to the global/international context, this singular authoritative voice poses critical questions: whose voice, whose ideology, and whose culture? In a world teeming with the messy plurality of differences, any quest for universally acceptable answers has invariably privileged some perspectives over others. Arguably, the battles over competing worldviews and epistemes, and a resistance to subsuming global differences under singular explanations, have defined humanistic scholarship of all kinds. When the formerly colonised nations of the global south made a case for more equity in the world of information and culture, implicit behind their pleas was the idea that knowledge and power were inextricably intertwined, and indeed that cultural and symbolic power prepared the grounds for, and endured longer than, corporeal and material power. Those pleas take on a new urgency today.
When the digital oracle speaks, whose nudges and prods will its so-called "neutral" voice be more influenced by? Given that its answers rely on training datasets that could, at least theoretically, comprise all human knowledge, good arguments could be made that generative AI represents the best of all human knowledge. But as that other famous experiment to create "the sum of all human knowledge" - Wikipedia - shows, the digital domain can at best reflect and at worst amplify the global inequities in knowledge production in the offline world. Large language models can only learn from what is already published online, placing at an immediate disadvantage those knowledge forms that have not been digitised or, worse, do not even exist in print form (e.g. oral cultures). These inequities will dampen any efforts by GPTs to give us the best of all human knowledge, instead limiting them to a narrow sub-set of it while treating that sub-set as the whole.
Sangeet Kumar is a professor of media studies at Denison University and the author of The Digital Frontier: Infrastructures of Control on the Global Web (Indiana University Press, 2021).
Disclaimer: These are the personal opinions of the author.