
Opinion: Addressing AI Anxiety - A Lesson from Zerodha's Playbook

Over the past few weeks, I've been talking to people from diverse corners of the IT industry in Bengaluru. The narrative is evolving. An eerie undertone is emerging from these conversations, painting a disconcerting picture of how AI and automation are disrupting lives in very personal ways.

In a cafe on a rainy afternoon, I found myself across the table from an employee of a large IT services company. He had been a software tester for a decade, his experienced eyes the gatekeepers between software flaws and smooth user experiences. But then the email came. His role, he was told, was being made redundant by automation technologies.

"There was no warning, no transition. Just an abrupt goodbye to a decade's worth of effort and expertise," the IT services engineer said, his fingers absent-mindedly stirring an untouched coffee. In his story, the human dimension to AI's ascent was not one of efficiency or triumph.

It was a story of displacement and anxiety.

Another day, in another part of the city, a young engineer from a buzzing startup shared her tale. Her eyes, filled with dreams of participating in India's booming startup ecosystem, clouded over as she narrated how her role was taken over by GPT algorithms. The efficient future had come at the cost of her job.

"They called it efficiency," she said with a bitter chuckle, looking out of the cafe window. "But it's hard to cheer for efficiency when your job is the cost."

As we inch forward into the AI revolution, these stories are a stark reminder of the human cost that often gets lost amidst the chatter about efficiencies and advancements. As corporations increasingly integrate AI into their operations, the genuine human anxiety surrounding this transformation must be addressed.

Enter Kailash Nadh, CTO of Zerodha, India's largest retail stockbroker. His recent blog post offered a compelling narrative on how his organization is navigating the choppy waters of AI implementation. Recognizing the AI-induced turbulence, Nadh steered Zerodha towards creating an "AI anxiety policy".

You can read Nadh's blog, "This time, it feels different."

How Zerodha is Confronting AI Anxiety

When Zerodha adopted the GPT-4 model, it wasn't for hype or as a publicity stunt. It was because the technology brought tangible, measurable benefits, swiftly integrating into the organizational fabric and performing at an uncanny level. This "highly sophisticated, powerful, poorly understood black box system" quickly showed its capacity to impact job roles, potentially making a sizeable fraction of them obsolete.

Rather than seizing on AI's undeniable benefits to wield the axe on jobs, Zerodha consciously chose a humanistic approach. The company created a 'policy for AI anxiety,' stating, "No one at Zerodha will lose their job if a technology implementation (AI or non-AI) directly renders their existing responsibilities and tasks obsolete."

Instead, efforts would be made to reorient and reskill individuals, creating new opportunities for growth within the organization.

Zerodha's approach serves as a ray of hope and sets a precedent that other organizations can emulate. It is a model that should inspire HR leaders, CEOs, and founders across India's IT landscape - from large services companies to emerging startups. Navigating the AI revolution need not be a zero-sum game.

To be sure, this policy isn't an unconditional AI shield or a failsafe against the relentless march of AI. But it serves as a commitment to the people behind the machines, a pledge to acknowledge the human anxieties accompanying these advancements. It underlines the need for empathy when leveraging transformative technologies. With this policy, Zerodha puts the conversation about AI in its right place - focused not merely on what AI can do but also on what it should do and how it impacts the most critical element of any organization - its people.

As Stephen Hawking said, "AI could be the worst event in the history of our civilization. It brings dangers like powerful autonomous weapons or new ways for the few to oppress the many." It is incumbent upon leaders to ensure that AI is harnessed responsibly.

Given India's demographic advantage, we must pause and ask: can we afford to treat this as a Hobson's choice? As home to the world's largest young population, we must learn to ride the wave of AI, not get swept away by it.

As we face an AI-infused future, Zerodha's playbook offers a simple lesson: it's always better to be prepared than to be surprised.

Humans and AI

Building on what Kailash Nadh wrote, I am compelled to look beyond AI's immediate, more apparent threats. Even as we grapple with the real prospects of job obsolescence, there lies a subtle, almost imperceptible danger - the steady erosion of human agency.

Nadh's words resonate as he presents a future where corporations, governments, and societies might offload a growing share of decision-making to AI systems for efficiency and convenience. The trend may initially seem innocuous - like frogs content in slowly boiling water, oblivious to the impending peril. In an era driven by a peculiar blend of FOMO and frenzy, we may be unknowingly nudged towards these technologies whenever the alternatives lag in efficiency.

This risks creating a world where decision-making becomes increasingly opaque, untraceable, and unexplainable - where over-engineered systems and their multi-layered abstractions have more say than our own voices. Nadh's thought experiment about automated online account blocks - today a frustrating yet contained nuisance - extending to the societal level sends a chill down my spine.

Human agency is not a negotiable commodity; it is the essence of our individuality, the spark that makes us more than the sum of our biological parts. Our will and capacity to make conscious decisions should not and cannot be usurped by any AI, however advanced or efficient.

So as we stand on the cusp of this AI-powered future, let us remember Kailash Nadh's fear about the gradual erosion of human agency. His paranoia, like his prescience, is worth noting. The climate change analogy hasn't aged well; one can only hope this apprehension fares better. Let it not be another dire warning lost, only to be heeded when it's too late.

Nadh hopes his fears will age like milk, but he prepares as if they will not. A mix of paranoia and prescience seems apt in our times.

It's clear that as we continue to integrate AI into our workflows, the human dimension of this transformation cannot be ignored. Embracing AI does not necessitate surrendering our humanity. On the contrary, it allows us to reimagine our roles and redefine our organizations. The road ahead is not about choosing AI over people but about using AI to uplift people.

(Pankaj Mishra has been a journalist for over two decades and is the co-founder of FactorDaily.)

Disclaimer: These are the personal opinions of the author.
