Every time a new jobs report drops and AI is mentioned in the same breath, the conversation follows a familiar script. How many roles are at risk. Which sectors will recover. Whether universal basic income is the answer. Whether reskilling pipelines can move fast enough. The debate is urgent, well-funded, and organised around a single question:

What happens to labour?

I want to argue that this is the wrong question. Not an incomplete question. The wrong one.

India’s IT sector employed between 7.5 and 8 million people by 2023. The sector shed over 50,000 jobs in 2024, mostly at entry level. NITI Aayog’s October 2025 roadmap projects a further contraction to approximately 6 million workers by 2031. These numbers are stark. But they do not capture what is actually being lost, and until policy frameworks reckon with that, the interventions we design will address the symptom while the disease deepens.

What is being lost is significance. Not income. Not employment.

Significance is the social condition in which human judgment, effort, and presence are experienced as mattering, by the person and by the community around them.

This is not a therapeutic observation. It is a structural one. And it requires a different kind of policy response than anything currently on the table.

The labour economics frame has a long and reasonably successful history. When the mechanisation of textile production displaced English handloom weavers, when electrification reorganised manufacturing, when computerisation hollowed out clerical work, the policy response was always the same: retrain displaced workers for the next tier of the economy. Each time, the logic held, at least partially, because the disruption affected work that was primarily manual or routine. A displaced weaver could retrain as a factory operative. A bookkeeper replaced by accounting software could retrain as a systems administrator. The market had a destination, and the policy problem was to get workers there.

Generative AI has broken that logic categorically. The destination is now automated too. AI systems already score in the top 10 percent on bar exam simulations, answer approximately 90 percent of medical licensing questions correctly, and, on software engineering benchmarks, have gone from solving 4.4 percent of problems in 2023 to 71.7 percent by the end of 2024. The knowledge economy is no longer the promised land. The reskilling pipeline leads to a platform where the AI is already waiting.

Even the most serious critics of mainstream AI policy remain trapped inside the labour frame. Daron Acemoglu’s influential argument that AI is being directed toward labour replacement rather than labour augmentation is a significant challenge to techno-optimism. But it is still asking how to maximise human economic participation in an AI-reorganised market. It does not ask what happens to human significance when the social condition of mattering can no longer be secured by economically productive labour at all.

That is the question India urgently needs to ask.

The ILO’s India Employment Report 2024 found that the share of educated youth among India’s unemployed young people nearly doubled, from 35.2 percent in 2000 to 65.7 percent in 2022. A 2024 IIM Ahmedabad study found that 68 percent of white-collar employees expected AI to partially or fully automate their jobs within five years. These are not just labour market statistics. They forecast the dissolution of a social architecture that millions of families built their lives around.

Consider what the IT sector’s expansion actually meant for Indian households from the 1990s onward. For families whose social standing had been organised around agricultural land, government service, or local trade, a son or daughter in software was not merely a source of income. It was evidence that the family’s position in the emerging order was secured. The software engineer’s salary was never just money. It was the material expression of a reorganised social identity, legible within the family, within the community, across caste lines that had fixed people’s prospects before they were born.

When AI displaces a paralegal in Mumbai or a junior analyst in Bengaluru, it does not merely take a salary. It takes the social legibility the role carried. The position in the family hierarchy. The authority within the community. The answer to the question “what do you do?” that has organised self-worth for a generation.

Western policy discourse addresses this through what I call the therapeutic register: loneliness, purpose, meaning, individual wellbeing. These categories are structurally foreign to the Indian context, where significance is not an inner state that individuals cultivate but a social fact that communities produce and assign. Who you are is not separable from what you do and from what your family’s relationship to the world of work has been.

The caste and gender dimensions make this more acute, not less. MIT Technology Review’s 2025 investigation found systematic caste bias encoded in AI outputs across multiple models. A Dalit researcher using an AI tool to polish an academic application found his surname silently changed to an upper-caste one, the system having determined that the upper-caste name appeared more frequently in its training data on academic circles. This is not an isolated glitch. It is a precise example of how AI systems reproduce, at industrial scale, the social legibility hierarchies that caste has organised for centuries. For women, the IMF’s 2024 data shows that in high-income countries, 9.6 percent of female employment falls into the highest AI-exposure category, nearly three times the proportion for male jobs. In India, where women’s formal sector participation is concentrated in exactly the cognitive service roles most exposed to AI substitution, the scale of that exposure is a structural threat to a renegotiation of gendered significance that has barely begun.

The standard policy toolkit of reskilling programmes, income support, and UBI proposals is not cynical. It reflects genuine concern. But as a framework for addressing what I call the dignity deficit, it will fail structurally.

A World Bank study on Rohingya refugees found that employment improved psychosocial wellbeing roughly four times as much as an equivalent cash transfer did. The mechanism was not income. It was the felt experience of contributing to something. When cash is delivered without contribution, it does not restore the social architecture. Research on UBI experiments has found suggestive evidence that income decoupled from work can actually reduce a recipient’s perceived status within the family. Policy designed to address a labour market problem cannot address a relational collapse. It will reinforce the premise that generated it: that human worth is contingent on productivity as measured by markets, precisely when that market is contracting fastest for the people who have had the least margin for error.

The governance shift I am arguing for treats the preservation of social architectures for human contribution as a policy goal in its own right, not a residual to be addressed after economic adjustment. This means requiring Social Architecture Impact Assessments before large-scale AI deployment, building Relational Employment Indices into existing labour surveys, mandating Human Presence Floors in high-stakes domains like education and welfare, and requiring that algorithmic decisions affecting citizens be explained in the language of the person affected.

The Indian government is one of the largest AI procurers in the world. Every AI system procured with public funds should be required to pass a caste and gender bias audit before deployment. Not as a precaution against hypothetical risk but as a documented operational necessity.

The dignity deficit is not a future risk. It is already underway in the dissolved junior cohorts of Bengaluru’s IT sector, in the Dalit academic whose AI-rewritten application signals an identity he did not choose, in the woman whose white-collar significance is being automated faster than governance can register it. The question is not whether policy will eventually address this. It is whether it will address the actual problem, or continue to invest, expensively and earnestly, in the wrong frame.


