To:
Ms. Shravani Prakash, Ms. Tanu M. Goyal, and Ms. Chellsea Lauhka
c/o The Hindu, Chennai / Delhi, India
Subject: A Rejoinder to "The Upskilling Gap: Why Women Risk Being Left Behind by AI"
Dear Authors,
I write in response to your article, "The upskilling gap: why women risk being left behind by AI," published in The Hindu on 24 December 2025, with considerable appreciation for its clarity and rigour. Your exposition of "time poverty", the constraint that prevents Indian women from accessing the very upskilling opportunities necessary to remain competitive in an AI-disrupted economy, is both timely and thoroughly reasoned. The statistic that women spend ten hours fewer per week on self-development than men is a clarion call that demands immediate attention from policymakers and institutional leaders.
Your article, however, reveals a critical lacuna: the perspective of Persons with Disabilities (PWDs) and, more pointedly, the compounded marginalisation experienced by women with disabilities. While your arguments hold considerable force for women in general, they apply with still greater severity to disabled women navigating this landscape. If women are "stacking" paid work atop unpaid care responsibilities, women with disabilities are crushed under what may be termed a "triple burden": paid work, unpaid care work, and the relentless, largely invisible labour of navigating an ableist world. Disability studies gives this temporal dimension a name, "Crip Time": the unseen expenditure of emotional, physical, and administrative energy required simply to move through a society not designed for disabled bodies.
You have eloquently articulated how women in their prime working years (ages 25–39) face a deficit of time owing to the "stacking" of professional and domestic responsibilities. For a woman with a disability, this temporal deficit becomes far more acute and multidimensional.
Consider the following invisible labour burdens:
Administrative and Bureaucratic Labour. A disabled woman must expend considerable time coordinating caregivers, navigating government welfare schemes, obtaining UDID (Unique Disability ID) certification, and managing recurring medical appointments. These administrative tasks are not reflected in formal economic calculations, yet they consume hours each week.
Navigation Labour. In a nation where "accessible infrastructure" remains largely aspirational rather than actual, a disabled woman may take three times as long to commute to her place of work or to complete the household tasks you enumerate in your article. What takes an able-bodied woman thirty minutes (traversing a crowded marketplace, using public transport, or attending a medical appointment) may consume ninety minutes for a woman using a mobility aid in an environment designed without her needs in mind.
Emotional Labour. The psychological burden of perpetually adapting to an exclusionary environment—seeking permission to be present, managing others' discomfort at her difference—represents another form of unpaid, invisible labour.
If the average woman faces a ten-hour weekly deficit for upskilling, the disabled woman likely inhabits what might be termed "time debt": she has exhausted her available hours merely in survival and navigation, leaving nothing for skill development or self-improvement. She is not merely "time poor"; she exists in a state of temporal deficit.
Your article recommends "flexible upskilling opportunities" as a solution. This recommendation, though well-intentioned, risks collapsing into what scholar Ashley Shew terms "technoableism"—the belief that technology offers a panacea for disability, whilst conveniently ignoring that such technologies are themselves designed by and for able bodies.
The Inaccessibility of "Flexible" Learning. Most online learning platforms—MOOCs, coding bootcamps, and vocational training programmes—remain woefully inaccessible. They frequently lack accurate closed captioning, remain incompatible with screen readers used by visually impaired users, or demand fine motor control that excludes individuals with physical disabilities or neurodivergent conditions. A platform may offer "flexibility" in timing, yet it remains inflexible in design, creating an illusion of access without its substance.
The Burden of Adaptation Falls on the Disabled Person. Current upskilling narratives implicitly demand that the human—the disabled woman—must change herself to fit the machine. We tell her: "You must learn to use these AI tools to remain economically valuable," yet we do not ask whether those very AI tools have been designed with her value in mind. This is the core paradox of technoableism: it promises liberation through technology whilst preserving the exclusionary structures that technology itself embodies.
Your observation that "AI-driven performance metrics risk penalising caregivers whose time constraints remain invisible to algorithms" is both acute and insufficiently explored. Let us examine this with greater precision.
The Hiring Algorithm and the "Employment Gap." Modern Applicant Tracking Systems (ATS) and AI-powered hiring tools are commonly configured to flag employment gaps as indicators of risk. Consider how these gaps are interpreted differently:
For women, such gaps typically represent maternity leave, childcare, or eldercare responsibilities.
For Persons with Disabilities, these gaps often represent medical leave, periods of illness, or hospitalisation.
For women with disabilities, the algorithmic penalty is compounded: a resume with gaps longer than a set threshold, commonly six months, can be filtered out automatically before any human reviewer examines it, eliminating qualified disabled women from consideration entirely. A deliberately simplified sketch of such a filter follows this list.
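To make the mechanism concrete, permit me a deliberately simplified sketch in Python. The field names, the six-month threshold, and the sample dates are all invented for illustration; production ATS logic is proprietary and far more elaborate.

```python
# A deliberately simplified sketch of an ATS-style employment-gap filter.
# Thresholds and data are invented; real systems are proprietary.
from datetime import date

def employment_gaps(stints: list[tuple[date, date]]) -> list[int]:
    """Return the gaps, in days, between consecutive employment stints."""
    stints = sorted(stints)
    return [(start - prev_end).days
            for (_, prev_end), (start, _) in zip(stints, stints[1:])]

def passes_gap_screen(stints: list[tuple[date, date]], max_gap_days: int = 183) -> bool:
    # Note what is absent: there is no field recording *why* a gap exists.
    # Maternity leave, hospitalisation, and care work are all collapsed
    # into a single undifferentiated signal of "risk".
    return all(gap <= max_gap_days for gap in employment_gaps(stints))

# Role 2 begins roughly ten months after role 1 ends (e.g. medical leave).
resume = [(date(2015, 1, 1), date(2019, 3, 1)),
          (date(2020, 1, 15), date(2024, 6, 1))]
print(passes_gap_screen(resume))  # False: rejected before any human reads it
```

The point of the sketch is the absence: nothing in the data model can distinguish a sabbatical from a hospitalisation, so the penalty falls hardest on those whose lives least resemble the unbroken career the filter presumes.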
Research audits have documented this discrimination. In one audit, a hiring algorithm disproportionately flagged minority candidates as needing human review because those candidates tended to give shorter responses during video interviews, which the system interpreted as "low engagement".
Video Interviewing Software and Facial Analysis. Until it discontinued the practice in January 2021, the video interviewing platform HireVue used facial analysis to assess candidates' suitability, evaluating eye contact, facial expressions, and speech patterns as proxies for "employability" and honesty. The system exemplified technoableism in its purest form:
A candidate with autism who avoids direct eye contact is scored as "disengaged" or "dishonest," despite evidence that autistic individuals process information differently and that their eye contact patterns reflect cognitive difference, not deficiency.
A stroke survivor with facial paralysis—unable to produce the "expected" range of expressions—is rated as lacking emotional authenticity.
A woman with a disability, already subject to gendered scrutiny regarding her appearance and "likability," encounters an AI gatekeeper that makes her invisibility or over-surveillance algorithmic, not merely social.
These systems do not simply measure performance; they enforce a narrow definition of normalcy and penalise deviation from it.
To substantiate these claims, consider these well-documented instances of algorithmic discrimination:
Speech Recognition and Dysarthria. Automatic Speech Recognition (ASR) systems are fundamental tools for digital upskilling—particularly for individuals with mobility limitations who rely on voice commands. Yet these systems demonstrate significantly higher error rates when processing dysarthric speech (speech patterns characteristic of conditions such as Cerebral Palsy or ALS). Recent research quantifies this disparity:
For severe dysarthria across all tested systems, word error rates exceed 49%, compared to 3–5% for typical speech.
Character-level error rates have historically ranged from 36% to 51%, though fine-tuned models have reduced this to 7.3%.
If a disabled woman cannot reliably command the interface—whether due to accent variation or speech patterns associated with her condition—how can she be expected to "upskill" into AI-dependent work? The platform itself becomes a barrier.
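For readers unfamiliar with the metric: word error rate (WER) counts the substitutions, insertions, and deletions an ASR system makes against a reference transcript, divided by the reference length. A minimal sketch using the open-source jiwer library, with invented transcripts chosen to mirror the disparity reported above:

```python
# A minimal sketch of how ASR error disparities are quantified.
# Requires: pip install jiwer. Transcripts are invented for illustration.
import jiwer

reference = "please open the training module and start the first lesson"

# Hypothetical outputs for the same utterance from two speakers.
typical_output = "please open the training module and start the first lesson"
dysarthric_output = "please often the train module an star the first less"

print(f"WER, typical speech:    {jiwer.wer(reference, typical_output):.0%}")    # 0%
print(f"WER, dysarthric speech: {jiwer.wer(reference, dysarthric_output):.0%}") # 50%
```

A WER of 50 per cent means every second word is wrong: voice commands misfire, dictated answers are mangled, and the "hands-free" interface quietly stops being usable at all.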
Facial Recognition and the Intersection of Race and Gender. The "Gender Shades" study, conducted at the MIT Media Lab, documented severe bias in commercial facial recognition systems, with gender-classification error rates varying dramatically by skin type and gender:
Error rates for gender classification in lighter-skinned men: less than 0.8%
Error rates for gender classification in darker-skinned women: 20.8% to 34.7%
Amazon Rekognition similarly misclassified 31 per cent of darker-skinned women as men in a follow-up audit. For a disabled woman of colour seeking employment or accessing digital services, facial recognition systems compound her marginalisation: she is either rendered invisible (failed detection) or hyper-surveilled (flagged as suspicious).
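The methodological core of Gender Shades is simple and worth stating plainly: report error rates per subgroup, never only in aggregate. A minimal sketch with invented records:

```python
# A minimal sketch of a Gender Shades-style subgroup error audit.
# Records are invented; real audits use curated, balanced benchmarks.
from collections import Counter

# Each record: (subgroup, prediction_correct)
results = [
    ("lighter-skinned man", True), ("lighter-skinned man", True),
    ("lighter-skinned man", True), ("lighter-skinned man", True),
    ("darker-skinned woman", False), ("darker-skinned woman", True),
    ("darker-skinned woman", False), ("darker-skinned woman", True),
]

totals = Counter(group for group, _ in results)
errors = Counter(group for group, correct in results if not correct)

print(f"aggregate error rate: {sum(errors.values()) / len(results):.0%}")
for group in totals:
    print(f"{group}: {errors[group] / totals[group]:.0%}")
```

In this toy sample the aggregate error rate is 25 per cent, a figure that looks merely mediocre; the disaggregated figures reveal a system that fails one subgroup half the time and another not at all.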
The Absence of Disability-Disaggregated Data. Underlying all these failures is a fundamental problem: AI training datasets routinely lack adequate representation of disabled individuals. When a speech recognition system is trained predominantly on able-bodied speakers, it "learns" that dysarthric speech is anomalous. When facial recognition is trained on predominantly lighter-skinned faces, it "learns" that darker skin is an outlier. Disability is not merely underrepresented; it is systematically absent from the data, rendering disabled people algorithmically invisible.
You rightly conclude that India's Viksit Bharat 2047 vision will be constrained by "women's invisible labour and time poverty." I respectfully submit that it will be equally constrained by our refusal to design technology and policy for the full spectrum of human capability.
True empowerment cannot mean simply "adding jobs," as your article notes. Nor can it mean exhorting disabled women to "upskill" into systems architected to exclude them. Rather, it requires three concrete interventions:
First, Inclusive Data Collection. Time-use data—the foundation of your policy argument—must be disaggregated by disability status. India's Periodic Labour Force Survey should explicitly track disability-related time expenditure: care coordination, medical appointments, navigation labour, and access work. Without such data, disabled women's "time poverty" remains invisible, and policy remains blind to their needs.
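By way of illustration, this is what disability-disaggregation looks like in practice. The column names and figures below are invented; a real analysis would run against PLFS or Time Use Survey microdata with its actual variable names.

```python
# A minimal sketch of disability-disaggregated time-use analysis.
# Requires: pip install pandas. All figures are invented.
import pandas as pd

df = pd.DataFrame({
    "gender":               ["F", "F", "F", "F"],
    "has_disability":       [False, False, True, True],
    "care_hours_wk":        [30, 28, 32, 35],
    "access_labour_hrs_wk": [0, 0, 12, 15],  # navigation, paperwork, care coordination
    "upskilling_hrs_wk":    [4, 5, 0, 1],
})

# An undisaggregated average reports a single figure for "women"...
print(df["upskilling_hrs_wk"].mean())

# ...while disaggregation by disability status makes the time debt visible.
print(df.groupby("has_disability")[
    ["care_hours_wk", "access_labour_hrs_wk", "upskilling_hrs_wk"]
].mean())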
Second, Accessibility by Design, Not Retrofit. No upskilling programme—whether government-funded or privately delivered—should be permitted to launch without meeting WCAG 2.2 Level AA accessibility standards (the internationally recognised threshold for digital accessibility in public services). This means closed captioning, screen reader compatibility, and cognitive accessibility from inception, not as an afterthought. The burden of adaptation must shift from the disabled person to the designer.
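Some of the checks named above can even be enforced mechanically at build time. A minimal sketch using BeautifulSoup follows; it is emphatically not a substitute for full WCAG 2.2 AA conformance testing, much of which requires human judgement, but it illustrates "accessibility by design" as a launch gate rather than a retrofit.

```python
# A minimal sketch of two automated accessibility gates for course content.
# Requires: pip install beautifulsoup4. Illustrative only; full WCAG 2.2 AA
# conformance also demands manual review.
from bs4 import BeautifulSoup

html = """
<video src="lesson1.mp4"></video>
<img src="diagram.png">
<img src="logo.png" alt="Course logo">
"""

soup = BeautifulSoup(html, "html.parser")
failures = []

# WCAG 1.1.1 (Non-text Content): every image needs a text alternative.
failures += [f"image {img.get('src')!r} has no alt text"
             for img in soup.find_all("img") if not img.get("alt")]

# WCAG 1.2.2 (Captions, Prerecorded): every video needs a captions track.
failures += [f"video {vid.get('src')!r} has no captions track"
             for vid in soup.find_all("video")
             if not vid.find("track", attrs={"kind": "captions"})]

for failure in failures:
    print("FAIL:", failure)  # a CI pipeline would block the launch here
```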
Third, Mandatory Algorithmic Audits for Intersectional Bias. Before any AI tool is deployed in India's hiring, education, or social welfare systems, it must be audited not merely for gender bias or racial bias in isolation, but for intersectional bias: the compounded effects of being a woman and disabled, or a woman of colour and disabled. Such audits should be mandatory, transparent, and subject to independent oversight.
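One concrete form such an audit could take is a selection-rate comparison across intersectional subgroups, borrowing the "four-fifths" heuristic from employment-testing practice. A minimal sketch with invented data:

```python
# A minimal sketch of an intersectional selection-rate audit using the
# four-fifths heuristic. Data are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "gender":     ["M", "M", "M", "F", "F", "F", "F", "F"],
    "disability": [False, False, True, False, False, False, True, True],
    "selected":   [1, 1, 1, 1, 1, 0, 0, 0],
})

# Selection rate per intersectional subgroup, not per axis in isolation.
rates = df.groupby(["gender", "disability"])["selected"].mean()
print(rates)

# Flag any subgroup whose rate falls below 80% of the best-off subgroup's.
threshold = 0.8 * rates.max()
for group, rate in rates.items():
    if rate < threshold:
        print(f"potential adverse impact for {group}: {rate:.0%}")
```

Auditing gender alone would already raise a flag here, but only the intersectional view reveals the worst case: in this toy sample, disabled women are selected at a rate of zero.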
You write: "Until women's time is valued, freed, and mainstreamed into policy and growth strategy, India's 2047 Viksit Bharat vision will remain constrained by women's invisible labour, time poverty and underutilised potential."
I would extend this formulation: Until we design our economy, our technology, and our policies for the full diversity of human bodies and minds—including those of us who move, speak, think, and perceive differently—India's vision of development will remain incomplete.
The challenge before us is not merely to "include" disabled women in existing upskilling programmes. It is to fundamentally reimagine what "upskilling" means, for whom it is designed, and whose labour and capability we choose to value. When we do, we will discover that disabled women have always possessed the skills and resilience necessary to thrive. Our task is simply to remove the barriers we have constructed.
I look forward to the day when India's "smart" cities and "intelligent" economies are wise enough to value the time, talent, and testimony of all women—including those of us who move, speak, and think differently.
Yours faithfully,
Nilesh Singit
Distinguished Research Fellow
CDS, NALSAR &
Founder, The Bias Pipeline
https://www.nileshsingit.org/