
Sunday, 29 March 2026

The Roots of Technoableism: Why Forced AI Coding Is Making the Web Less Accessible

A black and white editorial cartoon titled "TECHNO-ABLEISM OFFICE" illustrates a conflict over accessible design. A menacing robot labeled "ZABARDASTI AI" spews bubbles like "INACCESSIBLE" and "NO ARIA LABELS," while a shouting manager commands a stressed young programmer, "USE IT! MANDATORY ZABARDASTI! WE DON'T NEED YOUR 'ACCESSIBILITY' SLOWDOWN!" The programmer points to a computer, with a thought bubble quoting an article from The Hindu about forcing AI code making it brittle. An older man in a wheelchair reading THE TIMES comments that "WIPE CODING" for speed only "wipes the inclusion part."
The "Zabardasti AI" Mandate: A Cartoon on Corporate Techno-Ableism and Inaccessibility

John Xavier's article in The Hindu, published on 28 March 2026, raises a concern that deserves serious attention. He shows how companies that force artificial intelligence tools upon their employees tend to achieve the opposite of what they set out to do. Workers at Shopify, Duolingo, and Coinbase have been told, in effect, that refusing AI is the same as refusing a future at the company. The predictable result, as Xavier documents, is quiet sabotage: skipped training sessions, deliberately poor inputs to game dashboards, and a slow return to older methods. He argues that this failure is rooted in the destruction of psychological safety, the condition under which people feel genuinely free to take risks, ask questions, and speak up without fear of punishment.

This analysis is sound. When a mandate arrives as a threat, the natural human response is not enthusiasm but self-protection. Workers comply on the surface and resist underneath. Xavier is correct to say that treating a cultural challenge as though it were a process re-engineering problem is a category error at the leadership level.

The difficulty is that the article stops there. It treats forced AI adoption as a problem between the company and its employees. The harm that flows downstream, past the employee, to the people who will use the software that these employees produce, does not appear in the frame at all. That harm has a name. It is called technoableism, and in a country governed by the Rights of Persons with Disabilities Act 2016, and bound by the United Nations Convention on the Rights of Persons with Disabilities, it is not merely an ethical concern. It is a legal one.

What Technoableism Is

The concept was developed by Ashley Shew, a scholar at Virginia Tech, in her 2023 book Against Technoableism: Rethinking Who Needs Improvement. Shew argues that the technology sector operates almost entirely within the medical model of disability, which treats disabled bodies and minds as defects requiring correction. The non-disabled body is taken as the norm. Disabled people appear in this framework, if they appear at all, as special cases at the margins of design, not as primary stakeholders with rights.

This assumption finds its way into artificial intelligence systems in a simple and well-documented manner. AI coding tools learn from the code that already exists on the internet. That code was written, in the overwhelming majority of cases, without any attention to disability. It was written for users who see screens clearly, who navigate with a mouse, who can process information quickly, and who have no sensory or cognitive differences. When an AI coding assistant is trained on this material, it absorbs these assumptions. Its suggestions then carry them forward into new software, and the exclusion that was already present in old code is reproduced, at greater speed, in the new.

What the Research Demonstrates

This is not a theoretical risk. Researchers Prakriti Mowar and colleagues, whose work was published at the CHI Conference on Human Factors in Computing Systems in 2024, studied how developers actually behave when they use AI coding assistants to build web interfaces. Three problems appeared consistently. Developers routinely forgot to request accessible features from the tool. When the tool offered suggestions, developers accepted them even when the output was incomplete, for instance accepting a placeholder text attribute for an image without ever replacing it with a meaningful description. And developers found it genuinely difficult to verify, on their own, whether the code the tool had produced met any recognised accessibility standard.

Separate research examining code generated by tools such as ChatGPT and GitHub Copilot found that the output regularly broke basic rules. Headings appeared in the wrong order. Interactive elements lacked proper labels. Keyboard navigation failed entirely in several cases. One developer who tried supplying explicit Web Content Accessibility Guidelines to Copilot as part of the prompt still reported that the tool continued suggesting invalid heading structures. These are not cosmetic problems. A heading structure that is out of order is not an inconvenience for a sighted user navigating with a mouse. For a blind user relying on a screen reader, it can make a page entirely unusable.
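
To make these failure modes concrete, the sketch below (Python, standard library only) runs a toy audit over a hypothetical snippet constructed to mirror the problems the research describes: a skipped heading level, placeholder alt text, and an icon-only button with no accessible name. It is an illustration, not output reproduced from any particular tool, and a real review would use a full WCAG testing engine rather than this minimal parser.

```python
# A minimal, illustrative audit. The HTML snippet is hypothetical, written to
# mirror the kinds of failures the cited research describes; it is not output
# captured from any specific AI coding assistant.
from html.parser import HTMLParser

SAMPLE = """
<h1>Dashboard</h1>
<h3>Recent activity</h3>          <!-- skips h2: broken heading order -->
<img src="chart.png" alt="image"> <!-- placeholder alt text never replaced -->
<button><svg></svg></button>      <!-- icon-only button with no accessible name -->
"""

class AccessibilityAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.last_heading = 0
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            level = int(tag[1])
            if self.last_heading and level > self.last_heading + 1:
                self.issues.append(f"<{tag}> skips a level after <h{self.last_heading}>")
            self.last_heading = level
        if tag == "img":
            alt = (attrs.get("alt") or "").strip().lower()
            if alt in {"", "image", "picture", "photo"}:
                self.issues.append("<img> has missing or placeholder alt text")
        if tag == "button" and not (attrs.get("aria-label") or attrs.get("aria-labelledby")):
            # A visible text label inside the button would also count; a fuller
            # audit would inspect element content, not just the start tag.
            self.issues.append("<button> may lack an accessible name")

audit = AccessibilityAudit()
audit.feed(SAMPLE)
for issue in audit.issues:
    print("FAIL:", issue)
```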

Why the Mandate Compounds the Problem

On its own, an imperfect tool can be managed. A developer who has time and authority can review AI suggestions, reject the problematic ones, and add accessibility features before the code goes into production. This is not an efficient workflow, but it is a workable one. The corporate mandate removes the conditions under which that review is possible.

When managers measure performance by the number of lines of code pushed to a repository, or by token consumption, the incentive is to accept suggestions quickly and move on. Adding accessibility to AI-generated code is methodical work. It frequently requires rewriting significant sections of what the tool has produced, because accessible code depends on the logical structure of the document object model, not merely on how the interface looks on screen. In a mandate-driven environment where speed is the metric of value, that rewriting is the first task to be abandoned.
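
The difference is easy to miss on screen and unmistakable in the markup. The brief sketch below (Python standard library; both snippets are hypothetical) contrasts a clickable div that merely looks like a button with a native button element. Making the first accessible means changing its structure, not its styling, which is precisely the rewriting work that a speed-only metric discourages.

```python
# Two snippets that can render identically on screen. The first is the kind of
# structure often seen in quickly accepted, generated code; the second is the
# rewrite an accessibility review would require. Both are hypothetical
# illustrations, not output from any specific assistant.
from html.parser import HTMLParser

LOOKS_LIKE_A_BUTTON = '<div class="btn" onclick="save()">Save</div>'
IS_A_BUTTON         = '<button type="button" onclick="save()">Save</button>'

NATIVELY_INTERACTIVE = {"a", "button", "input", "select", "textarea"}

class FakeWidgetCheck(HTMLParser):
    """Flags clickable elements that keyboard and screen-reader users cannot reach."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        clickable = any(name.startswith("on") for name in attrs)
        if clickable and tag not in NATIVELY_INTERACTIVE:
            if "role" not in attrs or "tabindex" not in attrs:
                self.issues.append(
                    f"<{tag}> handles clicks but is not focusable and has no role: "
                    "invisible to keyboard and assistive-technology users"
                )

for label, snippet in [("div-as-button", LOOKS_LIKE_A_BUTTON), ("native button", IS_A_BUTTON)]:
    check = FakeWidgetCheck()
    check.feed(snippet)
    print(label, "->", check.issues or "no structural issue found")
```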

Xavier observes that some employees resist mandates by gaming metrics or reverting to old methods. There is, though, a second response that is less visible and considerably more damaging: compliance that is empty. The developer uses the tool, meets the daily target, ships the code, and nobody notices that the resulting product excludes a substantial number of its intended users. The resistance Xavier describes is at least honest about itself. The quiet compliance that produces inaccessible software at scale is harder to see and far more difficult to remedy once it is embedded in production systems.

The Cognitive Dimension

There is a further layer to this problem. Anthropic's own research from 2026 examined what happens to developer understanding when AI assistance becomes the primary mode of working. Developers who relied on AI assistance scored approximately 17 percentage points lower on tests of their comprehension of code they had produced just minutes earlier, compared with developers who had coded manually. That is not a marginal difference. It represents a genuine and measurable decline in domain knowledge.

This matters for accessibility because inclusive design requires a kind of technical empathy that depends on structural understanding. A developer who builds an interface from scratch must think through how a screen reader will interpret the document object model, how a keyboard user will move through a form without a mouse, and whether every interactive element announces its purpose clearly. When the developer instead accepts and lightly edits AI-generated code without deeply examining what has been produced, this thinking does not happen in the same way. The structural decisions have already been made by the algorithm, and the developer may not fully understand them. Accessibility, which depends on precisely that structural understanding, suffers accordingly.

The Bias Pipeline

Nilesh Singit, who has written extensively on artificial intelligence and disability, describes what he calls a bias pipeline [click here to visit website]. The pipeline begins in the training data, which reflects a digital world built for non-disabled users, and runs through to the final product. At each stage, the assumption that the legitimate user is able-bodied and neurotypical is reinforced. Singit draws a careful and important distinction between accessibility and algorithmic bias. Accessibility asks whether a disabled person can use a system. Algorithmic bias asks whether the system was designed with that person in mind at all.

A system can be technically accessible, in the sense that a screen reader can navigate it, while remaining biased in its deeper assumptions: about who constitutes a normal job applicant, what a valid educational trajectory looks like, or what counts as a competent written response. An artificial intelligence recruitment tool might produce a portal that passes a Web Content Accessibility Guidelines check and yet penalise applicants for employment gaps resulting from medical treatment, or rate lower the communication style of a person with a cognitive difference. The portal is accessible. The pipeline is still biased. Singit's argument is that both problems must be addressed, and the current trajectory of AI coding mandates ensures that neither is.
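
To make the distinction concrete, the following is a deliberately simplified, hypothetical sketch of the kind of ranking rule described above. It is not drawn from any real recruitment product; it only illustrates how a penalty applied to employment gaps, whatever their cause, can sit behind an interface that passes every accessibility check.

```python
# A deliberately simplified, hypothetical scoring rule. No vendor's system is
# being reproduced; the point is only that such logic can sit behind a fully
# WCAG-conformant interface.
from dataclasses import dataclass

@dataclass
class Applicant:
    years_experience: float
    longest_gap_months: int   # e.g. time away for medical treatment

def screening_score(a: Applicant) -> float:
    score = a.years_experience * 10.0
    if a.longest_gap_months > 6:   # "employment gap" penalty
        score -= 25.0              # applied regardless of the reason for the gap
    return score

continuous = Applicant(years_experience=5, longest_gap_months=0)
post_treatment = Applicant(years_experience=5, longest_gap_months=14)

print(screening_score(continuous))      # 50.0
print(screening_score(post_treatment))  # 25.0: same experience, lower rank
```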

The Legal Obligation in India

Xavier's article does not address the Indian legal framework, but it is directly relevant to the conversation he has started. The Rights of Persons with Disabilities Act 2016 places a positive obligation on establishments, including private entities, to ensure that their goods and services are accessible. The Supreme Court of India's judgment in Rajive Raturi v. Union of India affirmed that accessibility is not a dispensation but a right. India has also ratified the UNCRPD, whose Article 9 requires states, and by extension the organisations that operate within their jurisdiction, to take concrete steps to ensure that persons with disabilities can access information and communications technology on an equal basis with others.

When a company deploys software built by developers who were coerced into using AI tools that generate inaccessible code, it is not merely making a management error. It is producing a product that may violate these obligations. The corporate language around AI mandates speaks entirely in terms of productivity, tokens, and efficiency. That language has no room for the user who cannot read a screen or navigate with a mouse. The law, however, does make room for that user. Organisations that ignore this do so at considerable legal risk, and with considerable harm to the population of more than 26 million persons with disabilities recorded in India, a number that independent assessments suggest is substantially higher in reality.

The Vibe Coding Problem

Alongside AI mandates, there has grown a practice that some in the industry call vibe coding. The term describes an approach in which the developer types a general description of what is wanted, the AI produces a complete block of code, and the developer accepts it with minimal scrutiny. It is coding by approximation and acceptance rather than by deliberate craft. It is, in a certain sense, the logical product of the productivity mandate taken to its conclusion: if the measure of performance is volume of code delivered per day, vibe coding maximises the metric.

The consequences for accessibility are predictable and well-supported by evidence. When an entire application is assembled from AI-generated blocks that the developer has not examined in detail, the structural qualities on which accessibility depends are entirely at the mercy of the training data. And as the research by Mowar and colleagues shows, that training data does not reliably produce accessible output. What is more, when a developer who has not fully understood the generated structure attempts to retrofit accessibility, the modifications can cause failures elsewhere in the code. This creates a practical disincentive: the cost of making the code accessible is too high, the risk of breaking something else is too great, and the deadline is too close. The accessibility work does not happen.

What Ought to Be Done

The argument being made here is not that artificial intelligence should have no role in software development. It will, and there is no reason why it should not. The argument is that the conditions under which AI tools are deployed determine whether they serve everyone or merely the majority.

Companies that choose to integrate AI coding assistants will need to hold those tools to accessibility standards before adoption, not after. AI coding assistants ought to be evaluated on whether their output meets the Web Content Accessibility Guidelines by default, not merely on whether the output functions visually. Developers will need to retain the time and professional authority to review suggestions critically, to reject inaccessible code, and to perform manual accessibility audits before deployment. This is incompatible with mandate regimes that measure performance purely by speed and volume.

Most importantly, disabled experts ought to be involved in the design and evaluation of AI systems from the outset. As Shew has argued, disabled individuals possess deep and practical expertise in navigating hostile architectures. That expertise is precisely what is required when building systems that are supposed to serve the full range of human users. Singit's proposed Inclusivity Stack and Disability-Smart Prompts, which embed accessibility requirements into the prompting framework of AI tools, point towards what genuine reform might look like. Until these standards are adopted and enforced, compelling developers to use AI tools is not a step towards inclusion. It is a step towards the automation of exclusion.

Conclusion

Xavier is right that forcing employees to use AI produces outcomes that no serious organisation ought to want: resistance, resentment, and metrics that measure activity rather than value. But the full cost of these mandates is not felt in the boardroom. It is felt by the person who arrives at a government portal and cannot submit a form because the keyboard navigation was never tested, by the job applicant whose screen reader cannot parse the heading structure of the recruitment interface, by the student who cannot access digital learning materials because the AI that generated them never considered that a user might need them in a different format.

These are not edge cases. They are citizens. They are rights-holders. And the legal frameworks that India has enacted and ratified exist precisely to ensure that the speed of the market does not override their claim to equal participation.

Zabardasti (coercion) with artificial intelligence tools does more than demoralise employees, as Xavier correctly observes. It embeds technoableism into the infrastructure of the digital present. True efficiency is not measured in tokens consumed or lines of code shipped. It is measured in whether the systems that are built actually work for the people who need to use them.

References

  • World Wide Web Consortium (W3C). Web Content Accessibility Guidelines (WCAG) 2.2. W3C Recommendation, 5 October 2023. https://www.w3.org/TR/WCAG22/
  • Rajive Raturi v. Union of India & Ors., Writ Petition (C) No. 243 of 2005, Supreme Court of India. Judgment dated 8 November 2024 (2024 INSC 858). https://indiankanoon.org/doc/98908321/

Tuesday, 17 February 2026

TechnoAbleism in India’s AI Moment: Why Accessibility Is Not Enough

A vibrant abstract illustration showing people with disabilities interacting with digital systems, surrounded by AI symbols, datasets, and decision interfaces, highlighting tensions between accessibility and algorithmic bias.
When artificial intelligence is built on narrow assumptions of the “normal” user, accessibility features alone cannot prevent exclusion embedded within the algorithm itself.

India’s present moment in artificial intelligence is often described in terms of innovation, opportunity, and national technological leadership. The India AI Impact Summit brings global attention to how artificial intelligence is shaping governance, development, and social transformation. 

Within these discussions, disability is increasingly visible through conversations on accessibility, assistive technologies, and digital inclusion. This attention is important. For many years, disability was largely absent from technology policy debates. Yet, a deeper issue remains insufficiently examined: accessibility alone does not ensure inclusion when artificial intelligence systems themselves are shaped by structural bias.

Accessibility and bias are frequently treated as interchangeable ideas. They are not the same. Accessibility determines whether a person with disability can use a system. Bias determines whether the system was designed with that person in mind at all. When systems are built around assumptions about a so-called normal user, accessible interfaces merely allow disabled persons to enter environments that continue to exclude them through their internal logic. The interface may be open; the opportunity may still be closed.

This structural problem becomes visible in the rapidly expanding practice often called ‘vibe coding’, where developers use generative AI tools to create websites and software through simple prompts. When an AI coding assistant is asked to generate a webpage, the default output usually prioritises visual layouts, mouse-dependent navigation, and animation-heavy design. Accessibility features such as semantic structure, keyboard navigation, or screen-reader compatibility rarely appear unless they are explicitly demanded. The system has learned that the ‘default’ user is non-disabled because that assumption dominates the data from which it learned. As these outputs are reproduced across applications and services, exclusion becomes quietly automated.

Bias also appears in the decision-making systems that increasingly shape employment, education, financial access and public services. Hiring systems that analyse speech, expression, or behavioural patterns may interpret disability-related communication styles as indicators of low confidence or low performance. Speech recognition tools often struggle with atypical speech patterns. Vision systems may fail to recognise assistive devices correctly. These outcomes are not isolated technical errors. They arise because disability is often missing from training datasets, testing environments and design teams. When disability is absent from the design stage, the system internalises non-disabled behaviour as the baseline expectation.

Another less visible dimension of bias emerges from the way artificial intelligence systems classify behaviour. Many systems are trained to recognise patterns associated with what developers consider efficient, confident or normal interaction. When human diversity falls outside those patterns, the system may interpret difference as error. Research in AI ethics repeatedly shows that classification models tend to perform poorly when training datasets do not adequately represent disabled users, leading to systematic misinterpretation of speech, movement or communication styles. 

These classification failures are rarely dramatic; they appear as small inaccuracies that accumulate over time. A speech interface that repeatedly fails to understand a user, an automated assessment tool that consistently undervalues atypical communication, or a recognition system that misidentifies assistive devices can gradually shape unequal access to opportunities. As these outcomes arise from technical assumptions rather than explicit discrimination, they often remain invisible in public debates, even as their effects are widely experienced.

These patterns together reflect what disability scholars describe as techno-ableism - the tendency of technological systems to appear empowering while quietly reinforcing assumptions that favour non-disabled ways of functioning. Technologies may expand participation on the surface, yet the intelligence embedded within them continues to treat disability as deviation rather than diversity. A person with disability may be able to access the interface, log into the system or navigate the platform, yet still face exclusion through hiring algorithms, recognition systems, or automated decision tools that were never designed around diverse bodies and minds. The experience is not exclusion from technology, but exclusion within technology itself.

Public discussions frequently present disability mainly through assistive innovation: tools that help blind users read text, applications that assist persons with mobility impairments or systems designed for specific accessibility functions. These innovations are valuable and necessary. However, when disability appears only in assistive contexts, it is positioned as a specialised technological niche rather than a structural dimension of all artificial intelligence systems. The mainstream design pipeline continues to assume the non-disabled user as the default, while disability inclusion becomes an add-on layer introduced later.

India currently stands at a formative stage in shaping its artificial intelligence ecosystem. As public digital infrastructure, governance platforms and automated service systems expand, the assumptions embedded in present design choices will influence social participation for decades. If accessibility becomes the only measure of inclusion, structural bias risks becoming embedded within the foundations of emerging technological systems. Inclusion then becomes symbolic rather than substantive: systems appear inclusive because they are accessible, yet continue to produce unequal outcomes.

From the standpoint of persons with disabilities, this distinction is deeply personal. Accessibility determines whether we can interact with the system. Bias determines whether the system recognises us as equal participants once we enter. Accessible platforms built upon biased intelligence do not remove barriers; they simply move the barrier from the interface to the algorithm.

As a disability rights practitioner working at the intersection of law, accessibility, and technology, I view the present expansion of AI discussions with cautious attention. Disability is finally visible in national technology conversations, yet the focus remains concentrated on accessibility demonstrations rather than the deeper question of structural bias. Artificial intelligence will increasingly shape employment, governance, education and everyday social participation. Whether these systems expand equality or quietly reproduce exclusion will depend not only on whether they are accessible, but also on whose experiences shape the data, assumptions, and decision rules within them.

Accessibility opens the door; fairness determines what happens after entry. Without confronting bias directly, technological progress risks creating a future that is digitally reachable yet socially unequal for many persons with disabilities. Many of the issues discussed here, including the structural relationship between accessibility and algorithmic bias, are explored in greater detail at The Bias Pipeline (https://thebiaspipeline.nileshsingit.org), where readers may engage with further analysis.

References

  • India AI Impact Summit official information portal, Government of India.
  • Coverage of summit accessibility and inclusion themes, Business Standard and related reporting.
  • United Nations and global policy discussions on AI and disability inclusion.
  • Nilesh Singit, The Bias Pipeline https://thebiaspipeline.nileshsingit.org/

(Nilesh Singit is a disability rights practitioner and accessibility strategist working at the intersection of law, governance, and AI inclusion. A Distinguished Research Fellow at the Centre for Disability Studies, NALSAR University of Law, he writes on accessibility, techno-ableism, and algorithmic bias at www.nileshsingit.org)



Moneylife.in
Published 17 February 2026



Saturday, 31 January 2026

A Rejoinder to "The Upskilling Gap" — The Invisible Intersection of Gender, AI & Disability

 To:

Ms. Shravani Prakash, Ms. Tanu M. Goyal, and Ms. Chellsea Lauhka
c/o The Hindu, Chennai / Delhi, India

Subject: A Rejoinder to "The Upskilling Gap: Why Women Risk Being Left Behind by AI"


Dear Authors,

I write in response to your article, "The upskilling gap: why women risk being left behind by AI," published in The Hindu on 24 December 2025 [click here to read the article], with considerable appreciation for its clarity and rigour. Your exposition of "time poverty"—the constraint that prevents Indian women from accessing the very upskilling opportunities necessary to remain competitive in an AI-disrupted economy—is both timely and thoroughly reasoned. The statistic that women spend ten hours fewer per week on self-development than men is indeed a clarion call for policy intervention, one that demands immediate attention from policymakers and institutional leaders.

Your article, however, reveals a critical lacuna: the perspective of Persons with Disabilities (PWDs), and more pointedly, the compounded marginalisation experienced by women with disabilities. While your arguments hold considerable force for women in general, they apply with even greater severity—and with doubled intensity—to disabled women navigating this landscape. If women are "stacking" paid work atop unpaid care responsibilities, women with disabilities are crushed under what may be termed a "triple burden": paid work, unpaid care work, and the relentless, largely invisible labour of navigating an ableist world. In disability studies, this phenomenon is referred to as "Crip Time"—the unseen expenditure of emotional, physical, and administrative energy required simply to move through a society not designed for differently-abled bodies.

1. The "Time Tax" and Crip Time: A Compounded Deficit

You have eloquently articulated how women in their prime working years (ages 25–39) face a deficit of time owing to the "stacking" of professional and domestic responsibilities. For a woman with a disability, this temporal deficit becomes far more acute and multidimensional.

Consider the following invisible labour burdens:

Administrative and Bureaucratic Labour. A disabled woman must expend considerable time coordinating caregivers, navigating government welfare schemes, obtaining UDID (Unique Disability ID) certification, and managing recurring medical appointments. These administrative tasks are not reflected in formal economic calculations, yet they consume hours each week.

Navigation Labour. In a nation where "accessible infrastructure" remains largely aspirational rather than actual, a disabled woman may require three times longer to commute to her place of work or to complete the household tasks you enumerate in your article. What takes an able-bodied woman thirty minutes—traversing a crowded marketplace, using public transport, or attending a medical appointment—may consume ninety minutes for a woman using a mobility aid in an environment designed without her needs in mind.

Emotional Labour. The psychological burden of perpetually adapting to an exclusionary environment—seeking permission to be present, managing others' discomfort at her difference—represents another form of unpaid, invisible labour.

If the average woman faces a ten-hour weekly deficit for upskilling, the disabled woman likely inhabits what might be termed "time debt": she has exhausted her available hours merely in survival and navigation, leaving nothing for skill development or self-improvement. She is not merely "time poor"; she exists in a state of temporal deficit.

2. The Trap of Technoableism: When Technology Becomes the Problem

Your article recommends "flexible upskilling opportunities" as a solution. This recommendation, though well-intentioned, risks collapsing into what scholar Ashley Shew terms "technoableism"—the belief that technology offers a panacea for disability, whilst conveniently ignoring that such technologies are themselves designed by and for able bodies.

The Inaccessibility of "Flexible" Learning. Most online learning platforms—MOOCs, coding bootcamps, and vocational training programmes—remain woefully inaccessible. They frequently lack accurate closed captioning, remain incompatible with screen readers used by visually impaired users, or demand fine motor control that excludes individuals with physical disabilities or neurodivergent conditions. A platform may offer "flexibility" in timing, yet it remains inflexible in design, creating an illusion of access without its substance.

The Burden of Adaptation Falls on the Disabled Person. Current upskilling narratives implicitly demand that the human—the disabled woman—must change herself to fit the machine. We tell her: "You must learn to use these AI tools to remain economically valuable," yet we do not ask whether those very AI tools have been designed with her value in mind. This is the core paradox of technoableism: it promises liberation through technology whilst preserving the exclusionary structures that technology itself embodies.

3. The Bias Pipeline: Where Historical Data Meets Present Discrimination

Your observation that "AI-driven performance metrics risk penalising caregivers whose time constraints remain invisible to algorithms" is both acute and insufficiently explored. Let us examine this with greater precision.

The Hiring Algorithm and the "Employment Gap." Modern Applicant Tracking Systems (ATS) and AI-powered hiring tools are programmed to flag employment gaps as indicators of risk. Consider how these gaps are interpreted differently:

  • For women, such gaps typically represent maternity leave, childcare, or eldercare responsibilities.

  • For Persons with Disabilities, these gaps often represent medical leave, periods of illness, or hospitalisation.

  • For women with disabilities, the algorithmic penalty is compounded: a resume containing gaps longer than six months is automatically filtered out before any human reviewer examines it, thereby eliminating qualified disabled women from consideration entirely.

Research audits have documented this discrimination. In one verified case, hiring algorithms flagged minority candidates disproportionately as needing human review because such candidates—inhibited by systemic bias in how they were evaluated—tended to give shorter responses during video interviews, which the algorithm interpreted as "low engagement".​

Video Interviewing Software and Facial Analysis. Until its removal in January 2021, the video interviewing platform HireVue employed facial analysis to assess candidates' suitability—evaluating eye contact, facial expressions, and speech patterns as proxies for "employability" and honesty. This system exemplified technoableism in its purest form:

  • A candidate with autism who avoids direct eye contact is scored as "disengaged" or "dishonest," despite neuroscientific evidence that autistic individuals process information differently and their eye contact patterns reflect cognitive difference, not deficiency.

  • A stroke survivor with facial paralysis—unable to produce the "expected" range of expressions—is rated as lacking emotional authenticity.

  • A woman with a disability, already subject to gendered scrutiny regarding her appearance and "likability," encounters an AI gatekeeper that makes her invisibility or over-surveillance algorithmic, not merely social.

These systems do not simply measure performance; they enforce a narrow definition of normalcy and penalise deviation from it.

4. Verified Examples: The "Double Glitch" in Action

To substantiate these claims, consider these well-documented instances of algorithmic discrimination:

Speech Recognition and Dysarthria. Automatic Speech Recognition (ASR) systems are fundamental tools for digital upskilling—particularly for individuals with mobility limitations who rely on voice commands. Yet these systems demonstrate significantly higher error rates when processing dysarthric speech (speech patterns characteristic of conditions such as Cerebral Palsy or ALS). Recent research quantifies this disparity:

  • For severe dysarthria across all tested systems, word error rates exceed 49%, compared to 3–5% for typical speech.​

  • Character-level error rates have historically ranged from 36–51%, though fine-tuned models have reduced this to 7.3%.​

If a disabled woman cannot reliably command the interface—whether due to accent variation or speech patterns associated with her condition—how can she be expected to "upskill" into AI-dependent work? The platform itself becomes a barrier.

Facial Recognition and the Intersection of Race and Gender. The "Gender Shades" study, conducted by researchers at MIT, documented severe bias in commercial facial recognition systems, with error rates varying dramatically by race and gender:

  • Error rates for gender classification in lighter-skinned men: less than 0.8%

  • Error rates for gender classification in darker-skinned women: 20.8% to 34.7%​

Amazon Rekognition similarly misclassified 31 percent of darker-skinned women. For a disabled woman of colour seeking employment or accessing digital services, facial recognition systems compound her marginalisation: she is simultaneously rendered invisible (failed detection) or hyper-surveilled (flagged as suspicious).​

The Absence of Disability-Disaggregated Data. Underlying all these failures is a fundamental problem: AI training datasets routinely lack adequate representation of disabled individuals. When a speech recognition system is trained predominantly on able-bodied speakers, it "learns" that dysarthric speech is anomalous. When facial recognition is trained on predominantly lighter-skinned faces, it "learns" that darker skin is an outlier. Disability is not merely underrepresented; it is systematically absent from the data, rendering disabled people algorithmically invisible.

5. Toward Inclusive Policy: Dismantling the Bias Pipeline

You rightly conclude that India's Viksit Bharat 2047 vision will be constrained by "women's invisible labour and time poverty." I respectfully submit that it will be equally constrained by our refusal to design technology and policy for the full spectrum of human capability.

True empowerment cannot mean simply "adding jobs," as your article notes. Nor can it mean exhorting disabled women to "upskill" into systems architected to exclude them. Rather, it requires three concrete interventions:

First, Inclusive Data Collection. Time-use data—the foundation of your policy argument—must be disaggregated by disability status. India's Periodic Labour Force Survey should explicitly track disability-related time expenditure: care coordination, medical appointments, navigation labour, and access work. Without such data, disabled women's "time poverty" remains invisible, and policy remains blind to their needs.

Second, Accessibility by Design, Not Retrofit. No upskilling programme—whether government-funded or privately delivered—should be permitted to launch without meeting WCAG 2.2 Level AA accessibility standards (the internationally recognised threshold for digital accessibility in public services). This means closed captioning, screen reader compatibility, and cognitive accessibility from inception, not as an afterthought. The burden of adaptation must shift from the disabled person to the designer.​

Third, Mandatory Algorithmic Audits for Intersectional Bias. Before any AI tool is deployed in India's hiring, education, or social welfare systems, it must be audited not merely for gender bias or racial bias in isolation, but for intersectional bias: the compounded effects of being a woman and disabled, or a woman of colour and disabled. Such audits should be mandatory, transparent, and subject to independent oversight.

Conclusion: A Truly Viksit Bharat

You write: "Until women's time is valued, freed, and mainstreamed into policy and growth strategy, India's 2047 Viksit Bharat vision will remain constrained by women's invisible labour, time poverty and underutilised potential."

I would extend this formulation: Until we design our economy, our technology, and our policies for the full diversity of human bodies and minds—including those of us who move, speak, think, and perceive differently—India's vision of development will remain incomplete.

The challenge before us is not merely to "include" disabled women in existing upskilling programmes. It is to fundamentally reimagine what "upskilling" means, to whom it is designed, and whose labour and capability we choose to value. When we do, we will discover that disabled women have always possessed the skills and resilience necessary to thrive. Our task is simply to remove the barriers we have constructed.

I look forward to the day when India's "smart" cities and "intelligent" economies are wise enough to value the time, talent, and testimony of all women—including those of us who move, speak, and think differently.

Yours faithfully,

Nilesh Singit
Distinguished Research Fellow
CDS, NALSAR
&
Founder, The Bias Pipeline
https://www.nileshsingit.org/

Friday, 26 December 2025

Prototype — Accessible to Whom? Legible to What?

 

Abstract

Artificial Intelligence (AI) has transformed the terrain of possibility for assistive technology and inclusive design, but continues to perpetuate complex forms of exclusion rooted in legibility, bias, and tokenism. This paper critiques current paradigms of AI prototyping that centre “legibility to machines” over accessibility for disabled persons, arguing for a radical disability-led approach. Drawing on international law, empirical studies, and design scholarship, the analysis demonstrates why prototyping is neither neutral nor technical, but a deeply social and political process. Building from case studies in recruiting, education, and healthcare technology failures, this work exposes structural biases in training, design, and implementation—challenging designers and policymakers to move from “designing for” and “designing with” to “designing from” disability and difference.

Introduction

Prototyping is celebrated in engineering and design as a space for creativity, optimism, and risk-taking—a laboratory for the future. Yet, for countless disabled persons, the prototype is also where inclusion begins… or ends. For them, optimism is often tempered by the unspoken reality that exclusion most often arrives early and quietly, disguised as technical “constraints,” market “priorities,” or supposedly “objective” code. When prototyping occurs, it rarely asks: accessible to whom, legible to what?

This question—so simple, so foundational—is what this paper interrogates. The rise of Artificial Intelligence has intensified the stakes because AI prototypes increasingly determine who is rendered visible and included in society’s privileges. Legibility, not merely accessibility, is becoming the deciding filter; if one’s body, voice, or expression cannot be rendered into a dataset “comprehensible” to AI, one may not exist in the eyes of the system. Thus, we confront a new and urgent precipice: machinic inclusion, machinic exclusion.

This work expands the ideas presented in recent disability rights speeches and debates, critically interrogating how inclusive design must transform both theory and practice in the age of AI. It re-interprets accessibility as a form of knowledge and participation—never a technical afterthought.

Accessibility as Relational, Not Technical

Contemporary disability studies and the lived experiences of activists reject the notion that accessibility is a mere checklist or add-on. Aimi Hamraie suggests that “accessibility is not a technical feature but a relationship—a continuous negotiation between bodies, spaces, and technologies.” Just as building a ramp after a staircase is an act of remediation rather than inclusion, most AI prototyping seeks to retrofit accessibility, arguing it is too late, too difficult, or too expensive to embed inclusiveness from the outset.

Crucially, these arguments reflect broader epistemologies: those who possess the power to design, define the terms of recognition. Accessibility is not simply about “opening the door after the fact,” but questioning why the door was placed in an inaccessible position to begin with.

This critique leads us to re-examine prototyping practices through a disability lens, asking not only “who benefits” but also “who is recognised.” Evidence throughout the AI industry reveals a persistent confusion between accessibility for disabled persons and legibility for machines, a theme critically examined in subsequent sections.

Legibility and the Algorithmic Gaze

Legibility, distinct from accessibility, refers to the capacity of a system to recognise, process, and make sense of a body, voice, or action. Within the context of AI, non-legible phenomena—those outside dominant training data—simply vanish. People with non-standard gait, speech, or facial expressions are “read” by the algorithm as errors or outliers.

What are the implications of placing legibility before accessibility?

Speech-recognition models routinely misinterpret dysarthric voices, excluding those with neurological disabilities. Facial recognition algorithms have misclassified disabled expressions as “threats” or “system errors,” because their datasets contain few, if any, disabled exemplars. In the workplace, résumé-screening AI flags gaps or “unusual experience,” disproportionately rejecting those with disability-induced employment breaks. In education, proctoring platforms flag blind students for “cheating”, unable to process their lack of eye gaze at the screen as a legitimate variance.

These failures do not arise from random error. They are products of a pipeline formed by unconscious value choices made at every stage: training, selection, who participates, and who is imagined as the “user.”

In effect, machinic inclusiveness transforms the ancient bureaucracy of bias from paper to silicon. The new filter is not the form but the invisible code.

The Bias Pipeline: What Goes In, Comes Out Biased

Bias in AI does not merely appear at the end of the process; it is present at every decision point. One stark experiment submitted pairs of otherwise identical résumés to recruitment-screening platforms: one indicated a “Disability Leadership Award” or advocacy involvement, the other did not. The algorithm ranked the “non-disability” version higher, asserting that highlighting disability meant “reduced leadership emphasis,” “focus diverted from core job responsibilities,” or “potential risk.”

This is not insignificant. Empirical studies have reproduced such results across tech, finance, and education, showing systemic discrimination by design. Qualified disabled applicants are penalised for skills, achievements, and community roles that are undervalued or alien to training data.

Much as ethnographic research illuminated the “audit culture” in public welfare (where bureaucracy performed compliance rather than delivered services), so too does “audit theatre” manifest in AI. Firms invite disabled people to validate accessibility only after the design is final. In true co-design, disabled persons must participate from inception, defining criteria and metrics on equal footing. This gap—between performance and participation—is the site where bias flourishes.

The Trap of Tokenism

Tokenism is an insidious and common problem in social design. In disability inclusion, it refers to the symbolic engagement of disabled persons for validation, branding, or optics—rather than for genuine collaboration.

Audit theatre, in AI, occurs when disabled people are surveyed, “consulted,” or reviewed, but not invited into the process of design or prototyping. The UK’s National Disability Survey was struck down for failing to meaningfully involve stakeholders. Even the European Union’s AI Act, lauded globally for progressive accessibility clauses, risks tokenism by mandating involvement but failing to embed robust enforcement mechanisms.

Most AI developers receive little or no formal training in accessibility. When disability emerges in their worldview, it is cast in terms of medical correction—not lived expertise. Real participation remains rare.

Tokenism has cascading effects: it perpetuates design choices rooted in non-disabled experience, licenses shallow metrics, and closes the feedback loop on real inclusion.

Case Studies: Real-World Failures in Algorithmic Accessibility

AI Hiring Platforms and the “Disability Penalty”

Automated CV-screening tools systematically rank curricula vitae containing disability-associated terms lower, even when qualifications are otherwise stronger. Companies like Amazon famously scrapped AI recruitment platforms after discovering they penalised women, but similar audits for disability bias are scarce. Companies using video interview platforms have reported that candidates with stroke, autism, or other disability-related facial expressions score lower due to misinterpretation.

Online Proctoring and Educational Technology in India

During the COVID-19 pandemic, the acceleration of edtech platforms in India promised transformation. Yet, blind and low-vision students were flagged as “cheating” for not making “required” eye contact with their devices. Zoom and Google Meet upgraded accessibility features, but failed to address core gaps in their proctoring models.

Reports from university students showed that requests for alternative assessments or digital accommodations were often denied on the grounds of technical infeasibility.

Healthcare Algorithms and Diagnostic Bias

Diagnostic risk scores and triaging algorithms trained on narrow datasets exclude non-normative disability profiles. Health outcomes for persons with rare, chronic, or atypical disabilities are mischaracterised, and recommended interventions are mismatched.

Each failure traces back to inaccessible prototyping.

Disability-Led AI Prototyping

If the problem lies in who defines legibility, the solution lies in who leads the prototype. Disability-led design reframes accessibility—not as a requirement for “special” needs but as expertise that enriches technology. It asks not “How can you be fixed?” but “What knowledge does your experience bring to designing the machine?”

Major initiatives are emerging. Google’s Project Euphonia enlists disabled participants to re-train speech models for atypical voices, but raises ethical debates on data ownership, exploitation, and who benefits. More authentic still are community-led mapping projects where disabled coders and users co-create AI mapping tools for urban navigation, workspace accessibility, and independent living. These collaborations move slowly but produce lasting change.

When accessibility is led by disabled persons, reciprocity flourishes: machine and user learn from each other, not simply predict and consume.

Sara Hendren argues, “design is not a solution, it is an invitation.” Where disability leads, the invitation becomes mutual—technology contorts to better fit lives, not the reverse.

Policy, Law, and Regulatory Gaps

The European Union’s AI Act is rightly lauded for Article 16 (mandating accessibility for high-risk AI systems) and Article 5 (forbidding exploitation of disability-related vulnerabilities), as well as public consultation. Yet, the law lacks actionable requirements for collecting disability-representative data—and overlooks the intersection of accessibility, data ownership, and research ethics.

India’s National Strategy for Artificial Intelligence, along with “AI for Inclusive Societal Development,” claims “AI for All” but omits specific protections, data models, or actionable recommendations for disabled persons—this despite the Supreme Court’s Rajive Raturi judgment upholding accessibility as a fundamental right. Implementation of the Rights of Persons with Disabilities Act, 2016, remains loose, and enforcement is sporadic.

The United States’ ADA and Section 508 have clearer language, but encounter their own enforcement challenges and retrofitting headaches.

Ultimately, policy remains disconnected from practice. Prototyping and design must close the gap—making legal theory and real inclusiveness reciprocal.

Intersectionality: Legibility Across Difference

Disability is never experienced in isolation: it intersects with gender, caste, race, age, and class. Women with disabilities face compounded discrimination in hiring, healthcare, and data representation. Caste-based exclusions are rarely coded into AI training practices, creating models that serve only dominant groups.

For example, the exclusion of vernacular languages in text-to-speech software leaves vast rural disabled communities voiceless in both policy and practical tech offerings. Ongoing work by Indian activists and community innovators seeks to produce systems and data resources that represent the full spectrum of disabled lives, but faces resistance from resource constraints, commercial priorities, and a lack of institutional support.

Rethinking the Fundamentals: Prototyping as Epistemic Justice

Epistemic justice—ensuring that all knowledge, experience, and ways of living are valued in the design of social and technical systems—is both a theoretical and a practical necessity in AI. Bias springs not only from bad data or oversight but by failing to recognise disabled lives as valid sources of expertise.

Key steps for epistemic justice in prototyping include:

  • Centre disabled expertise from project inception, defining metrics, incentives, and feedback loops.

  • Use disability as a source of innovation, not just compliance: leverage universal design to produce systems more robust for all users.

  • Address intersectionality in datasets, training and testing for compounded bias across race, gender, language, and class.

  • Create rights-based governance in tech companies, embedding accessibility into KPIs and public review.

Recommendations: Designing From Disability

The future of inclusive AI depends on three principal shifts:

  1. From designing for to designing with, and ultimately designing from: genuine co-design, not audit theatre, where disabled participants shape technology at every stage.

  2. From accessibility as compliance to accessibility as knowledge: training developers, engineers and policymakers to value lived disability experience.

  3. From compliance to creativity: treating disability as “design difference”—a starting point for innovation, not merely a deficit.

International law and national policy must recognise the lived expertise of disability communities. Without this, accessibility remains a perpetual afterthought to legibility.


Conclusion

Accessible to whom, legible to what? This question reverberates through every level of prototype, product, and policy.

If accessibility is left to the end, if legibility for machines becomes the touchstone, humanity is reduced, difference ignored. When disability leads the design journey, technology is not just machine-readable; it becomes human-compatible.

The future is not just about teaching machines to read disabled lives—but about allowing disabled lives to rewrite what machines can understand.


References

  • Hamraie, Aimi. Building Access: Universal Design and the Politics of Disability. University of Minnesota Press, 2017.

  • Barocas, Solon, Moritz Hardt, and Arvind Narayanan. “Fairness and Machine Learning.” fairmlbook.org, 2019.

  • Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1–15.

  • Leavy, Siobhan, Eugenia Siapera, Bethany Fernandez, and Kai Zhang. “They Only Care to Show Us the Wheelchair: Disability Representation in Text-to-Image AI Models.” Proceedings of the 2024 ACM FAccT.

  • Hendren, Sara. What Can a Body Do? How We Meet the Built World. Riverhead Books, 2020.

  • National Strategy for Artificial Intelligence, NITI Aayog, Government of India, 2018.

  • Rajive Raturi v. Union of India, Writ Petition (C) No. 243 of 2005, Supreme Court of India.

  • European Parliament and Council. Regulation (EU) 2024/1689 (Artificial Intelligence Act), 2024.

  • Google AI Blog. “Project Euphonia: Helping People with Speech Impairments.” May 2019.

  • “Making AI Work for Everyone,” Google Developers, 2022.

  • Dastin, Jeffrey. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women.” Reuters, 10 October 2018.

  • United Kingdom High Court, National Disability Survey ruling, 2023.

  • Ahuja, Nita. “Online Proctoring as Algorithmic Injustice: Blind Students in Indian EdTech.” Journal of Disability Studies, vol. 12, no. 2 (2022): 151–177.

  • United Nations, Convention on the Rights of Persons with Disabilities, Resolution 61/106 (2006).

  • [Additional references on intersectionality, design theory, empirical studies, Indian law, US/EU regulation, and case material]
