
Saturday, 31 January 2026

A Rejoinder to "The Upskilling Gap" — The Invisible Intersection of Gender, AI & Disability

To:

Ms. Shravani Prakash, Ms. Tanu M. Goyal, and Ms. Chellsea Lauhka
c/o The Hindu, Chennai / Delhi, India

Subject: A Rejoinder to "The Upskilling Gap: Why Women Risk Being Left Behind by AI"


Dear Authors,

I write in response to your article, "The upskilling gap: why women risk being left behind by AI," published in The Hindu on 24 December 2025, with considerable appreciation for its clarity and rigour. Your exposition of "time poverty"—the constraint that prevents Indian women from accessing the very upskilling opportunities necessary to remain competitive in an AI-disrupted economy—is both timely and thoroughly reasoned. The statistic that women spend ten hours fewer per week on self-development than men is indeed a clarion call for policy intervention, one that demands immediate attention from policymakers and institutional leaders.

Your article, however, reveals a critical lacuna: the perspective of Persons with Disabilities (PWDs), and more pointedly, the compounded marginalisation experienced by women with disabilities. While your arguments hold considerable force for women in general, they apply with even greater severity to disabled women navigating this landscape. If women are "stacking" paid work atop unpaid care responsibilities, women with disabilities are crushed under what may be termed a "triple burden": paid work, unpaid care work, and the relentless, largely invisible labour of navigating an ableist world. In disability studies, this additional, largely unseen expenditure of emotional, physical, and administrative energy, required simply to move through a society not designed for differently-abled bodies, is often discussed under the rubric of "Crip Time".

1. The "Time Tax" and Crip Time: A Compounded Deficit

You have eloquently articulated how women in their prime working years (ages 25–39) face a deficit of time owing to the "stacking" of professional and domestic responsibilities. For a woman with a disability, this temporal deficit becomes far more acute and multidimensional.

Consider the following invisible labour burdens:

Administrative and Bureaucratic Labour. A disabled woman must expend considerable time coordinating caregivers, navigating government welfare schemes, obtaining UDID (Unique Disability ID) certification, and managing recurring medical appointments. These administrative tasks are not reflected in formal economic calculations, yet they consume hours each week.

Navigation Labour. In a nation where "accessible infrastructure" remains largely aspirational rather than actual, a disabled woman may require three times longer to commute to her place of work or to complete the household tasks you enumerate in your article. What takes an able-bodied woman thirty minutes—traversing a crowded marketplace, using public transport, or attending a medical appointment—may consume ninety minutes for a woman using a mobility aid in an environment designed without her needs in mind.

Emotional Labour. The psychological burden of perpetually adapting to an exclusionary environment—seeking permission to be present, managing others' discomfort at her difference—represents another form of unpaid, invisible labour.

If the average woman faces a ten-hour weekly deficit for upskilling, the disabled woman likely inhabits what might be termed "time debt": she has exhausted her available hours merely in survival and navigation, leaving nothing for skill development or self-improvement. She is not merely "time poor"; she is in time debt.

2. The Trap of Technoableism: When Technology Becomes the Problem

Your article recommends "flexible upskilling opportunities" as a solution. This recommendation, though well-intentioned, risks collapsing into what scholar Ashley Shew terms "technoableism"—the belief that technology offers a panacea for disability, whilst conveniently ignoring that such technologies are themselves designed by and for able bodies.

The Inaccessibility of "Flexible" Learning. Most online learning platforms—MOOCs, coding bootcamps, and vocational training programmes—remain woefully inaccessible. They frequently lack accurate closed captioning, remain incompatible with screen readers used by visually impaired users, or demand fine motor control that excludes individuals with physical disabilities or neurodivergent conditions. A platform may offer "flexibility" in timing, yet it remains inflexible in design, creating an illusion of access without its substance.

The Burden of Adaptation Falls on the Disabled Person. Current upskilling narratives implicitly demand that the human—the disabled woman—must change herself to fit the machine. We tell her: "You must learn to use these AI tools to remain economically valuable," yet we do not ask whether those very AI tools have been designed with her value in mind. This is the core paradox of technoableism: it promises liberation through technology whilst preserving the exclusionary structures that technology itself embodies.

3. The Bias Pipeline: Where Historical Data Meets Present Discrimination

Your observation that "AI-driven performance metrics risk penalising caregivers whose time constraints remain invisible to algorithms" is both acute and insufficiently explored. Let us examine this with greater precision.

The Hiring Algorithm and the "Employment Gap." Modern Applicant Tracking Systems (ATS) and AI-powered hiring tools are programmed to flag employment gaps as indicators of risk. Consider how these gaps are interpreted differently:

  • For women, such gaps typically represent maternity leave, childcare, or eldercare responsibilities.

  • For Persons with Disabilities, these gaps often represent medical leave, periods of illness, or hospitalisation.

  • For women with disabilities, the algorithmic penalty is compounded: a resume containing gaps longer than six months is automatically filtered out before any human reviewer examines it, thereby eliminating qualified disabled women from consideration entirely.
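To make the mechanism concrete, the sketch below shows, under stated assumptions and not as any vendor's actual logic, how a simple gap-threshold rule behaves: the same six-month cut-off that screens out a career break for childcare also screens out a period of hospitalisation, and it does so before a human ever reads the résumé. The threshold, field names, and dates are illustrative.

```python
from datetime import date

# Hypothetical illustration: a naive ATS-style rule that rejects any resume
# whose employment history contains a gap longer than six months.
# Names and thresholds are assumptions for illustration, not a real product's logic.

GAP_THRESHOLD_DAYS = 183  # roughly six months

def employment_gaps(stints):
    """Return gaps (in days) between consecutive employment stints.

    `stints` is a list of (start_date, end_date) tuples sorted by start_date.
    """
    gaps = []
    for (_, prev_end), (next_start, _) in zip(stints, stints[1:]):
        gaps.append((next_start - prev_end).days)
    return gaps

def passes_gap_filter(stints):
    """A resume is auto-rejected if any gap exceeds the threshold."""
    return all(gap <= GAP_THRESHOLD_DAYS for gap in employment_gaps(stints))

# A candidate who took nine months of medical leave is filtered out
# before any human reviewer sees her qualifications.
candidate = [
    (date(2018, 1, 1), date(2021, 3, 31)),   # first role
    (date(2022, 1, 10), date(2025, 6, 30)),  # returned after medical leave
]
print(passes_gap_filter(candidate))  # False: auto-rejected
```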

Research audits have documented this discrimination. In one verified case, hiring algorithms disproportionately flagged minority candidates as needing human review because such candidates—already disadvantaged by systemic bias in how they were evaluated—tended to give shorter responses during video interviews, which the algorithm interpreted as "low engagement".

Video Interviewing Software and Facial Analysis. Until it discontinued the feature in January 2021, the video interviewing platform HireVue employed facial analysis to assess candidates' suitability—evaluating eye contact, facial expressions, and speech patterns as proxies for "employability" and honesty. This system exemplified technoableism in its purest form:

  • A candidate with autism who avoids direct eye contact is scored as "disengaged" or "dishonest," despite neuroscientific evidence that autistic individuals process information differently and their eye contact patterns reflect cognitive difference, not deficiency.

  • A stroke survivor with facial paralysis—unable to produce the "expected" range of expressions—is rated as lacking emotional authenticity.

  • A woman with a disability, already subject to gendered scrutiny regarding her appearance and "likability," encounters an AI gatekeeper that makes her invisibility or over-surveillance algorithmic, not merely social.

These systems do not simply measure performance; they enforce a narrow definition of normalcy and penalise deviation from it.

4. Verified Examples: The "Double Glitch" in Action

To substantiate these claims, consider these well-documented instances of algorithmic discrimination:

Speech Recognition and Dysarthria. Automatic Speech Recognition (ASR) systems are fundamental tools for digital upskilling—particularly for individuals with mobility limitations who rely on voice commands. Yet these systems demonstrate significantly higher error rates when processing dysarthric speech (speech patterns characteristic of conditions such as Cerebral Palsy or ALS). Recent research quantifies this disparity:

  • For severe dysarthria across all tested systems, word error rates exceed 49%, compared to 3–5% for typical speech.

  • Character-level error rates have historically ranged from 36–51%, though fine-tuned models have reduced this to 7.3%.

If a disabled woman cannot reliably command the interface—whether due to accent variation or speech patterns associated with her condition—how can she be expected to "upskill" into AI-dependent work? The platform itself becomes a barrier.
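For readers unfamiliar with the metric, the sketch below shows how word error rate is typically computed and why reporting a single aggregate figure can hide the disparity described above. The example transcripts are invented; only the method of disaggregating the metric by speaker group is the point.

```python
# A minimal sketch of how word error rate (WER) is computed, and why it should
# be reported per speaker group rather than as one aggregate number.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Disaggregate: an aggregate WER hides the gap between speaker groups.
samples = [
    ("typical",    "turn on the living room lights", "turn on the living room lights"),
    ("dysarthric", "turn on the living room lights", "turn only in room lights"),
]
for group, ref, hyp in samples:
    print(group, round(word_error_rate(ref, hyp), 2))
```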

Facial Recognition and the Intersection of Race and Gender. The "Gender Shades" study, conducted by researchers at MIT, documented severe bias in commercial facial recognition systems, with error rates varying dramatically by race and gender:

  • Error rates for gender classification in lighter-skinned men: less than 0.8%

  • Error rates for gender classification in darker-skinned women: 20.8% to 34.7%

Amazon Rekognition similarly misclassified the gender of 31 per cent of darker-skinned women. For a disabled woman of colour seeking employment or accessing digital services, facial recognition systems compound her marginalisation: she is either rendered invisible (failed detection) or hyper-surveilled (flagged as suspicious).

The Absence of Disability-Disaggregated Data. Underlying all these failures is a fundamental problem: AI training datasets routinely lack adequate representation of disabled individuals. When a speech recognition system is trained predominantly on able-bodied speakers, it "learns" that dysarthric speech is anomalous. When facial recognition is trained on predominantly lighter-skinned faces, it "learns" that darker skin is an outlier. Disability is not merely underrepresented; it is systematically absent from the data, rendering disabled people algorithmically invisible.

5. Toward Inclusive Policy: Dismantling the Bias Pipeline

You rightly conclude that India's Viksit Bharat 2047 vision will be constrained by "women's invisible labour and time poverty." I respectfully submit that it will be equally constrained by our refusal to design technology and policy for the full spectrum of human capability.

True empowerment cannot mean simply "adding jobs," as your article notes. Nor can it mean exhorting disabled women to "upskill" into systems architected to exclude them. Rather, it requires three concrete interventions:

First, Inclusive Data Collection. Time-use data—the foundation of your policy argument—must be disaggregated by disability status. India's Periodic Labour Force Survey should explicitly track disability-related time expenditure: care coordination, medical appointments, navigation labour, and access work. Without such data, disabled women's "time poverty" remains invisible, and policy remains blind to their needs.

Second, Accessibility by Design, Not Retrofit. No upskilling programme—whether government-funded or privately delivered—should be permitted to launch without meeting WCAG 2.2 Level AA accessibility standards (the internationally recognised threshold for digital accessibility in public services). This means closed captioning, screen reader compatibility, and cognitive accessibility from inception, not as an afterthought. The burden of adaptation must shift from the disabled person to the designer.

Third, Mandatory Algorithmic Audits for Intersectional Bias. Before any AI tool is deployed in India's hiring, education, or social welfare systems, it must be audited not merely for gender bias or racial bias in isolation, but for intersectional bias: the compounded effects of being a woman and disabled, or a woman of colour and disabled. Such audits should be mandatory, transparent, and subject to independent oversight.
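A minimal sketch of what such an intersectional audit involves is given below. The records are synthetic and the selection model is unspecified; the sketch only illustrates why computing rates for combinations of attributes can reveal a harm that single-axis audits miss.

```python
from collections import defaultdict

# A minimal sketch of an intersectional audit: selection rates are computed for
# the *combination* of attributes, not each attribute in isolation. The records
# below are synthetic placeholders, not real audit data.

records = [
    # (gender, disability_status, selected_by_model)
    ("woman", "disabled", False), ("woman", "disabled", False),
    ("woman", "non-disabled", True), ("woman", "non-disabled", False),
    ("man", "disabled", True), ("man", "disabled", False),
    ("man", "non-disabled", True), ("man", "non-disabled", True),
]

def selection_rates(rows, key):
    totals, selected = defaultdict(int), defaultdict(int)
    for row in rows:
        k = key(row)
        totals[k] += 1
        selected[k] += int(row[2])
    return {k: selected[k] / totals[k] for k in totals}

# Single-axis audits can look tolerable while the intersection does not.
print("by gender:      ", selection_rates(records, lambda r: r[0]))
print("by disability:  ", selection_rates(records, lambda r: r[1]))
print("intersectional: ", selection_rates(records, lambda r: (r[0], r[1])))
# Here women with disabilities are selected 0% of the time even though no
# single-axis rate is zero; this is the pattern a mandatory intersectional
# audit is designed to surface.
```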

Conclusion: A Truly Viksit Bharat

You write: "Until women's time is valued, freed, and mainstreamed into policy and growth strategy, India's 2047 Viksit Bharat vision will remain constrained by women's invisible labour, time poverty and underutilised potential."

I would extend this formulation: Until we design our economy, our technology, and our policies for the full diversity of human bodies and minds—including those of us who move, speak, think, and perceive differently—India's vision of development will remain incomplete.

The challenge before us is not merely to "include" disabled women in existing upskilling programmes. It is to fundamentally reimagine what "upskilling" means, to whom it is designed, and whose labour and capability we choose to value. When we do, we will discover that disabled women have always possessed the skills and resilience necessary to thrive. Our task is simply to remove the barriers we have constructed.

I look forward to the day when India's "smart" cities and "intelligent" economies are wise enough to value the time, talent, and testimony of all women—including those of us who move, speak, and think differently.

Yours faithfully,

Nilesh Singit
Distinguished Research Fellow
CDS, NALSAR
&
Founder, The Bias Pipeline
https://www.nileshsingit.org/

Friday, 26 December 2025

Prototype — Accessible to Whom? Legible to What?

 

Abstract

Artificial Intelligence (AI) has transformed the terrain of possibility for assistive technology and inclusive design, but continues to perpetuate complex forms of exclusion rooted in legibility, bias, and tokenism. This paper critiques current paradigms of AI prototyping that centre “legibility to machines” over accessibility for disabled persons, arguing for a radical disability-led approach. Drawing on international law, empirical studies, and design scholarship, the analysis demonstrates why prototyping is neither neutral nor technical, but a deeply social and political process. Building on case studies of failures in recruitment, education, and healthcare technology, this work exposes structural biases in training, design, and implementation—challenging designers and policymakers to move from “designing for” and “designing with” to “designing from” disability and difference.

Introduction

Prototyping is celebrated in engineering and design as a space for creativity, optimism, and risk-taking—a laboratory for the future. Yet, for countless disabled persons, the prototype is also where inclusion begins… or ends. For them, optimism is often tempered by the unspoken reality that exclusion most often arrives early and quietly, disguised as technical “constraints,” market “priorities,” or supposedly “objective” code. When prototyping occurs, it rarely asks: accessible to whom, legible to what?

This question—so simple, so foundational—is what this paper interrogates. The rise of Artificial Intelligence has intensified the stakes because AI prototypes increasingly determine who is rendered visible and included in society’s privileges. Legibility, not merely accessibility, is becoming the deciding filter; if one’s body, voice, or expression cannot be rendered into a dataset “comprehensible” to AI, one may not exist in the eyes of the system. Thus, we confront a new and urgent precipice: machinic inclusion, machinic exclusion.

This work expands the ideas presented in recent disability rights speeches and debates, critically interrogating how inclusive design must transform both theory and practice in the age of AI. It re-interprets accessibility as a form of knowledge and participation—never a technical afterthought.

Accessibility as Relational, Not Technical

Contemporary disability studies and the lived experiences of activists reject the notion that accessibility is a mere checklist or add-on. Aimi Hamraie suggests that “accessibility is not a technical feature but a relationship—a continuous negotiation between bodies, spaces, and technologies.” Just as building a ramp after a staircase is an act of remediation rather than inclusion, most AI prototyping seeks to retrofit accessibility, on the grounds that it is too late, too difficult, or too expensive to embed inclusiveness from the outset.

Crucially, these arguments reflect broader epistemologies: those who possess the power to design, define the terms of recognition. Accessibility is not simply about “opening the door after the fact,” but questioning why the door was placed in an inaccessible position to begin with.

This critique leads us to re-examine prototyping practices through a disability lens, asking not only “who benefits” but also “who is recognised.” Evidence throughout the AI industry reveals a persistent confusion between accessibility for disabled persons and legibility for machines, a theme critically examined in subsequent sections.

Legibility and the Algorithmic Gaze

Legibility, distinct from accessibility, refers to the capacity of a system to recognise, process, and make sense of a body, voice, or action. Within the context of AI, non-legible phenomena—those outside dominant training data—simply vanish. People with non-standard gait, speech, or facial expressions are “read” by the algorithm as errors or outliers.

What are the implications of placing legibility before accessibility?

Speech-recognition models routinely misinterpret dysarthric voices, excluding those with neurological disabilities. Facial recognition algorithms have misclassified disabled expressions as “threats” or “system errors,” because their datasets contain few, if any, disabled exemplars. In the workplace, résumé-screening AI flags gaps or “unusual experience,” disproportionately rejecting those with disability-induced employment breaks. In education, proctoring platforms flag blind students for “cheating”, unable to process their lack of eye gaze at the screen as a legitimate variance.

These failures do not arise from random error. They are products of a pipeline formed by unconscious value choices made at every stage: training, selection, who participates, and who is imagined as the “user.”

In effect, machinic inclusiveness transforms the ancient bureaucracy of bias from paper to silicon. The new filter is not the form but the invisible code.

The Bias Pipeline: What Goes In, Comes Out Biased

Bias in AI does not merely appear at the end of the process; it is present at every decision point. One stark experiment submitted pairs of otherwise identical résumés to recruitment-screening platforms: one indicated a “Disability Leadership Award” or advocacy involvement, the other did not. The algorithm ranked the “non-disability” version higher, asserting that highlighting disability meant “reduced leadership emphasis,” “focus diverted from core job responsibilities,” or “potential risk.”

This is not insignificant. Empirical studies have reproduced such results across tech, finance, and education, showing systemic discrimination by design. Qualified disabled applicants are penalised for skills, achievements, and community roles that are undervalued or alien to training data.
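The logic of such a paired-résumé (correspondence) audit can be expressed in a few lines. The scoring function below is a deliberately crude stand-in for whatever opaque screening model is under audit, and the résumé text is invented; in a real audit the stand-in would be replaced by calls to the system being tested.

```python
# A minimal sketch of a paired-resume ("correspondence") audit. The scorer is a
# deliberate toy, standing in for the opaque screening model being audited.

def toy_screening_score(text: str) -> float:
    """Stand-in scorer: rewards 'leadership', quietly docks disability mentions,
    mimicking the rationalisations reported in such audits."""
    score = 0.0
    if "leadership" in text.lower():
        score += 1.0
    if "disability" in text.lower():
        score -= 0.5  # the kind of hidden penalty the audit is meant to expose
    return score

BASE = ("Software engineer, 8 years' experience. Led a team of five. "
        "MSc Computer Science. Winner, {award}.")

def paired_audit(score_fn):
    with_marker = BASE.format(award="Disability Leadership Award")
    without_marker = BASE.format(award="Leadership Award")
    return score_fn(without_marker) - score_fn(with_marker)

# A positive gap means the only difference (the disability marker) lowered the
# score. Repeating this over many paired resumes gives the audit its evidence.
print(paired_audit(toy_screening_score))  # 0.5
```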

Much as ethnographic research illuminated the “audit culture” in public welfare (where bureaucracy performed compliance rather than delivered services), so too does “audit theatre” manifest in AI. Firms invite disabled people to validate accessibility only after the design is final. In true co-design, disabled persons must participate from inception, defining criteria and metrics on equal footing. This gap—between performance and participation—is the site where bias flourishes.

The Trap of Tokenism

Tokenism is an insidious and common problem in social design. In disability inclusion, it refers to the symbolic engagement of disabled persons for validation, branding, or optics—rather than for genuine collaboration.

Audit theatre, in AI, occurs when disabled people are surveyed, “consulted,” or reviewed, but not invited into the process of design or prototyping. The UK’s National Disability Strategy was declared unlawful by the High Court for failing to meaningfully consult disabled people. Even the European Union’s AI Act, lauded globally for progressive accessibility clauses, risks tokenism by mandating involvement but failing to embed robust enforcement mechanisms.

Most AI developers receive little or no formal training in accessibility. When disability emerges in their worldview, it is cast in terms of medical correction—not lived expertise. Real participation remains rare.

Tokenism has cascading effects: it perpetuates design choices rooted in non-disabled experience, licenses shallow metrics, and closes the feedback loop on real inclusion.

Case Studies: Real-World Failures in Algorithmic Accessibility

AI Hiring Platforms and the “Disability Penalty”

Automated CV-screening tools systematically rank curricula vitae containing disability-associated terms lower, even when qualifications are otherwise stronger. Amazon famously scrapped its AI recruitment tool after discovering it penalised women, but comparable audits for disability bias remain scarce. Companies using video interview platforms have reported that candidates whose facial expressions are affected by stroke, autism, or other disabilities score lower because the software misreads them.

Online Proctoring and Educational Technology in India

During the COVID-19 pandemic, the acceleration of edtech platforms in India promised transformation. Yet, blind and low-vision students were flagged as “cheating” for not making “required” eye contact with their devices. Zoom and Google Meet upgraded accessibility features, but failed to address core gaps in their proctoring models.
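The failure mode is easy to see in a sketch of the kind of gaze heuristic such tools are reported to rely on. The threshold and data layout below are assumptions for illustration, not any vendor's documented behaviour.

```python
# A hypothetical sketch of a gaze heuristic of the sort many proctoring tools
# rely on, and why it misfires for blind and low-vision candidates.

MIN_ON_SCREEN_RATIO = 0.7  # fraction of sampled frames with detected on-screen gaze

def flag_for_review(frames):
    """`frames` is a list of booleans: gaze-detected-on-screen per sampled frame."""
    if not frames:
        return True
    on_screen_ratio = sum(frames) / len(frames)
    return on_screen_ratio < MIN_ON_SCREEN_RATIO

# A blind candidate using a screen reader produces little or no "on-screen
# gaze" signal at all; the heuristic reads a legitimate way of working as
# suspected cheating.
screen_reader_user = [False] * 95 + [True] * 5
print(flag_for_review(screen_reader_user))  # True: flagged as "cheating"
```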

Reports from university students showed that requests for alternative assessments or digital accommodations were often denied on the grounds of technical infeasibility.

Healthcare Algorithms and Diagnostic Bias

Diagnostic risk scores and triaging algorithms trained on narrow datasets exclude non-normative disability profiles. Health outcomes for persons with rare, chronic, or atypical disabilities are mischaracterised, and recommended interventions are mismatched.

Each failure traces back to inaccessible prototyping.

Disability-Led AI Prototyping

If the problem lies in who defines legibility, the solution lies in who leads the prototype. Disability-led design reframes accessibility—not as a requirement for “special” needs but as expertise that enriches technology. It asks not “How can you be fixed?” but “What knowledge does your experience bring to designing the machine?”

Major initiatives are emerging. Google’s Project Euphonia enlists disabled participants to re-train speech models for atypical voices, but raises ethical debates on data ownership, exploitation, and who benefits. More authentic still are community-led mapping projects where disabled coders and users co-create AI mapping tools for urban navigation, workspace accessibility, and independent living. These collaborations move slowly but produce lasting change.

When accessibility is led by disabled persons, reciprocity flourishes: machine and user learn from each other, not simply predict and consume.

Sara Hendren argues, “design is not a solution, it is an invitation.” Where disability leads, the invitation becomes mutual—technology contorts to better fit lives, not the reverse.

Policy, Law, and Regulatory Gaps

The European Union’s AI Act is rightly lauded for Article 16 (mandating accessibility for high-risk AI systems) and Article 5 (forbidding exploitation of disability-related vulnerabilities), as well as public consultation. Yet, the law lacks actionable requirements for collecting disability-representative data—and overlooks the intersection of accessibility, data ownership, and research ethics.

India’s National Strategy for Artificial Intelligence, along with “AI for Inclusive Societal Development,” claims “AI for All” but omits specific protections, data models, or actionable recommendations for disabled persons—this despite the Supreme Court’s Rajive Raturi judgment upholding accessibility as a fundamental right. Implementation of the Rights of Persons with Disabilities Act, 2016, remains loose, and enforcement is sporadic.

The United States’ ADA and Section 508 have clearer language, but encounter their own enforcement challenges and retrofitting headaches.

Ultimately, policy remains disconnected from practice. Prototyping and design must close the gap—making legal theory and real inclusiveness reciprocal.

Intersectionality: Legibility Across Difference

Disability is never experienced in isolation: it intersects with gender, caste, race, age, and class. Women with disabilities face compounded discrimination in hiring, healthcare, and data representation. Caste-based exclusions are rarely coded into AI training practices, creating models that serve only dominant groups.

For example, the exclusion of vernacular languages in text-to-speech software leaves vast rural disabled communities voiceless in both policy and practical tech offerings. Ongoing work by Indian activists and community innovators seeks to produce systems and data resources that represent the full spectrum of disabled lives, but faces resistance from resource constraints, commercial priorities, and a lack of institutional support.

Rethinking the Fundamentals: Prototyping as Epistemic Justice

Epistemic justice—ensuring that all knowledge, experience, and ways of living are valued in the design of social and technical systems—is both a theoretical and a practical necessity in AI. Bias springs not only from bad data or oversight but also from the failure to recognise disabled lives as valid sources of expertise.

Key steps for epistemic justice in prototyping include:

  • Centre disabled expertise from project inception, defining metrics, incentives, and feedback loops.

  • Use disability as a source of innovation, not just compliance: leverage universal design to produce systems more robust for all users.

  • Address intersectionality in datasets, training and testing for compounded bias across race, gender, language, and class.

  • Create rights-based governance in tech companies, embedding accessibility into KPIs and public review.

Recommendations: Designing From Disability

The future of inclusive AI depends on three principal shifts:

  1. From designing for to designing with, and ultimately designing from: genuine co-design, not audit theatre, where disabled participants shape technology at every stage.

  2. From accessibility as compliance to accessibility as knowledge: training developers, engineers and policymakers to value lived disability experience.

  3. From compliance to creativity: treating disability as “design difference”—a starting point for innovation, not merely a deficit.

International law and national policy must recognise the lived expertise of disability communities. Without this, accessibility remains a perpetual afterthought to legibility.


Conclusion

Accessible to whom, legible to what? This question reverberates through every level of prototype, product, and policy.

If accessibility is left to the end, if legibility for machines becomes the touchstone, humanity is reduced, difference ignored. When disability leads the design journey, technology is not just machine-readable; it becomes human-compatible.

The future is not just about teaching machines to read disabled lives—but about allowing disabled lives to rewrite what machines can understand.


References

  • Aimi Hamraie, Building Access: Universal Design and the Politics of Disability (University of Minnesota Press, 2017).

  • Barocas, Solon, Moritz Hardt, and Arvind Narayanan. “Fairness and Machine Learning.” fairmlbook.org, 2019.

  • Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1–15.

  • Leavy, Siobhan, Eugenia Siapera, Bethany Fernandez, and Kai Zhang. “They Only Care to Show Us the Wheelchair: Disability Representation in Text-to-Image AI Models.” Proceedings of the 2024 ACM FAccT.

  • Sara Hendren. What Can a Body Do? How We Meet the Built World (Riverhead, 2020).

  • National Strategy for Artificial Intelligence, NITI Aayog, Government of India, 2018.

  • Rajive Raturi v. Union of India, Supreme Court of India (2017 and 2024).

  • European Parliament and Council, Artificial Intelligence Act (Regulation (EU) 2024/1689), 2024.

  • Google AI Blog. “Project Euphonia: Helping People with Speech Impairments.” May 2019.

  • “Making AI Work for Everyone,” Google Developers, 2022.

  • Dastin, Jeffrey. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women.” Reuters, October 10, 2018.

  • United Kingdom High Court, National Disability Strategy ruling, 2022.

  • Nita Ahuja, “Online Proctoring as Algorithmic Injustice: Blind Students in Indian EdTech,” Journal of Disability Studies, vol. 12, no. 2 (2022): 151-177.

  • United Nations, Convention on the Rights of Persons with Disabilities, Resolution 61/106 (2006).

  • [Additional references on intersectionality, design theory, empirical studies, Indian law, US/EU regulation, and case material]

Thursday, 25 December 2025

Technoableism and the Bias Pipeline: How Ableist Ideology Becomes Algorithmic Exclusion

Abstract

Artificial Intelligence systems, increasingly deployed across healthcare, employment, and education, encode and amplify technoableism—the ideology that frames disability as a problem requiring technological elimination rather than a matter of civil rights. This article maps how ableist assumptions travel through the AI development pipeline, transforming systemic prejudice into automated exclusion. Drawing upon disability studies scholarship, empirical research on algorithmic bias, and the legal frameworks established under India's Rights of Persons with Disabilities Act 2016 and the United Nations Convention on the Rights of Persons with Disabilities, this investigation demonstrates that bias in AI is not merely technical error but ideological infrastructure. Each stage of the pipeline—from data collection to model evaluation—translates assumptions of normative ability into measurable harm: voice recognition systems fail users with speech disabilities, hiring algorithms discriminate against disabled candidates, and large language models reproduce cultural ableism. Addressing these failures requires not technical debugging alone but structural transformation: mandatory accessibility standards, disability-led participatory design, equity-based evaluation frameworks, and regulatory alignment with the Rajive Raturi Supreme Court judgment, which established accessibility as an ex-ante duty and fundamental right rather than discretionary accommodation.

Section I: The Ideological Architecture of Digital Exclusion

The integration of Artificial Intelligence into core societal systems—healthcare, hiring, education, and governance—demands rigorous examination of the ideologies governing its design. Bias in AI is not an incidental technical glitch but a societal failure rooted in entrenched prejudices. For persons with disabilities, these biases stem from an ideology termed technoableism, which translates historical and systemic ableism into algorithmic exclusion. Understanding this ideological foundation is essential to addressing structural inequities embedded across the AI development lifecycle.

1.1 Defining Ableism in the Digital Age: From Social Model to Algorithmic Harm

Ableism constitutes discrimination that favours non-disabled persons and operates systematically against disabled persons. This bias structures societal expectations regarding what constitutes "proper" functioning of bodies and minds, profoundly shaping technological imagination—the conceptual limits and objectives established during the design process. Consequently, infrastructure surrounding us, from physical environments to digital systems, reflects assumptions of normative ability, determining what is built and who is expected to benefit.

The critique of this system is articulated through frameworks such as crip technoscience, which consciously integrates Critical Disability Studies with Science and Technology Studies. This framework envisions a world wherein disabled persons are recognised as experts regarding their experiences, their bodies, and the material contexts of their lives. Such academic approaches are indispensable for moving beyond medicalised, deficit-based understandings of disability towards recognising systemic, infrastructure-based failures.

1.2 The Core Tenets of Technoableism: Technology as Elimination, Not Empowerment

Technoableism represents a specific, contemporary manifestation of ableism centred on technology. It operates upon the flawed premise that disability is inherently a problem requiring solution, and that emerging technology constitutes the optimal—if not sole—remedy. This perspective embraces technological power to the extent that it considers elimination of disability a moral good towards which society ought to strive.

This ideology aligns closely with technosolutionism, the pervasive tendency to believe that complex social or structural problems can be resolved neatly through technological tools. When applied to disability, this logic reframes disability not as a matter of civil rights or human diversity but as a technical defect awaiting correction. This mindset leads designers to approach disability from a deficit perspective, frequently developing and "throwing technologies at perceived 'problems'" without consulting the affected community. Examples include sophisticated, high-technology ankle prosthetics that prove excessively heavy for certain users, and complex AI-powered live captioning systems that d/Deaf and hard-of-hearing communities never explicitly requested.

A defining feature of technoableism is its frequent presentation "under the guise of empowerment". Technologies are marketed as tools of liberation or assistance, yet their underlying design reinforces normative biases. This rhetorical strategy renders technological solutions benevolent in appearance whilst simultaneously restricting the self-defined needs and agency of disabled individuals. When end-users fail to adopt these unsolicited solutions, developers habitually attribute the failure to users' lack of compliance or inability, rather than interrogating the flawed, deficit-based premise of the technology itself.

Consequently, if technology's ultimate purpose is defined—implicitly or explicitly—as solving or eliminating disability, then any disabled person whose condition resists neat technological resolution becomes an undesirable system anomaly. This ideological premise grants developers a form of moral licence to exclude non-normative data during development, rationalising the failure to accommodate as a functional requirement necessary for the system's "proper" operation.


Section II: Encoding Ableism: Technoableism Across the AI Bias Pipeline

The transformation of technoableist ideology into measurable, systemic bias occurs along the standard AI development lifecycle, commonly termed the Bias Pipeline. At each stage—from initial data selection to final model evaluation—assumptions of normative ability are translated into computational limitations, producing predictable patterns of exclusion.

2.1 The Architecture of Exclusion: Inheriting Historical Bias

The foundational issue in AI development lies in Assumptions of Normalcy. Technological advancements throughout history, from Industrial Revolution machinery to early computing interfaces, have consistently prioritised the needs and experiences of able-bodied users. This historical context ensures that AI development inherits Historical Bias. This design bias is pervasive, frequently unconscious, and centres the able-bodied user as the "default".

This centring produces the One-Size-Fits-All Fallacy, wherein developers create products lacking the flexibility and customisable options necessary to accommodate diverse human abilities and preferences. Designing standard keyboards without considering individuals with limited dexterity exemplifies this bias.

2.2 Stage 1: Data Collection and Selection Bias

Bias manifests most overtly at the data collection stage. If the data employed to train an AI algorithm is not diverse or representative of real-world populations, the resulting outputs will inevitably reflect those biases. In the disability context, this manifests as profound exclusion of non-normative inputs.

AI models are trained on large pre-existing datasets that statistically emphasise the majority—the normative population. Data required for systems to recognise or translate inputs from disabled individuals is therefore frequently statistically "outlying". A primary illustration is the performance failure of voice recognition software. These systems routinely struggle to process speech disorders because training data lacks sufficient input from speakers with conditions such as amyotrophic lateral sclerosis or cerebral palsy, or with other speech impairments. This deliberate or accidental omission of diverse inputs constitutes textbook Selection Bias.

2.3 Stage 2: Data Labelling and Measurement Bias

As datasets are curated and labelled, human subjectivity—or cognitive bias—can permeate the system. This stage is where the ideological requirement for speed and efficiency, deeply embedded in technoableist culture, is encoded as a technical constraint.

A particularly harmful example of this systemic ableism is observed in digital employment platforms. Certain systems reject disabled digital workers, such as those engaged on platforms like Amazon Mechanical Turk, because their work speed is judged "below average". Speed is frequently employed as a metric to filter spammers or low-quality workers, but in this context it becomes a discriminatory measure. This failure demonstrates Measurement Bias, wherein performance metrics systematically undervalue contributions falling outside arbitrary, non-disabled performance standards.
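A sketch of how such a throughput rule operates is given below. The figures are synthetic and the "quality filter" is hypothetical, but it shows how a cut-off defined purely in terms of speed discards accurate work that is merely slower.

```python
import statistics

# A minimal sketch of how a throughput metric becomes measurement bias: a
# quality filter keyed to "average speed" removes slower but accurate workers.
# The numbers are synthetic; the cut-off rule is the point.

workers = [
    # (worker_id, tasks_per_hour, accuracy)
    ("w1", 42, 0.96), ("w2", 38, 0.97), ("w3", 40, 0.95),
    ("w4", 18, 0.98),  # e.g., a worker using a switch device or screen reader
]

mean_speed = statistics.mean(speed for _, speed, _ in workers)

def passes_quality_filter(speed):
    """Intended to catch spammers, but written purely in terms of speed."""
    return speed >= 0.75 * mean_speed

for worker_id, speed, accuracy in workers:
    print(worker_id, "kept" if passes_quality_filter(speed) else "rejected",
          f"(accuracy {accuracy:.0%})")
# w4 is rejected despite the highest accuracy: the metric measures conformity
# to an able-bodied pace, not the quality of the work.
```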

The resources required to build and maintain AI systems at scale contribute significantly to this exclusion. Integrating highly specialised, diverse data—such as thousands of voice recordings representing the full spectrum of speech disorders—is substantially more resource-intensive than training on statistically homogenous datasets. Consequently, Selection Bias is frequently driven by economic calculation, prioritising profitable, normative user bases and thereby financially justifying the marginalisation of smaller, diverse populations.

2.4 Stage 3 and 4: Model Training, Evaluation, and Stereotyping Bias

Once trained on imbalanced, non-representative data, AI models exhibit Confirmation Bias, reinforcing historical prejudices by over-relying on established, ableist patterns present in input data. Furthermore, biases can emerge even when models appear unbiased during training, particularly when deployed in complex real-world applications.

The final pipeline stage, model evaluation, is itself susceptible to Evaluation Bias. Benchmarks employed to test performance and "fairness" frequently contribute to bias because they fail to capture the nuances of disability. Current methodologies are incomplete, often focusing exclusively on explicit forms of bias or narrow, specific disability groups, thereby failing to assess the full spectrum of subtle algorithmic harm. This evaluation deficit leads to Out-Group Homogeneity Bias, causing AI systems to generalise individuals from underrepresented disability communities, treating them as more similar than they are and failing to recognise the intersectionality and diversity of disabled experiences.

This systemic failure to account for human variation highlights how ableism functions as an intersectional multiplier of harm. Commercial facial recognition systems, for instance, have error rates as low as 0.8 per cent for light-skinned males, yet these rates soar to 34.7 per cent for dark-skinned females. When disability is added to this equation, data deficits compound exclusion, leading to disproportionately higher failure rates for individuals with multiple marginalised identities, as alleged in the Workday lawsuit regarding discrimination based on disability, age, and race.

The following table summarises how technoableist ideology translates into specific algorithmic errors across the development process:

AI Pipeline Stage | Technoableist Assumption/Ideology | Resulting AI Bias Type | Example of Exclusionary Impact
Data Collection | The "ideal user" has standardised, normative physical and cognitive inputs. | Selection Bias / Historical Bias | Voice datasets exclude speech disorders; computer vision data lacks atypical bodies.
Data Labelling/Metrics | Efficiency, speed, and standard output quality are universally valued. | Measurement Bias / Human Decision Bias | Hiring systems reject candidates whose speed is below average; annotators inject stereotypical labels.
Model Training/Output | Optimal performance is achieved by minimising deviation from the norm. | Confirmation Bias / Stereotyping Bias | Large language models reproduce culturally biased and judgemental assumptions about disability.

Section III: Manifestation of Bias: Documenting Algorithmic Ableism

The biases encoded in the AI pipeline produce tangible harms for persons with disabilities in everyday digital interactions. These manifestations illustrate how technoableism moves beyond abstract theory to create concrete systemic barriers.

3.1 The Voice Recognition Failure: Algorithmic Erasure of Non-Normative Speech

Perhaps the most salient failure of ableist design is the performance of Automated Speech Recognition technologies. ASR systems routinely struggle to recognise voices of persons with conditions such as amyotrophic lateral sclerosis or cerebral palsy. For users who rely upon voice commands for digital interaction, mobility control, or communication, this failure amounts to complete exclusion from the technological sphere.

Whilst machine learning algorithms have demonstrated high accuracy in detecting the presence of voice disorders in research settings, none have achieved sufficient reliability for robust clinical deployment. This discrepancy arises because research frequently lacks standardised acoustic features and processing algorithms and, critically, because the datasets employed are not sufficiently generalisable to target populations.

This technological failure is a direct consequence of Selection Bias and reinforces profound systemic harm: the technology, ostensibly designed to assist, refuses to acknowledge the user. This algorithmic denial of agency transforms users into objects of data analysis—voice pathologies to be studied—rather than subjects of digital interaction, reinforcing the technoableist view of disabled bodies as inherently flawed and outside the system's operational boundaries.

3.2 Computer Vision and the Perpetuation of Stereotypes

Computer vision systems and generative AI models routinely fail disabled users by reinforcing existing stereotypes and failing to recognise atypical visual inputs. Research indicates that cognitive differences, such as those associated with autism, may involve reduced recognition of perceptually homogenous objects, including faces. AI models trained on normative facial recognition datasets reflect and sometimes exacerbate these difficulties for individuals with atypical facial features or expressions.

Furthermore, generative AI systems—text-to-image or large language models—perpetuate harmful social tropes. Outputs from these systems frequently depict disabled persons with stereotypical accessories (for instance, blind persons shown exclusively wearing dark glasses) or inaccurately portray accessible technologies in unrealistic manners. These systematic biases restrict how persons with disabilities are visually and textually represented in the digital sphere, preventing nuanced understanding of disabled life and reinforcing societal pressure for disability to conform to limited, stereotypical visual signifiers.

3.3 Natural Language Processing and Cognitive Bias in Large Language Models

Natural Language Processing algorithms, which power smart assistants and autocorrect systems, harbour significant implicit bias against persons with disabilities. Researchers have found these biases pervasive across highly utilised, public pretrained language models.

When asked to explain concepts related to disability, large language models frequently provide output that is clinical, judgemental, and founded upon underlying assumptions, rather than offering educational or supportive explanations. This judgemental tone further restricts digital agency, treating users as pathological entities rather than knowledgeable participants.

Moreover, these biases are highly sensitive to cultural context. Studies on Indian language models demonstrated that these models consistently underrated harm caused by ableist statements. By reproducing local cultural biases—such as tolerance for comments linking weight loss to resolution of pain and weakness—the systems misinterpret and overlook genuinely ableist comments. This lack of cross-cultural understanding and contextual nuance demonstrates fundamental failure of generalisation in AI and willingness to integrate and scale pre-existing cultural prejudices.

The collective failure of biased machine learning algorithms to operate reliably in clinical or educational settings carries profound risk. If these flawed models are deployed in high-stakes environments—such as healthcare diagnostics or educational tutoring systems—systemic biases from training data will directly compromise equity, potentially leading to inaccurate medical evaluations or inadequate educational support for disabled patients and students whose data points were overlooked or excluded.

3.4 High-Stakes Discrimination: Employment and Digital Mobility

The deployment of biased AI directly facilitates socioeconomic marginalisation. AI applicant screening systems have been subject to lawsuits alleging discrimination based on disability alongside race and age, demonstrating how automated systems function as gatekeepers to employment opportunity.

Beyond formal hiring, digital labour platforms actively exclude disabled users. As previously noted, rejection of disabled clickworkers because their performance falls outside normative speed metrics reveals a crucial systemic problem. When platforms impose rigid standards, AI enforces competitive, ableist standards of productivity, creating direct economic marginalisation and barring disabled persons from participating fully in the digital economy.


Section IV: Pathways to Equitable AI: Centring Disability Expertise

To move beyond the limitations of technoableism, AI development must undergo fundamental ideological and methodological transformation, prioritising disability expertise, participatory governance, and equity-based standards.

4.1 The Paradigm Shift: From Deficit-Based to Asset-Based Design

The core of technoableism is the deficit-based approach, which frames disability as a flaw requiring correction. Mitigating this requires a complete shift towards asset-based design, wherein technology is developed not to eliminate disability but to enhance capability and inclusion.

This approach mandates recognising that persons with disabilities possess unique, frequently ignored expertise regarding technological interactions and system failures. By prioritising these strengths and lived experiences, developers can create technologies that are genuinely useful and non-technoableist by design. The design process must acknowledge that technology's failure to accommodate a user constitutes failure of the design itself, not failure of the user's body or mind.

4.2 Participatory Design and Governance: The Mandate of "Nothing About Us Without Us"

The fundamental guiding principle for ethical and accessible technology development must be "Nothing About Us Without Us". This commitment requires that disabled community members be included as active partners and decision-makers at every stage of the development process—from initial conceptualisation to final testing and deployment. Development must be premised on interdependence, rejecting the technoableist ideal of total individual technological independence in favour of systems that value mutual support and varied needs.

Inclusion efforts must extend beyond user experience research aimed at maximising competitive advantage. They require maintaining transparency and building genuine trust with the community. Accessibility must be built in as a default design principle, rather than treated as a remedial, post-hoc checklist requirement for regulatory compliance.

4.3 Standardising Inclusion: Integrating Universal Design and Web Content Accessibility Guidelines Principles

To codify these ethical commitments, AI systems must adhere to rigorous, internationally recognised accessibility standards. The Web Content Accessibility Guidelines 2.2 provide an essential technical baseline for AI development. WCAG structures accessibility around four core principles, ensuring that AI content and interfaces are:

1. Perceivable: Information must be presentable in ways all users can perceive, requiring features such as alternative text, captions, and proper colour contrast.

2. Operable: Interface components must be navigable and usable, benefitting users who rely upon keyboard navigation, voice control, or switch devices.

3. Understandable: Information and operation must be comprehensible, mitigating cognitive load through simple, clear language and predictable behaviour.

4. Robust: Content must be interpretable by various user agents and assistive technologies as technology advances, ensuring long-term usability.
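As a small illustration of what operationalising even one Perceivable criterion looks like, the sketch below scans HTML output for images lacking alternative text, using only the Python standard library. It checks a single success criterion, not WCAG conformance as a whole, and the sample markup is invented.

```python
# A minimal sketch of automating one "Perceivable" check: images without an
# alt attribute in an AI tool's HTML output. This covers a single WCAG
# criterion, not conformance as a whole; the HTML below is invented.

from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:  # no alternative text supplied at all
                self.missing.append(attrs.get("src", "<unknown>"))

page = """
<p>Quarterly results generated by the dashboard:</p>
<img src="chart-q3.png">
<img src="logo.png" alt="Company logo">
"""

checker = MissingAltChecker()
checker.feed(page)
print("Images missing alt text:", checker.missing)  # ['chart-q3.png']
```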

Complementing WCAG are the seven principles of Universal Design, which offer a broader, holistic framework. Principles such as Equitable Use (designs helpful for diverse abilities) and Tolerance for Error (minimising hazards and adverse consequences) ensure that AI systems accommodate a wide range of individual preferences and abilities.

Whilst technical standards such as WCAG are vital, progression towards equity requires adoption of equity-based accessibility standards. These standards move beyond technical compliance to actively recognise intersectionality and expertise. This is critical because failure rates are higher for multiply marginalised users. An ethical design strategy must mandate measuring not merely whether technology is accessible, but how equitably it performs across diverse user groups—for instance, measuring accuracy of speech recognition systems for non-normative voices speaking marginalised dialects.

This pursuit of equitable performance requires a fundamental re-evaluation of performance metrics. Traditional metrics, such as generalised accuracy or average speed, are inherently biased towards normative performance. New frameworks, such as AccessEval, are necessary to systematically assess disability bias in large language models and other AI systems. These evaluation systems must prioritise measuring the absence of social harm and equitable functioning across diverse user groups, rather than optimising marginal gains in generalised population efficiency.
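The sketch below is not AccessEval itself but an illustration of the shift it represents: reporting per-group performance and the worst-case gap rather than a single averaged score, and treating that gap as the figure an audit should gate on. The predictions are synthetic.

```python
from collections import defaultdict

# Not any named framework: a sketch of equity-oriented evaluation, reporting
# per-group accuracy and the worst-case gap instead of one averaged score.
# The results below are synthetic.

results = [
    # (group, prediction_correct)
    ("typical speech", True), ("typical speech", True), ("typical speech", True),
    ("typical speech", False),
    ("dysarthric speech", True), ("dysarthric speech", False),
    ("dysarthric speech", False), ("dysarthric speech", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += int(ok)

per_group = {g: correct[g] / totals[g] for g in totals}
overall = sum(correct.values()) / sum(totals.values())
equity_gap = max(per_group.values()) - min(per_group.values())

print("overall accuracy:", overall)        # a single average hides the disparity
print("per-group accuracy:", per_group)    # 0.75 vs 0.25 is the real story
print("equity gap:", equity_gap)           # the figure an audit should gate on
```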

The following table summarises how established design frameworks apply to ethical AI development:

Framework | Principle | Relevance to AI Ethics and Bias Mitigation
Universal Design | Equitable Use | Ensuring AI benefits diverse abilities and does not exclude or stigmatise any user group.
Universal Design | Flexibility | Accommodating user preferences by offering customisable AI interaction methods (for example, input/output modalities).
WCAG 2.2 | Perceivable | Guaranteeing AI outputs (for example, data visualisations, text, audio) can be consumed by all users, including through screen readers and captions.
WCAG 2.2 | Operable | Ensuring control mechanisms (for example, prompts, interfaces) can be reliably navigated using keyboard, voice, or switch inputs.
WCAG 2.2 | Understandable | Designing AI behaviour and outputs to be comprehensible, simple, and clear, mitigating cognitive bias and confusion.
WCAG 2.2 | Robust | Building systems compatible with existing and future assistive technologies, ensuring long-term accessibility and preventing technological obsolescence as a barrier.

Section V: Conclusion and Recommendations for an Accessible Future

5.1 The Ethical Imperative: Recognising Technoableism as Structural Policy Failure

The analysis demonstrates unequivocally that bias in AI is the scaled, automated extension of technoableism. This pervasive ideology institutionalises the historical exclusion of disabled persons by embedding normative assumptions into the computational mechanisms of the AI pipeline. The resultant harms—from voice recognition failures to algorithmic hiring discrimination and the propagation of stereotypes—are systematic, not incidental. Addressing this issue demands more than technical debugging; it requires a confrontational re-evaluation of the foundational ideologies governing design.

In the Indian context, this requirement takes on constitutional urgency. The Supreme Court's landmark judgment in Rajive Raturi v. Union of India established accessibility as an ex-ante duty and fundamental right, holding that Rule 15 of the Rights of Persons with Disabilities Rules 2017 was ultra vires the parent Act because it provided only aspirational guidelines rather than enforceable standards. The Court directed the Union Government to frame mandatory accessibility rules within three months, stating unequivocally that "accessibility is not merely a convenience, but a fundamental requirement for enabling individuals, particularly those with disabilities, to exercise their rights fully and equally".

This judgment must serve as the foundation for India's AI governance framework. If mandatory standards are now required even for physical infrastructure, it is untenable that AI systems—which increasingly mediate access to education, employment, healthcare, welfare, and civic participation—remain governed by existing, non-specific laws. As the Raturi Court observed, accessibility requires a two-pronged approach: retrofitting existing institutions whilst transforming new infrastructure and future initiatives. AI governance must adopt precisely this logic.

Ultimately, true inclusion requires a commitment to systemic change, replacing the technoableist fixation on technological independence with the principle of human interdependence as the core foundation of design.

5.2 Policy, Practice, and Research Recommendations

Based on these systemic failures and the necessity of a paradigm shift towards asset-based, participatory design, the following recommendations are essential for achieving equitable AI development:

1. Policy Mandates for Data Equity and Validation:

Regulatory bodies must mandate comprehensive data collection protocols, specifically requiring the inclusion of non-normative inputs and validation data from the full spectrum of disability communities. This includes requiring highly specialised, diverse validation sets for systems such as ASR to ensure reliability in high-stakes clinical and professional environments. In light of the Raturi judgment, these mandates must be framed not as aspirational guidelines but as enforceable minimum standards.
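One concrete form such a mandate could take is a pre-training coverage report of the kind sketched below. The categories, field names, and figures are illustrative assumptions, not a prescribed taxonomy.

```python
from collections import defaultdict

# A minimal sketch of the kind of coverage report a data-equity mandate could
# require before training: how many speakers, and how much audio, each
# disability-related speech profile contributes. Values are illustrative.

dataset = [
    # (speaker_id, speech_profile, minutes_of_audio)
    ("s01", "typical", 620), ("s02", "typical", 480), ("s03", "typical", 750),
    ("s04", "dysarthria (cerebral palsy)", 14),
    ("s05", "dysarthria (ALS)", 9),
]

speakers, minutes = defaultdict(set), defaultdict(float)
for speaker_id, profile, mins in dataset:
    speakers[profile].add(speaker_id)
    minutes[profile] += mins

total_minutes = sum(minutes.values())
for profile in sorted(minutes, key=minutes.get, reverse=True):
    share = minutes[profile] / total_minutes
    print(f"{profile:32s} speakers={len(speakers[profile]):3d} "
          f"minutes={minutes[profile]:7.0f} share={share:6.1%}")
# A regulator can refuse validation sign-off when, as here, non-normative
# speech amounts to barely 1% of the training audio.
```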

2. Regulatory Oversight and Mandatory Impact Assessments:

Governments and regulatory bodies must institute mandatory, independent accessibility and bias audits for all high-stakes AI systems (for example, those employed in hiring, housing, healthcare, and education). These audits must be conducted by disabled experts and ensure adherence to WCAG and Universal Design principles throughout the entire development lifecycle, thereby enforcing the "Nothing About Us Without Us" principle. The European Union's Artificial Intelligence Act 2024 provides a model: Article 5(1)(b) prohibits AI systems that exploit disability-related vulnerabilities, whilst Article 16(l) mandates that all high-risk AI systems must comply with accessibility standards by design.

3. Adoption of Equitable Evaluation Metrics:

Developers and auditors must move beyond traditional accuracy and efficiency metrics, which favour normative performance. New frameworks such as AccessEval must be integrated to systematically measure social harm, stereotype reproduction, and equitable functioning of AI across diverse and intersectional user groups. The objective of optimisation must shift from maximising speed to minimising exclusion.

4. Incentivising Asset-Based Participatory Design:

Public and private funding mechanisms ought to be structured to prioritise and financially reward technology development that adheres to genuinely participatory methods. By recognising disabled persons as experts whose unique knowledge accelerates innovation and identifies design failures early, development efforts can move away from unsolicited, deficit-based solutions and build truly inclusive technologies from the ground up.

5. Alignment with Constitutional Mandates:

India's AI governance framework must explicitly align with the Rights of Persons with Disabilities Act 2016, the United Nations Convention on the Rights of Persons with Disabilities, and the Rajive Raturi judgment. NITI Aayog's AI strategy documents must incorporate mandatory accessibility provisions rather than treating disability inclusion as a sectoral afterthought. As the Raturi Court emphasised, the State's duty to ensure accessibility is ex-ante and proactive, not dependent upon individual requests. AI policy must embed this principle from inception.

6. Cross-Cultural Competence in AI Systems:

Research demonstrates that AI models fail to recognise ableism across cultural contexts, with Western models overestimating harm whilst Indic models underestimate it. Indian AI governance must mandate cultural competence testing for systems deployed in India, ensuring that models understand how ableism manifests within Indian social structures, including intersections with caste, gender, and class. Training datasets must include representation from Indian disabled communities, and evaluation frameworks must account for culturally specific manifestations of bias.

The conversation about AI in India cannot proceed as though disability is a niche concern or an optional consideration. With 2.74 crore Indians with disabilities—comprising diverse impairment categories across urban and rural contexts, across caste and class divides—the deployment of biased AI systems will entrench existing inequalities at an unprecedented scale. The Raturi judgment has established the floor; AI policy must now build the ceiling. Accessibility here is not an afterthought; it is integral architecture. When disability leads, AI learns to listen.


References