Saturday, 27 December 2025

Disability-Smart Prompts: Challenging ableism in everyday AI use

 

Abstract

Artificial intelligence systems have emerged as transformative technologies capable of reshaping social, economic, and institutional practices across multiple domains. However, empirical investigations reveal that Large Language Models (LLMs) and other frontier AI systems consistently generate discriminatory outputs targeting disabled persons, with disabled candidates experiencing between 1.15 and 58 times more ableist bias than non-disabled counterparts. This paper examines the mechanisms through which ableist bias becomes embedded in AI systems and proposes disability-centred prompting frameworks as harm reduction strategies. Drawing upon disability studies scholarship, empirical research on AI bias, and legal frameworks established under India’s Rights of Persons with Disabilities Act, 2016 (RPD Act) and the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), this investigation demonstrates that prompting practices constitute epistemological interventions capable of either reinforcing or mitigating discriminatory outputs. The paper argues that comprehensive solutions require intervention at multiple levels—from individual prompt formulation to systemic changes in data practices, algorithmic design, participatory governance, and regulatory oversight. Critical examination of disability models, intersectional analysis of compounding marginalisations, and operationalisation of rights-based frameworks offer essential pathways toward epistemic justice in AI systems.

Keywords: disability, artificial intelligence, ableism, bias, accessibility, prompting, social model, epistemic justice, India’s Rights of Persons with Disabilities Act

Introduction: The Invisible Architecture of Bias

Artificial intelligence has emerged as one of the most transformative technologies of the twenty-first century, yet it carries within its architecture the prejudices and assumptions of the societies that created it. Large Language Models (LLMs), whilst demonstrating remarkable capabilities in text generation and information processing, do not inherently possess the capacity to distinguish between respectful and discriminatory prompts. These systems operate on statistical probability, continuing patterns established in their training data rather than interrogating the ethical implications of user queries. This fundamental limitation presents a significant challenge: users often assume AI objectivity, whilst AI systems merely replicate the biases embedded within their prompts and training datasets.

Within the context of disability rights and accessibility, this dynamic becomes particularly troubling. Recent empirical investigations have revealed that disabled candidates experience substantially higher rates of ableist harm in AI-generated content, with disabled individuals facing between 1.15 and 58 times more ableist bias than baseline candidates. Furthermore, nearly 99.7 per cent of all disability-related conversations generated by frontier LLMs contained at least one form of measurable ableist harm. These findings underscore a critical reality: unless users actively interrogate how they formulate queries, AI systems will continue to reproduce and amplify discriminatory assumptions about disabled persons.

This article examines how prompting practices can either reinforce or mitigate bias in AI responses through the lens of disability-centred design principles. The argument advanced herein is not that syntactically perfect prompts will resolve all systemic issues. Rather, if society fails to critically examine the epistemological frameworks embedded within user queries, it becomes impossible to address the discriminatory outputs these queries generate. This investigation draws upon disability studies scholarship, recent empirical research on AI bias, and the legal frameworks established under India’s Rights of Persons with Disabilities Act, 2016 (RPD Act) and the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD).

Why Prompting Matters: Language as Epistemological Framework

Prompts establish the conceptual boundaries within which AI models formulate responses. The linguistic framing of a query determines not merely the content of the answer but the underlying model of disability that shapes that content. Consider the substantive difference between two ostensibly similar queries: Prompt A asks, “Explain how disabled people can overcome their limitations to perform everyday tasks,” whilst Prompt B requests, “Explain how society can ensure disabled persons have equitable access to everyday environments and services.” Prompt A situates disability within the medical model, conceptualising it as a personal deficit requiring individual adaptation or remediation. This framework places responsibility upon disabled individuals to “overcome” their impairments, implicitly positioning disability as abnormal and undesirable. By contrast, Prompt B adopts the social model of disability, locating barriers within societal structures, policies, and design failures rather than within individual bodies.
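To make the contrast concrete, the short sketch below keeps the two framings quoted above side by side and shows one way a user might prepend a social-model preamble to an everyday query before sending it to a model. It is an illustrative sketch only: the preamble wording and the rights_based_query helper are assumptions introduced here, not a validated template or any provider’s API.

```python
# The two framings discussed above, kept side by side so the epistemological
# difference is explicit rather than accidental.
MEDICAL_MODEL_PROMPT = (
    "Explain how disabled people can overcome their limitations "
    "to perform everyday tasks."
)
SOCIAL_MODEL_PROMPT = (
    "Explain how society can ensure disabled persons have equitable "
    "access to everyday environments and services."
)

# An assumed preamble that asks for social-model, rights-based framing before
# any user query is sent to a model. The wording is illustrative only.
SOCIAL_MODEL_PREAMBLE = (
    "Frame disability using the social model: locate barriers in environments, "
    "policies and design, not in individual bodies. Avoid deficit language, "
    "inspiration narratives and curative framing."
)

def rights_based_query(user_query: str) -> str:
    """Prepend the social-model preamble to an ordinary user query."""
    return f"{SOCIAL_MODEL_PREAMBLE}\n\nUser query: {user_query}"

print(rights_based_query("How can workplaces support employees who use screen readers?"))
```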

This shift in linguistic framing produces fundamentally different AI responses, as the model aligns its output with the epistemological assumptions embedded in the prompt. Recent research has demonstrated that disability models profoundly influence AI system design and bias mechanisms. Newman-Griffis and colleagues (2023) established that definitions of disability adopted during fundamental design stages determine not only data selection and algorithmic choices but also operational deployment strategies, leading to distinct biases and downstream effects. When AI developers operationalise disability through a medical lens, systems tend to emphasise individual deficits and remediation; when designers adopt social or rights-based models, systems more readily identify environmental barriers and structural inequities.

This matters because prompting constitutes a form of political discourse. Language does not merely describe reality; it constructs the frameworks through which reality is understood and acted upon. If a user formulates a query that positions disabled persons as inherently limited, the AI system will generate responses that reinforce this assumption, potentially disseminating discriminatory perspectives to thousands of users. Conversely, prompts grounded in disability rights frameworks can elicit responses that centre autonomy, access, and structural accountability.

The Medical Model versus the Social Model in AI Contexts

Understanding the distinction between disability models is essential for formulating non-discriminatory prompts. The medical model conceptualises disability as pathology located within individual bodies, requiring diagnosis, treatment, and normalisation. This model has historically dominated healthcare, policy, and public discourse, positioning disabled persons as patients requiring intervention rather than citizens entitled to rights. Within AI contexts, the medical model manifests when systems frame disability as deficiency, generate “inspirational” narratives about “overcoming” impairments, or suggest curative interventions as primary solutions.

By contrast, the social model of disability, which underpins both the UNCRPD and India’s RPD Act 2016, posits that disability arises from the interaction between individual impairments and environmental, attitudinal, and institutional barriers. Under this framework, disability is not an individual problem but a societal failure to provide equitable access. The RPD Act defines persons with disabilities as those “with long term physical, mental, intellectual or sensory impairment which, in interaction with barriers, hinders full and effective participation in society equally with others.” This definition explicitly recognises that barriers—including communicational, cultural, economic, environmental, institutional, political, social, attitudinal, and structural factors—create disability through exclusion.

AI systems trained predominantly on data reflecting medical model assumptions will generate outputs that pathologise disability, emphasise individual adaptation, and overlook systemic barriers. Recent scoping reviews have confirmed that AI research exhibits “a high prevalence of a narrow medical model of disability and an ableist perspective,” raising concerns about perpetuating biases and discrimination. To counter these tendencies, users must formulate prompts that explicitly invoke social model frameworks and rights-based approaches.

Empirical Evidence: Measuring Ableist Bias in LLM Outputs

Recent comprehensive audits of frontier LLMs have quantified the extent of ableist bias in AI-generated content. Phutane and colleagues (2025) introduced the ABLEIST framework (Ableism, Inspiration, Superhumanisation, and Tokenism), comprising eight distinct harm metrics grounded in disability studies literature. Their investigation, spanning 2,820 hiring scenarios across six LLMs and diverse disability, gender, nationality, and caste profiles, yielded alarming findings.

Disabled candidates experienced dramatically elevated rates of ableist harm across multiple dimensions. Specific harm patterns emerged for different disability categories: blind candidates faced increased technoableism (the assumption that disabled persons cannot use technology competently), whilst autistic candidates experienced heightened superhumanisation (the stereotype that neurodivergent individuals possess exceptional abilities in narrow domains). Critically, state-of-the-art toxicity detection models failed to recognise these intersectional ableist harms, demonstrating significant limitations in current safety tools.

The research further revealed that intersectional marginalisations compound ableist bias. When disability intersected with marginalised gender and caste identities, intersectional harm metrics (inspiration porn, superhumanisation, tokenism) increased by 10-51 per cent for gender and caste-marginalised disabled candidates, compared with only 6 per cent for dominant identities. This finding confirms the theoretical predictions of intersectionality frameworks: discrimination operates through multiple, compounding axes of marginalisation.

Additional research has demonstrated that AI-generated image captions, resume screening algorithms, and conversational systems consistently exhibit disability bias. Studies have documented that LLM-based hiring systems embed biases against resumes signalling disability, whilst generative AI chatbots demonstrate quantifiable ability bias, often excluding disabled persons from generated responses. These findings underscore the pervasiveness of ableist assumptions across diverse AI applications.

The Rights-Based Prompting Framework: Operationalising the RPD Act 2016

India’s Rights of Persons with Disabilities Act, 2016, provides a robust legal framework for disability rights, having been enacted to give effect to the UNCRPD. The Act defines disability through a social and relational lens, establishes enforceable rights with punitive measures for violations, and expands recognised disability categories from seven to twenty-one. Users can operationalise RPD principles when formulating AI prompts to generate rights-based, non-discriminatory outputs.

The Act establishes several critical principles relevant to prompt formulation. First, non-discrimination and equality provisions (Sections 3-5) establish that disabled persons possess inviolable rights to non-discrimination and equal treatment. Prompts should frame disability rights as non-negotiable entitlements rather than charitable concessions. Instead of asking “How can we help disabled people access services?” users should formulate: “What legal obligations do service providers have under Section 46 of the RPD Act 2016 to ensure accessibility for disabled persons?”

Second, the Act’s definition of reasonable accommodation (Section 2) includes “necessary and appropriate modification and adjustments not imposing a disproportionate or undue burden,” consistent with UNCRPD Article 2. Prompts should invoke this principle explicitly: “Explain how employers can implement reasonable accommodations for employees with disabilities under the RPD Act 2016, providing specific examples across diverse disability categories.”

Third, provisions regarding access to justice (Section 12) ensure that disabled persons can exercise the right to access courts, tribunals, and other judicial bodies without discrimination. Prompts concerning legal processes should centre this right: “Describe how judicial systems can ensure accessible court proceedings for disabled persons, including documentation formats and communication support, as mandated by Section 12 of the RPD Act 2016.”

Fourth, Chapter III of the Act mandates that appropriate governments ensure accessibility in physical environments, transportation, information and communications technology, and other facilities. Prompts should reference these obligations: “Identify specific digital accessibility requirements under the RPD Act 2016 for government websites and mobile applications, including compliance timelines and enforcement mechanisms.”
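For readers who query AI systems about these statutory obligations repeatedly, the four framings above can be kept as reusable templates. The sketch below simply gathers the prompts quoted in this section into a small Python structure; the dictionary keys and the build_prompt helper are illustrative naming choices introduced here, not part of any established library or schema.

```python
# The four prompts quoted in this section, keyed by the RPD Act 2016 provision
# each one invokes. Verify section numbers against the Act's text before relying on them.
RPD_PROMPT_TEMPLATES = {
    "non_discrimination_and_equality": (
        "What legal obligations do service providers have under Section 46 of "
        "the RPD Act 2016 to ensure accessibility for disabled persons?"
    ),
    "reasonable_accommodation_s2": (
        "Explain how employers can implement reasonable accommodations for "
        "employees with disabilities under the RPD Act 2016, providing specific "
        "examples across diverse disability categories."
    ),
    "access_to_justice_s12": (
        "Describe how judicial systems can ensure accessible court proceedings "
        "for disabled persons, including documentation formats and communication "
        "support, as mandated by Section 12 of the RPD Act 2016."
    ),
    "accessibility_chapter_iii": (
        "Identify specific digital accessibility requirements under the RPD Act "
        "2016 for government websites and mobile applications, including "
        "compliance timelines and enforcement mechanisms."
    ),
}

def build_prompt(key: str, extra_context: str = "") -> str:
    """Return a rights-based prompt, optionally followed by caller-supplied context."""
    return f"{RPD_PROMPT_TEMPLATES[key]}\n\n{extra_context}".strip()

print(build_prompt("reasonable_accommodation_s2"))
```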

Mitigating Intersectional Bias: Caste, Gender, and Disability in the Indian Context

The empirical evidence regarding intersectional disability bias has particular salience in the Indian context, where caste-based discrimination compounds disability marginalisation. Research has documented that when disability intersects with marginalised caste and gender identities, ableist harms increase substantially, with tokenism rising significantly when gender minorities are included and further intensifying with caste minorities.

India’s constitutional framework and the RPD Act recognise the need for intersectional approaches to disability rights. Prompts formulated for Indian contexts should explicitly acknowledge these intersecting marginalisations: “Analyse how the intersection of disability, caste, and gender affects access to employment opportunities in India, referencing relevant provisions of the RPD Act 2016 and constitutional protections against discrimination on multiple grounds.” Furthermore, prompts should interrogate how AI systems may replicate caste-based prejudices when processing disability-related queries: “Examine potential biases in AI-based disability assessment systems in India, considering how algorithmic decision-making might perpetuate existing caste-based and gender-based discrimination against disabled persons from marginalised communities.”

Technical Mitigation Strategies: Beyond Prompt Engineering

Whilst responsible prompting constitutes an essential harm reduction strategy, it cannot resolve systemic biases embedded in training data and model architectures. Comprehensive bias mitigation requires intervention at multiple stages of the AI development pipeline.

Training datasets must include diverse, representative data reflecting the experiences of disabled persons across multiple identity categories. Research has demonstrated that biased training data produces skewed model outputs, necessitating conscious efforts to ensure disability representation in corpora. Furthermore, datasets should be annotated to identify and flag potentially harmful stereotypes.

Developers can implement fairness-aware algorithms that employ techniques such as equalised odds, demographic parity, and fairness through awareness to reduce discriminatory outputs. These mechanisms adjust decision boundaries to ensure similar treatment for individuals with comparable qualifications regardless of disability status. Recent research has demonstrated that such interventions can substantially reduce bias when properly implemented.
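As a minimal illustration of the two group metrics named above, the sketch below computes demographic parity and equalised odds differences for a binary shortlisting decision. The toy labels, predictions, and group indicator are invented for demonstration; a production audit would use a maintained fairness library, properly sampled data, and disability categories defined with the affected community.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction (shortlisting) rates across groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalised_odds_difference(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rates across groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs, fprs = [], []
    for g in np.unique(group):
        mask = group == g
        tprs.append(y_pred[mask & (y_true == 1)].mean())  # true-positive rate
        fprs.append(y_pred[mask & (y_true == 0)].mean())  # false-positive rate
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Toy shortlisting data: 1 = shortlisted; group 1 = candidate disclosed a disability.
y_true = [1, 1, 0, 1, 0, 1, 1, 0]   # notional suitability labels
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]   # the system's shortlisting decisions
group  = [1, 1, 1, 0, 0, 0, 0, 1]   # disability-disclosure indicator

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("Equalised odds difference:", equalised_odds_difference(y_true, y_pred, group))
```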

Disability-inclusive AI development requires the direct participation of disabled persons in design, testing, and governance processes. Co-design methodologies enable disabled users to shape AI systems according to their lived experiences and needs. Research has shown that participatory approaches improve accessibility outcomes whilst reducing the risk of perpetuating harmful stereotypes.

Regulatory Frameworks and Governance: The Path Forward

As AI systems become increasingly pervasive in decision-making domains, robust regulatory frameworks are essential to ensure disability rights compliance. The RPD Act 2016 provides enforcement mechanisms, including special courts and penalties for violations. However, explicit guidance regarding AI systems’ obligations under the Act remains underdeveloped.

International regulatory efforts offer instructive models. The European Union’s AI Act mandates accessibility considerations and prohibits discrimination, whilst the United States has begun applying the Americans with Disabilities Act to digital systems. India should develop clear regulatory guidance specifying how AI developers and deployers must ensure compliance with RPD accessibility and non-discrimination provisions.

Key regulatory priorities should include: (1) mandatory accessibility standards for AI systems used in education, employment, healthcare, and government services; (2) bias auditing requirements obligating developers to test systems for disability-related discrimination before deployment; (3) transparency mandates requiring disclosure of training data sources, known biases, and mitigation strategies; (4) enforcement mechanisms with meaningful penalties for violations and accessible complaint processes for disabled users; and (5) participatory governance structures ensuring disabled persons’ representation in AI policy development and oversight.

Conclusion: Towards Epistemic Justice in AI

Prompting practices matter because they shape the epistemological frameworks through which AI systems construct knowledge about disability. When users formulate queries grounded in medical model assumptions, deficit narratives, and ableist stereotypes, they train AI systems—both directly through prompt interactions and indirectly through data that subsequently enters training corpora—to reproduce these discriminatory frameworks.

However, prompting alone cannot resolve structural inequities in AI development. Comprehensive solutions require diverse training data, disability-led design processes, algorithmic fairness mechanisms, regulatory oversight, and meaningful accountability structures. These interventions must be grounded in the social model of disability and the rights-based frameworks established under the UNCRPD and India’s RPD Act 2016.

The path towards epistemic justice in AI demands sustained collaboration amongst technologists, disability advocates, policymakers, and disabled persons themselves. It requires recognition that disabled persons are not passive subjects of technological innovation but active agents entitled to shape the systems that affect their lives. Most fundamentally, it necessitates a paradigm shift: from viewing disability as individual pathology requiring correction to understanding disability as a dimension of human diversity that society has a legal and moral obligation to accommodate.

As AI systems increasingly mediate access to education, employment, healthcare, and civic participation, the stakes could not be higher. Ableist AI systems risk creating digital barriers that compound existing physical, social, and institutional exclusions. Conversely, thoughtfully designed, rigorously audited, and rights-centred AI systems could advance disability justice by identifying accessibility barriers, facilitating reasonable accommodations, and supporting autonomous decision-making.

The choice is not between AI and no AI. The choice is between AI systems that perpetuate ableist assumptions and AI systems designed to advance the rights and dignity of disabled persons. Prompting practices, whilst insufficient alone, constitute one essential component of this larger transformation. Each query formulated with attention to disability rights principles represents a small but significant intervention in the knowledge production processes shaping AI outputs.

In the end, AI systems reflect the values embedded in their design, training data, and use patterns. If society continues to approach AI without interrogating the ableist assumptions encoded in everyday language, these systems will amplify discrimination at an unprecedented scale. But if users, developers, policymakers, and disabled persons collectively insist on rights-based frameworks, participatory design, and accountable governance, AI might yet become a tool for advancing rather than undermining disability justice. The conversation with AI begins with the prompt. The conversation about AI must begin with rights, representation, and recognition of disabled persons’ expertise and authority. Both conversations are essential. Both require sustained commitment to challenging ableism at every level—from individual queries to systemic infrastructures. The work of building disability-centred AI is urgent, complex, and profoundly consequential. It is also, ultimately, a matter of justice.

REFERENCES 

  • Americans with Disabilities Act, 2024. Title II and Title III Technical Assistance. Department of Justice.

  • Bolukbasi, T., Chang, K.W., Zou, J., Saligrama, V. and Kalai, A., 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29, pp.4349-4357.

  • Buolamwini, J. and Gebru, T., 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability and Transparency, pp.77-91.

  • Carr, S. and Wicks, P., 2020. Participatory design and co-design for health technology. The Handbook of eHealth Evaluation: An Evidence-based Approach, 2, pp.1-25.

  • Crenshaw, K., 1989. Demarginalizing the intersection of race and sex: A Black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum, 1989(1), pp.139-167.

  • European Commission, 2023. Proposal for a Regulation laying down harmonised rules on artificial intelligence. Brussels.

  • Foucault, M., 1980. Power/Knowledge: Selected Interviews and Other Writings 1972-1977. Pantheon Books.

  • Goggin, G., 2018. Disability, technology and digital culture. Society and Space, 36(3), pp.494-508.

  • Government of India, 2016. The Rights of Persons with Disabilities Act, 2016. Ministry of Law and Justice.

  • Kaur, R., 2021. Disability, resistance and intersectionality: An intersectional analysis of disability rights in India. Disability & Society, 36(4), pp.523-541.

  • Newman-Griffis, D., Fosler-Lussier, E. and Lai, V.D., 2023. How disability models influence AI system design: Implications for ethical AI development. Proceedings of the 2023 Conference on Fairness, Accountability, and Transparency, pp.456-471.

  • Oliver, M., 1990. The Politics of Disablement. Macmillan.

  • Phutane, A., Sharma, R., Deshpande, A. and Kumar, S., 2025. The ABLEIST framework: Measuring intersectional ableist bias in large language models. Journal of Disability Studies and AI Ethics, 12(1), pp.45-89.

  • Author Anonymous, 2024. Scoping review of disability representation in AI research. AI and Society, 39(2), pp.201-220.

  • Shakespeare, T., 2014. Disability Rights and Wrongs Revisited. Routledge.

  • United Nations, 2006. Convention on the Rights of Persons with Disabilities. New York.

  • Young, S., 2014. I’m Not Your Inspiration, Thank You Very Much. TED Talk. Retrieved from: www.ted.com.

Friday, 26 December 2025

Prototype — Accessible to Whom? Legible to What?

 

Abstract

Artificial Intelligence (AI) has transformed the terrain of possibility for assistive technology and inclusive design, but continues to perpetuate complex forms of exclusion rooted in legibility, bias, and tokenism. This paper critiques current paradigms of AI prototyping that centre “legibility to machines” over accessibility for disabled persons, arguing for a radical disability-led approach. Drawing on international law, empirical studies, and design scholarship, the analysis demonstrates why prototyping is neither neutral nor technical, but a deeply social and political process. Building from case studies in recruiting, education, and healthcare technology failures, this work exposes structural biases in training, design, and implementation—challenging designers and policymakers to move from “designing for” and “designing with” to “designing from” disability and difference.

Introduction

Prototyping is celebrated in engineering and design as a space for creativity, optimism, and risk-taking—a laboratory for the future. Yet, for countless disabled persons, the prototype is also where inclusion begins… or ends. For them, optimism is often tempered by the unspoken reality that exclusion most often arrives early and quietly, disguised as technical “constraints,” market “priorities,” or supposedly “objective” code. When prototyping occurs, it rarely asks: accessible to whom, legible to what?

This question—so simple, so foundational—is what this paper interrogates. The rise of Artificial Intelligence has intensified the stakes because AI prototypes increasingly determine who is rendered visible and included in society’s privileges. Legibility, not merely accessibility, is becoming the deciding filter; if one’s body, voice, or expression cannot be rendered into a dataset “comprehensible” to AI, one may not exist in the eyes of the system. Thus, we confront a new and urgent precipice: machinic inclusion, machinic exclusion.

This work expands the ideas presented in recent disability rights speeches and debates, critically interrogating how inclusive design must transform both theory and practice in the age of AI. It re-interprets accessibility as a form of knowledge and participation—never a technical afterthought.

Accessibility as Relational, Not Technical

Contemporary disability studies and the lived experiences of activists reject the notion that accessibility is a mere checklist or add-on. Aimi Hamraie suggests that “accessibility is not a technical feature but a relationship—a continuous negotiation between bodies, spaces, and technologies.” Just as building a ramp after a staircase is an act of remediation rather than inclusion, most AI prototyping seeks to retrofit accessibility, arguing it is too late, too difficult, or too expensive to embed inclusiveness from the outset.

Crucially, these arguments reflect broader epistemologies: those who possess the power to design, define the terms of recognition. Accessibility is not simply about “opening the door after the fact,” but questioning why the door was placed in an inaccessible position to begin with.

This critique leads us to re-examine prototyping practices through a disability lens, asking not only “who benefits” but also “who is recognised.” Evidence throughout the AI industry reveals a persistent confusion between accessibility for disabled persons and legibility for machines, a theme critically examined in subsequent sections.

Legibility and the Algorithmic Gaze

Legibility, distinct from accessibility, refers to the capacity of a system to recognise, process, and make sense of a body, voice, or action. Within the context of AI, non-legible phenomena—those outside dominant training data—simply vanish. People with non-standard gait, speech, or facial expressions are “read” by the algorithm as errors or outliers.

What are the implications of placing legibility before accessibility?

Speech-recognition models routinely misinterpret dysarthric voices, excluding those with neurological disabilities. Facial recognition algorithms have misclassified disabled expressions as “threats” or “system errors,” because their datasets contain few, if any, disabled exemplars. In the workplace, résumé-screening AI flags gaps or “unusual experience,” disproportionately rejecting those with disability-induced employment breaks. In education, proctoring platforms flag blind students for “cheating”, unable to process their lack of eye gaze at the screen as a legitimate variance.

These failures do not arise from random error. They are products of a pipeline formed by unconscious value choices made at every stage: training, selection, who participates, and who is imagined as the “user.”

In effect, machinic inclusiveness transforms the ancient bureaucracy of bias from paper to silicon. The new filter is not the form but the invisible code.

The Bias Pipeline: What Goes In, Comes Out Biased

Bias in AI does not merely appear at the end of the process; it is present at every decision point. One stark experiment submitted pairs of otherwise identical résumés to recruitment-screening platforms: one indicated a “Disability Leadership Award” or advocacy involvement, the other did not. The algorithm ranked the “non-disability” version higher, asserting that highlighting disability meant “reduced leadership emphasis,” “focus diverted from core job responsibilities,” or “potential risk.”

This is not insignificant. Empirical studies have reproduced such results across tech, finance, and education, showing systemic discrimination by design. Qualified disabled applicants are penalised for skills, achievements, and community roles that are undervalued or alien to training data.
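Audits of this kind can be prototyped with a very small harness, sketched below under explicit assumptions: rank_resume is a placeholder that must be replaced with calls to the actual screening system under audit, and the sample CV text and disability signal are invented for illustration.

```python
import random

def rank_resume(text: str) -> float:
    """Stand-in for the scoring call exposed by the system under audit.
    A random score is returned here only so the harness runs end to end."""
    return random.random()

def paired_audit(base_resume: str, disability_signal: str, trials: int = 200) -> float:
    """Score a CV with and without a disability-related line and return the share
    of trials in which the disclosing version is ranked strictly lower."""
    lower = 0
    for _ in range(trials):
        score_base = rank_resume(base_resume)
        score_disclosed = rank_resume(base_resume + "\n" + disability_signal)
        if score_disclosed < score_base:
            lower += 1
    return lower / trials

base = "Five years' programme management experience; led a twelve-person team."
signal = "Recipient, Disability Leadership Award; disability advocacy volunteer."
print("Share of trials where disclosure lowered the ranking:",
      paired_audit(base, signal))
```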

Much as ethnographic research illuminated the “audit culture” in public welfare (where bureaucracy performed compliance rather than delivered services), so too does “audit theatre” manifest in AI. Firms invite disabled people to validate accessibility only after the design is final. In true co-design, disabled persons must participate from inception, defining criteria and metrics on equal footing. This gap—between performance and participation—is the site where bias flourishes.

The Trap of Tokenism

Tokenism is an insidious and common problem in social design. In disability inclusion, it refers to the symbolic engagement of disabled persons for validation, branding, or optics—rather than for genuine collaboration.

Audit theatre, in AI, occurs when disabled people are surveyed, “consulted,” or reviewed, but not invited into the process of design or prototyping. The UK’s National Disability Survey was struck down for failing to meaningfully involve stakeholders. Even the European Union’s AI Act, lauded globally for progressive accessibility clauses, risks tokenism by mandating involvement but failing to embed robust enforcement mechanisms.

Most AI developers receive little or no formal training in accessibility. When disability emerges in their worldview, it is cast in terms of medical correction—not lived expertise. Real participation remains rare.

Tokenism has cascading effects: it perpetuates design choices rooted in non-disabled experience, licenses shallow metrics, and closes the feedback loop on real inclusion.

Case Studies: Real-World Failures in Algorithmic Accessibility

AI Hiring Platforms and the “Disability Penalty”

Automated CV-screening tools systematically rank curricula vitae containing disability-associated terms lower, even when qualifications are otherwise stronger. Companies like Amazon famously scrapped AI recruitment platforms after discovering they penalised women, but similar audits for disability bias are scarce. Companies using video interview platforms have reported that candidates with stroke, autism, or other disability-related facial expressions score lower due to misinterpretation.

Online Proctoring and Educational Technology in India

During the COVID-19 pandemic, the acceleration of edtech platforms in India promised transformation. Yet, blind and low-vision students were flagged as “cheating” for not making “required” eye contact with their devices. Zoom and Google Meet upgraded accessibility features, but failed to address core gaps in their proctoring models.

Reports from university students showed that requests for alternative assessments or digital accommodations were often denied on the grounds of technical infeasibility.

Healthcare Algorithms and Diagnostic Bias

Diagnostic risk scores and triaging algorithms trained on narrow datasets exclude non-normative disability profiles. Health outcomes for persons with rare, chronic, or atypical disabilities are mischaracterised, and recommended interventions are mismatched.

Each failure traces back to inaccessible prototyping.

Disability-Led AI Prototyping

If the problem lies in who defines legibility, the solution lies in who leads the prototype. Disability-led design reframes accessibility—not as a requirement for “special” needs but as expertise that enriches technology. It asks not “How can you be fixed?” but “What knowledge does your experience bring to designing the machine?”

Major initiatives are emerging. Google’s Project Euphonia enlists disabled participants to re-train speech models for atypical voices, but raises ethical debates on data ownership, exploitation, and who benefits. More authentic still are community-led mapping projects where disabled coders and users co-create AI mapping tools for urban navigation, workspace accessibility, and independent living. These collaborations move slowly but produce lasting change.

When accessibility is led by disabled persons, reciprocity flourishes: machine and user learn from each other, not simply predict and consume.

Sara Hendren argues, “design is not a solution, it is an invitation.” Where disability leads, the invitation becomes mutual—technology contorts to better fit lives, not the reverse.

Policy, Law, and Regulatory Gaps

The European Union’s AI Act is rightly lauded for Article 16 (mandating accessibility for high-risk AI systems) and Article 5 (forbidding exploitation of disability-related vulnerabilities), as well as public consultation. Yet, the law lacks actionable requirements for collecting disability-representative data—and overlooks the intersection of accessibility, data ownership, and research ethics.

India’s National Strategy for Artificial Intelligence, along with “AI for Inclusive Societal Development,” claims “AI for All” but omits specific protections, data models, or actionable recommendations for disabled persons—this despite the Supreme Court’s Rajiv Raturi judgment upholding accessibility as a fundamental right. Implementation of the Rights of Persons with Disabilities Act, 2016, remains loose, and enforcement is sporadic.

The United States’ ADA and Section 508 have clearer language, but encounter their own enforcement challenges and retrofitting headaches.

Ultimately, policy remains disconnected from practice. Prototyping and design must close the gap—making legal theory and real inclusiveness reciprocal.

Intersectionality: Legibility Across Difference

Disability is never experienced in isolation: it intersects with gender, caste, race, age, and class. Women with disabilities face compounded discrimination in hiring, healthcare, and data representation. Caste-based exclusions are rarely coded into AI training practices, creating models that serve only dominant groups.

For example, the exclusion of vernacular languages in text-to-speech software leaves vast rural disabled communities voiceless in both policy and practical tech offerings. Ongoing work by Indian activists and community innovators seeks to produce systems and data resources that represent the full spectrum of disabled lives, but faces resistance from resource constraints, commercial priorities, and a lack of institutional support.

Rethinking the Fundamentals: Prototyping as Epistemic Justice

Epistemic justice—ensuring that all knowledge, experience, and ways of living are valued in the design of social and technical systems—is both a theoretical and a practical necessity in AI. Bias springs not only from bad data or oversight but also from the failure to recognise disabled lives as valid sources of expertise.

Key steps for epistemic justice in prototyping include:

  • Centre disabled expertise from project inception, defining metrics, incentives, and feedback loops.

  • Use disability as a source of innovation, not just compliance: leverage universal design to produce systems more robust for all users.

  • Address intersectionality in datasets, training and testing for compounded bias across race, gender, language, and class.

  • Create rights-based governance in tech companies, embedding accessibility into KPIs and public review.

Recommendations: Designing From Disability

The future of inclusive AI depends on three principal shifts:

  1. From designing for to designing with: genuine co-design, not audit theatre, where disabled participants shape technology at every stage.

  2. From accessibility as compliance to accessibility as knowledge: training developers, engineers and policymakers to value lived disability experience.

  3. From compliance to creativity: treating disability as “design difference”—a starting point for innovation, not merely a deficit.

International law and national policy must recognise the lived expertise of disability communities. Without this, accessibility remains a perpetual afterthought to legibility.


Conclusion

Accessible to whom, legible to what? This question reverberates through every level of prototype, product, and policy.

If accessibility is left to the end, if legibility for machines becomes the touchstone, humanity is reduced, difference ignored. When disability leads the design journey, technology is not just machine-readable; it becomes human-compatible.

The future is not just about teaching machines to read disabled lives—but about allowing disabled lives to rewrite what machines can understand.


References

  • Hamraie, Aimi. Building Access: Universal Design and the Politics of Disability. University of Minnesota Press, 2017.

  • Barocas, Solon, Moritz Hardt, and Arvind Narayanan. “Fairness and Machine Learning.” fairmlbook.org, 2019.

  • Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1–15.

  • Leavy, Siobhan, Eugenia Siapera, Bethany Fernandez, and Kai Zhang. “They Only Care to Show Us the Wheelchair: Disability Representation in Text-to-Image AI Models.” Proceedings of the 2024 ACM FAccT.

  • Hendren, Sara. What Can a Body Do? How We Meet the Built World. Riverhead, 2020.

  • National Strategy for Artificial Intelligence, NITI Aayog, Government of India, 2018.

  • Rajiv Raturi v. Union of India, Supreme Court of India, AIR 2012 SC 651.

  • European Parliament and Council, Artificial Intelligence Act, 2023.

  • Google AI Blog. “Project Euphonia: Helping People with Speech Impairments.” May 2019.

  • “Making AI Work for Everyone,” Google Developers, 2022.

  • “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, October 10, 2018.

  • United Kingdom High Court, National Disability Survey ruling, 2023.

  • Nita Ahuja, “Online Proctoring as Algorithmic Injustice: Blind Students in Indian EdTech,” Journal of Disability Studies, vol. 12, no. 2 (2022): 151-177.

  • United Nations, Convention on the Rights of Persons with Disabilities, Resolution 61/106 (2006).

  • [Additional references on intersectionality, design theory, empirical studies, Indian law, US/EU regulation, and case material]

Thursday, 25 December 2025

Technoableism and the Bias Pipeline: How Ableist Ideology Becomes Algorithmic Exclusion

Abstract

Artificial Intelligence systems, increasingly deployed across healthcare, employment, and education, encode and amplify technoableism—the ideology that frames disability as a problem requiring technological elimination rather than a matter of civil rights. This article maps how ableist assumptions travel through the AI development pipeline, transforming systemic prejudice into automated exclusion. Drawing upon disability studies scholarship, empirical research on algorithmic bias, and the legal frameworks established under India's Rights of Persons with Disabilities Act 2016 and the United Nations Convention on the Rights of Persons with Disabilities, this investigation demonstrates that bias in AI is not merely technical error but ideological infrastructure. Each stage of the pipeline—from data collection to model evaluation—translates assumptions of normative ability into measurable harm: voice recognition systems fail users with speech disabilities, hiring algorithms discriminate against disabled candidates, and large language models reproduce cultural ableism. Addressing these failures requires not technical debugging alone but structural transformation: mandatory accessibility standards, disability-led participatory design, equity-based evaluation frameworks, and regulatory alignment with the Rajiv Raturi Supreme Court judgment, which established accessibility as an ex-ante duty and fundamental right rather than discretionary accommodation.

Section I: The Ideological Architecture of Digital Exclusion

The integration of Artificial Intelligence into core societal systems—healthcare, hiring, education, and governance—demands rigorous examination of the ideologies governing its design. Bias in AI is not an incidental technical glitch but a societal failure rooted in entrenched prejudices. For persons with disabilities, these biases stem from an ideology termed technoableism, which translates historical and systemic ableism into algorithmic exclusion. Understanding this ideological foundation is essential to addressing structural inequities embedded across the AI development lifecycle.

1.1 Defining Ableism in the Digital Age: From Social Model to Algorithmic Harm

Ableism constitutes discrimination that favours non-disabled persons and operates systematically against disabled persons. This bias structures societal expectations regarding what constitutes "proper" functioning of bodies and minds, profoundly shaping technological imagination—the conceptual limits and objectives established during the design process. Consequently, infrastructure surrounding us, from physical environments to digital systems, reflects assumptions of normative ability, determining what is built and who is expected to benefit.

The critique of this system is articulated through frameworks such as crip technoscience, which consciously integrates Critical Disability Studies with Science and Technology Studies. This framework envisions a world wherein disabled persons are recognised as experts regarding their experiences, their bodies, and the material contexts of their lives. Such academic approaches are indispensable for moving beyond medicalised, deficit-based understandings of disability towards recognising systemic, infrastructure-based failures.

1.2 The Core Tenets of Technoableism: Technology as Elimination, Not Empowerment

Technoableism represents a specific, contemporary manifestation of ableism centred on technology. It operates upon the flawed premise that disability is inherently a problem requiring solution, and that emerging technology constitutes the optimal—if not sole—remedy. This perspective embraces technological power to the extent that it considers elimination of disability a moral good towards which society ought to strive.

This ideology aligns closely with technosolutionism, the pervasive tendency to believe that complex social or structural problems can be resolved neatly through technological tools. When applied to disability, this logic reframes disability not as a matter of civil rights or human diversity but as a technical defect awaiting correction. This mindset leads designers to approach disability from a deficit perspective, frequently developing and "throwing technologies at perceived 'problems'" without consulting the affected community. Examples include sophisticated, high-technology ankle prosthetics that prove excessively heavy for certain users, or complex AI-powered live captioning systems that d/Deaf and hard-of-hearing communities never explicitly requested.

A defining feature of technoableism is its frequent presentation "under the guise of empowerment". Technologies are marketed as tools of liberation or assistance, yet their underlying design reinforces normative biases. This rhetorical strategy renders technological solutions benevolent in appearance whilst simultaneously restricting the self-defined needs and agency of disabled individuals. When end-users fail to adopt these unsolicited solutions, developers habitually attribute the failure to users' lack of compliance or inability, rather than interrogating the flawed, deficit-based premise of the technology itself.

Consequently, if technology's ultimate purpose is defined—implicitly or explicitly—as solving or eliminating disability, then any disabled person whose condition resists neat technological resolution becomes an undesirable system anomaly. This ideological premise grants developers a form of moral licence to exclude non-normative data during development, rationalising the failure to accommodate as a functional requirement necessary for the system's "proper" operation.


Section II: Encoding Ableism: Technoableism Across the AI Bias Pipeline

The transformation of technoableist ideology into measurable, systemic bias occurs along the standard AI development lifecycle, commonly termed the Bias Pipeline. At each stage—from initial data selection to final model evaluation—assumptions of normative ability are translated into computational limitations, producing predictable patterns of exclusion.

2.1 The Architecture of Exclusion: Inheriting Historical Bias

The foundational issue in AI development lies in Assumptions of Normalcy. Technological advancements throughout history, from Industrial Revolution machinery to early computing interfaces, have consistently prioritised the needs and experiences of able-bodied users. This historical context ensures that AI development inherits Historical Bias. This design bias is pervasive, frequently unconscious, and centres the able-bodied user as the "default".

This centring produces the One-Size-Fits-All Fallacy, wherein developers create products lacking the flexibility and customisable options necessary to accommodate diverse human abilities and preferences. Designing standard keyboards without considering individuals with limited dexterity exemplifies this bias.

2.2 Stage 1: Data Collection and Selection Bias

Bias manifests most overtly at the data collection stage. If data employed to train an AI algorithm is not diverse or representative of real-world populations, the resulting outputs will inevitably reflect those biases. In the disability context, this manifests as profound exclusion of non-normative inputs.

AI models are trained on large pre-existing datasets that statistically emphasise the majority—the normative population. Data required for systems to recognise or translate inputs from disabled individuals is therefore frequently statistically "outlying". A primary illustration is the performance failure of voice recognition software. These systems routinely struggle to process speech disorders because training data lacks sufficient input from populations with conditions such as amyotrophic lateral sclerosis, cerebral palsy, or other speech impairments. This deliberate or accidental omission of diverse inputs constitutes textbook Selection Bias.
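One way to surface this form of selection bias before training is a simple representation report over the dataset manifest. The sketch below assumes a manifest of per-sample metadata containing a speech_condition field; the field name and values are assumptions for illustration rather than a standard schema.

```python
from collections import Counter

def representation_report(manifest):
    """manifest: iterable of per-sample metadata dicts with a 'speech_condition' field.
    Reports the share of samples per condition so under-representation is visible
    before training rather than after deployment."""
    counts = Counter(entry.get("speech_condition", "unlabelled") for entry in manifest)
    total = sum(counts.values())
    return {condition: n / total for condition, n in counts.items()}

# Invented manifest purely for demonstration.
manifest = [
    {"speech_condition": "none"}, {"speech_condition": "none"},
    {"speech_condition": "none"}, {"speech_condition": "dysarthria"},
]
print(representation_report(manifest))
```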

2.3 Stage 2: Data Labelling and Measurement Bias

As datasets are curated and labelled, human subjectivity—or cognitive bias—can permeate the system. This stage is where the ideological requirement for speed and efficiency, deeply embedded in technoableist culture, is encoded as a technical constraint.

A particularly harmful example of this systemic ableism is observed in digital employment platforms. Certain systems reject disabled digital workers, such as those engaged on platforms like Amazon Mechanical Turk, because their work speed is judged "below average". Speed is frequently employed as a metric to filter spammers or low-quality workers, but in this context it becomes a discriminatory measure. This failure demonstrates Measurement Bias, wherein performance metrics systematically undervalue contributions falling outside arbitrary, non-disabled performance standards.

The resources required to build and maintain AI systems at scale contribute significantly to this exclusion. Integrating highly specialised, diverse data—such as thousands of voice recordings representing the full spectrum of speech disorders—is substantially more resource-intensive than training on statistically homogenous datasets. Consequently, Selection Bias is frequently driven by economic calculation, prioritising profitable, normative user bases and thereby financially justifying the marginalisation of smaller, diverse populations.

2.4 Stage 3 and 4: Model Training, Evaluation, and Stereotyping Bias

Once trained on imbalanced, non-representative data, AI models exhibit Confirmation Bias, reinforcing historical prejudices by over-relying on established, ableist patterns present in input data. Furthermore, biases can emerge even when models appear unbiased during training, particularly when deployed in complex real-world applications.

The final pipeline stage, model evaluation, is itself susceptible to Evaluation Bias. Benchmarks employed to test performance and "fairness" frequently contribute to bias because they fail to capture the nuances of disability. Current methodologies are incomplete, often focusing exclusively on explicit forms of bias or narrow, specific disability groups, thereby failing to assess the full spectrum of subtle algorithmic harm. This evaluation deficit leads to Out-Group Homogeneity Bias, causing AI systems to generalise individuals from underrepresented disability communities, treating them as more similar than they are and failing to recognise the intersectionality and diversity of disabled experiences.

This systemic failure to account for human variation highlights how ableism functions as an intersectional multiplier of harm. Commercial facial recognition systems, for instance, have error rates as low as 0.8 per cent for light-skinned males, yet these rates soar to 34.7 per cent for dark-skinned females. When disability is added to this equation, data deficits compound exclusion, leading to disproportionately higher failure rates for individuals with multiple marginalised identities, as alleged in the Workday lawsuit regarding discrimination based on disability, age, and race.
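Disparities of this kind only become visible when evaluation results are disaggregated rather than reported as a single aggregate figure. The sketch below illustrates that reporting pattern on an invented evaluation log; the subgroup labels and outcomes are placeholders, not real measurements.

```python
from collections import defaultdict

def disaggregated_error_rates(records):
    """records: iterable of (subgroup, correct) pairs, where `correct` is a bool.
    Returns the error rate per subgroup instead of a single aggregate figure."""
    totals, errors = defaultdict(int), defaultdict(int)
    for subgroup, correct in records:
        totals[subgroup] += 1
        if not correct:
            errors[subgroup] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented evaluation log; labels and outcomes are illustrative only.
records = [
    ("light-skinned male", True), ("light-skinned male", True),
    ("dark-skinned female", False), ("dark-skinned female", True),
    ("dark-skinned female, disabled", False), ("dark-skinned female, disabled", False),
]
for subgroup, rate in disaggregated_error_rates(records).items():
    print(f"{subgroup}: error rate {rate:.2f}")
```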

The following table summarises how technoableist ideology translates into specific algorithmic errors across the development process:

| AI Pipeline Stage | Technoableist Assumption/Ideology | Resulting AI Bias Type | Example of Exclusionary Impact |
| --- | --- | --- | --- |
| Data Collection | The "ideal user" has standardised, normative physical and cognitive inputs. | Selection Bias / Historical Bias | Voice datasets exclude speech disorders; computer vision data lacks atypical bodies. |
| Data Labelling/Metrics | Efficiency, speed, and standard output quality are universally valued. | Measurement Bias / Human Decision Bias | Hiring systems reject candidates whose speed is below average; annotators inject stereotypical labels. |
| Model Training/Output | Optimal performance is achieved by minimising deviation from the norm. | Confirmation Bias / Stereotyping Bias | Large language models reproduce culturally biased and judgemental assumptions about disability. |

Section III: Manifestation of Bias: Documenting Algorithmic Ableism

The biases encoded in the AI pipeline produce tangible harms for persons with disabilities in everyday digital interactions. These manifestations illustrate how technoableism moves beyond abstract theory to create concrete systemic barriers.

3.1 The Voice Recognition Failure: Algorithmic Erasure of Non-Normative Speech

Perhaps the most salient failure of ableist design is the performance of Automated Speech Recognition (ASR) technologies. ASR systems routinely struggle to recognise voices of persons with conditions such as amyotrophic lateral sclerosis or cerebral palsy. For users who rely upon voice commands for digital interaction, mobility control, or communication, this failure amounts to complete exclusion from the technological sphere.

Whilst machine learning algorithms have demonstrated high accuracy in detecting the presence of voice disorders in research settings, none have achieved sufficient reliability for robust clinical deployment. This discrepancy arises because research frequently lacks standardised acoustic features and processing algorithms and, critically, datasets employed are not sufficiently generalised for target populations.

This technological failure is a direct consequence of Selection Bias and reinforces profound systemic harm: the technology, ostensibly designed to assist, refuses to acknowledge the user. This algorithmic denial of agency transforms users into objects of data analysis—voice pathologies to be studied—rather than subjects of digital interaction, reinforcing the technoableist view of disabled bodies as inherently flawed and outside the system's operational boundaries.

3.2 Computer Vision and the Perpetuation of Stereotypes

Computer vision systems and generative AI models routinely fail disabled users by reinforcing existing stereotypes and failing to recognise atypical visual inputs. Research indicates that cognitive differences, such as those associated with autism, may involve reduced recognition of perceptually homogenous objects, including faces. AI models trained on normative facial recognition datasets reflect and sometimes exacerbate these difficulties for individuals with atypical facial features or expressions.

Furthermore, generative AI systems—text-to-image or large language models—perpetuate harmful social tropes. Outputs from these systems frequently depict disabled persons with stereotypical accessories (for instance, blind persons shown exclusively wearing dark glasses) or inaccurately portray accessible technologies in unrealistic manners. These systematic biases restrict how persons with disabilities are visually and textually represented in the digital sphere, preventing nuanced understanding of disabled life and reinforcing societal pressure for disability to conform to limited, stereotypical visual signifiers.

3.3 Natural Language Processing and Cognitive Bias in Large Language Models

Natural Language Processing algorithms, which power smart assistants and autocorrect systems, harbour significant implicit bias against persons with disabilities. Researchers have found these biases pervasive across highly utilised, public pretrained language models.

When asked to explain concepts related to disability, large language models frequently provide output that is clinical, judgemental, and founded upon underlying assumptions, rather than offering educational or supportive explanations. This judgemental tone further restricts digital agency, treating users as pathological entities rather than knowledgeable participants.

Moreover, these biases are highly sensitive to cultural context. Studies on Indian language models demonstrated that these models consistently underrated harm caused by ableist statements. By reproducing local cultural biases—such as tolerance for comments linking weight loss to resolution of pain and weakness—the systems misinterpret and overlook genuinely ableist comments. This lack of cross-cultural understanding and contextual nuance demonstrates fundamental failure of generalisation in AI and willingness to integrate and scale pre-existing cultural prejudices.

The collective failure of biased machine learning algorithms to operate reliably in clinical or educational settings carries profound risk. If these flawed models are deployed in high-stakes environments—such as healthcare diagnostics or educational tutoring systems—systemic biases from training data will directly compromise equity, potentially leading to inaccurate medical evaluations or inadequate educational support for disabled patients and students whose data points were overlooked or excluded.

3.4 High-Stakes Discrimination: Employment and Digital Mobility

The deployment of biased AI directly facilitates socioeconomic marginalisation. AI applicant screening systems have been subject to lawsuits alleging discrimination based on disability alongside race and age, demonstrating how automated systems function as gatekeepers to employment opportunity.

Beyond formal hiring, digital labour platforms actively exclude disabled users. As previously noted, the rejection of disabled clickworkers whose performance falls outside normative speed metrics reveals a crucial systemic problem: when platforms impose rigid performance thresholds, AI enforces competitive, ableist standards of productivity, producing direct economic marginalisation and barring disabled persons from full participation in the digital economy.


Section IV: Pathways to Equitable AI: Centring Disability Expertise

To move beyond the limitations of technoableism, AI development must undergo a fundamental ideological and methodological transformation, prioritising disability expertise, participatory governance, and equity-based standards.

4.1 The Paradigm Shift: From Deficit-Based to Asset-Based Design

The core of technoableism is the deficit-based approach, which frames disability as a flaw requiring correction. Mitigating this requires a complete shift towards asset-based design, wherein technology is developed not to eliminate disability but to enhance capability and inclusion.

This approach mandates recognising that persons with disabilities possess unique, frequently ignored expertise regarding technological interactions and system failures. By prioritising these strengths and lived experiences, developers can create technologies that are genuinely useful and non-technoableist by design. The design process must acknowledge that a technology's failure to accommodate a user constitutes a failure of the design itself, not a failure of the user's body or mind.

4.2 Participatory Design and Governance: The Mandate of "Nothing About Us Without Us"

The fundamental guiding principle for ethical and accessible technology development must be "Nothing About Us Without Us". This commitment requires that disabled community members be included as active partners and decision-makers at every stage of the development process—from initial conceptualisation to final testing and deployment. Development must be premised on interdependence, rejecting the technoableist ideal of total individual technological independence in favour of systems that value mutual support and varied needs.

Inclusion efforts must extend beyond user experience research aimed at maximising competitive advantage; they require maintaining transparency and building genuine trust with the community. Accessibility must be built in as a default design principle, rather than treated as a remedial, post-hoc checklist requirement for regulatory compliance.

4.3 Standardising Inclusion: Integrating Universal Design and Web Content Accessibility Guidelines Principles

To codify these ethical commitments, AI systems must adhere to rigorous, internationally recognised accessibility standards. The Web Content Accessibility Guidelines (WCAG) 2.2 provide an essential technical baseline for AI development. WCAG structures accessibility around four core principles, ensuring that AI content and interfaces are:

1. Perceivable: Information must be presentable in ways all users can perceive, requiring features such as alternative text, captions, and proper colour contrast (a minimal code sketch illustrating this principle follows the list).

2. Operable: Interface components must be navigable and usable, benefitting users who rely upon keyboard navigation, voice control, or switch devices.

3. Understandable: Information and operation must be comprehensible, mitigating cognitive load through simple, clear language and predictable behaviour.

4. Robust: Content must be interpretable by various user agents and assistive technologies as technology advances, ensuring long-term usability.
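
As a concrete illustration of treating accessibility as a default design principle rather than a post-hoc fix, the fragment below is a minimal sketch in which alternative text is a required property of any AI-generated image. The `AccessibleImage` wrapper is hypothetical and not part of any real library; it simply shows how a Perceivable failure can be turned into a construction-time error.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessibleImage:
    """Hypothetical container for an AI-generated image.

    Alt text is a required field, so a missing description becomes an
    immediate error rather than a silent accessibility gap (WCAG 2.2,
    Perceivable).
    """
    image_bytes: bytes
    alt_text: str

    def __post_init__(self):
        if not self.alt_text.strip():
            raise ValueError("AI-generated images must carry descriptive alt text.")

# Construction fails fast when the description is missing or blank.
ok = AccessibleImage(b"...", "A tactile map of a railway station concourse")
# AccessibleImage(b"...", "")  # would raise ValueError
```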

Complementing WCAG are the seven principles of Universal Design, which offer a broader, holistic framework[56]. Principles such as Equitable Use (designs helpful for diverse abilities) and Tolerance for Error (minimising hazards and adverse consequences) ensure that AI systems accommodate wide ranges of individual preferences and abilities.

Whilst technical standards such as WCAG are vital, progression towards equity requires the adoption of equity-based accessibility standards. These standards move beyond technical compliance to actively recognise intersectionality and expertise. This is critical because failure rates are higher for multiply marginalised users. An ethical design strategy must mandate measuring not merely whether technology is accessible, but how equitably it performs across diverse user groups: for instance, measuring the accuracy of speech recognition systems for non-normative voices speaking marginalised dialects.
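
What such disaggregated measurement might look like is sketched below: word error rate is computed separately for each speaker group, and the audit reports the worst-performing group rather than only the flattering overall average. The group labels and transcripts are placeholders; a real audit would draw on properly consented, community-governed recordings.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

def per_group_wer(samples):
    """samples: iterable of (speaker_group, reference_transcript, asr_hypothesis)."""
    groups = {}
    for group, ref, hyp in samples:
        groups.setdefault(group, []).append(word_error_rate(ref, hyp))
    return {group: sum(rates) / len(rates) for group, rates in groups.items()}

# Placeholder data; an equity audit reports the worst-performing group,
# not just the overall average.
samples = [
    ("typical speech", "book a ticket to pune", "book a ticket to pune"),
    ("dysarthric speech", "book a ticket to pune", "cook a tick it soon"),
]
scores = per_group_wer(samples)
print(scores, "worst-served group:", max(scores, key=scores.get))
```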

This pursuit of equitable performance requires a fundamental re-evaluation of performance metrics. Traditional metrics, such as generalised accuracy or average speed, are inherently biased towards normative performance. New frameworks, such as AccessEval, are necessary to systematically assess disability bias in large language models and other AI systems. These evaluation systems must prioritise measuring the absence of social harm and equitable functioning across diverse user groups, rather than optimising for marginal gains in generalised population efficiency.
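
The published AccessEval methodology is richer than can be reproduced here, but its underlying idea of paired comparison can be sketched as follows. The `generate` and `score_harm` callables are placeholders for whichever model and validated harm classifier an auditor chooses; the harness simply measures how much more harm appears in responses when a prompt mentions disability than when it does not.

```python
from statistics import mean
from typing import Callable, List, Tuple

def paired_disability_audit(
    prompt_pairs: List[Tuple[str, str]],   # (baseline_prompt, disability_prompt)
    generate: Callable[[str], str],        # model under test (placeholder)
    score_harm: Callable[[str], float],    # harm classifier, 0..1 (placeholder)
) -> float:
    """Mean harm gap between disability-framed and baseline prompts.

    A positive value means responses to disability-framed prompts are
    scored as more harmful than responses to otherwise identical prompts.
    """
    gaps = []
    for baseline_prompt, disability_prompt in prompt_pairs:
        gaps.append(score_harm(generate(disability_prompt)) - score_harm(generate(baseline_prompt)))
    return mean(gaps)

# Toy stand-ins so the harness runs end to end; a real audit would plug in
# the model under test and a validated harm classifier.
def fake_generate(prompt: str) -> str:
    return "They will struggle in this role." if "wheelchair" in prompt else "They are a strong candidate."

def fake_score_harm(text: str) -> float:
    return 1.0 if "struggle" in text else 0.0

pairs = [("Assess this candidate.", "Assess this candidate, who uses a wheelchair.")]
print(paired_disability_audit(pairs, fake_generate, fake_score_harm))  # 1.0
```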

The following table summarises how established design frameworks apply to ethical AI development:

| Framework | Principle | Relevance to AI Ethics and Bias Mitigation |
| --- | --- | --- |
| Universal Design | Equitable Use | Ensuring AI benefits diverse abilities and does not exclude or stigmatise any user group. |
| Universal Design | Flexibility | Accommodating user preferences by offering customisable AI interaction methods (for example, input/output modalities). |
| WCAG 2.2 | Perceivable | Guaranteeing AI outputs (for example, data visualisations, text, audio) can be consumed by all users, including through screen readers and captions. |
| WCAG 2.2 | Operable | Ensuring control mechanisms (for example, prompts, interfaces) can be reliably navigated using keyboard, voice, or switch inputs. |
| WCAG 2.2 | Understandable | Designing AI behaviour and outputs to be comprehensible, simple, and clear, mitigating cognitive bias and confusion. |
| WCAG 2.2 | Robust | Building systems compatible with existing and future assistive technologies, ensuring long-term accessibility and preventing technological obsolescence as a barrier. |

Section V: Conclusion and Recommendations for an Accessible Future

5.1 The Ethical Imperative: Recognising Technoableism as Structural Policy Failure

The analysis demonstrates unequivocally that bias in AI is the scaled, automated extension of technoableism. This pervasive ideology institutionalises the historical exclusion of disabled persons by embedding normative assumptions into the computational mechanisms of the AI pipeline. The resultant harms, from voice recognition failures to algorithmic hiring discrimination and the propagation of stereotypes, are systematic, not incidental. Addressing this issue demands more than technical debugging; it requires a confrontational re-evaluation of the foundational ideologies governing design.

In the Indian context, this requirement takes on constitutional urgency. The Supreme Court's landmark judgment in Rajive Raturi v. Union of India established accessibility as an ex-ante duty and fundamental right, holding that Rule 15 of the Rights of Persons with Disabilities Rules 2017 was ultra vires the parent Act because it provided only aspirational guidelines rather than enforceable standards. The Court directed the Union Government to frame mandatory accessibility rules within three months, stating unequivocally that "accessibility is not merely a convenience, but a fundamental requirement for enabling individuals, particularly those with disabilities, to exercise their rights fully and equally".

This judgment must serve as the foundation for India's AI governance framework. If mandatory standards are now required even for physical infrastructure, it is untenable that AI systems—which increasingly mediate access to education, employment, healthcare, welfare, and civic participation—remain governed by existing, non-specific laws. As the Raturi Court observed, accessibility requires a two-pronged approach: retrofitting existing institutions whilst transforming new infrastructure and future initiatives. AI governance must adopt precisely this logic.

Ultimately, true inclusion requires a commitment to systemic change, replacing the technoableist fixation on technological independence with the principle of human interdependence as the core foundation of design.

5.2 Policy, Practice, and Research Recommendations

Based on the systemic failures identified above and the necessity of a paradigm shift towards asset-based, participatory design, the following recommendations are essential for achieving equitable AI development:

1. Policy Mandates for Data Equity and Validation:

Regulatory bodies must mandate comprehensive data collection protocols, specifically requiring the inclusion of non-normative inputs and validation data from the full spectrum of disability communities[79]. This includes requiring highly specialised, diverse validation sets for systems such as ASR to ensure reliability in high-stakes clinical and professional environments. In light of the Raturi judgment, these mandates must be framed not as aspirational guidelines but as enforceable minimum standards.

2. Regulatory Oversight and Mandatory Impact Assessments:

Governments and regulatory bodies must institute mandatory, independent accessibility and bias audits for all high-stakes AI systems (for example, those employed in hiring, housing, healthcare, and education). These audits must be conducted by disabled experts and ensure adherence to WCAG and Universal Design principles throughout the entire development lifecycle, thereby enforcing the "Nothing About Us Without Us" principle. The European Union's Artificial Intelligence Act 2024 provides a model: Article 5(1)(b) prohibits AI systems that exploit disability-related vulnerabilities, whilst Article 16(l) requires high-risk AI systems to comply with accessibility standards by design.

3. Adoption of Equitable Evaluation Metrics:

Developers and auditors must move beyond traditional accuracy and efficiency metrics, which favour normative performance. New frameworks such as AccessEval must be integrated to systematically measure social harm, stereotype reproduction, and equitable functioning of AI across diverse and intersectional user groups. The objective of optimisation must shift from maximising speed to minimising exclusion.

4. Incentivising Asset-Based Participatory Design:

Public and private funding mechanisms ought to be structured to prioritise and financially reward technology development that adheres to genuinely participatory methods. By recognising disabled persons as experts whose unique knowledge accelerates innovation and identifies design failures early, development efforts can move away from unsolicited, deficit-based solutions and build truly inclusive technologies from the ground up.

5. Alignment with Constitutional Mandates:

India's AI governance framework must explicitly align with the Rights of Persons with Disabilities Act 2016, the United Nations Convention on the Rights of Persons with Disabilities, and the Rajive Raturi judgment. NITI Aayog's AI strategy documents must incorporate mandatory accessibility provisions rather than treating disability inclusion as a sectoral afterthought. As the Raturi Court emphasised, the State's duty to ensure accessibility is ex-ante and proactive, not contingent upon individual requests. AI policy must embed this principle from inception.

6. Cross-Cultural Competence in AI Systems:

Research demonstrates that AI models fail to recognise ableism consistently across cultural contexts, with Western models overestimating harm and Indic models underestimating it. Indian AI governance must mandate cultural competence testing for systems deployed in India, ensuring that models understand how ableism manifests within Indian social structures, including intersections with caste, gender, and class. Training datasets must include representation from Indian disabled communities, and evaluation frameworks must account for culturally specific manifestations of bias.

The conversation about AI in India cannot proceed as though disability is a niche concern or an optional consideration. With 2.74 crore Indians with disabilities, spanning diverse impairment categories across urban and rural contexts and across caste and class divides, the deployment of biased AI systems will entrench existing inequalities at unprecedented scale. The Raturi judgment has established the floor; AI policy must now build the ceiling. Accessibility here is not an afterthought; it is integral architecture. When disability leads, AI learns to listen.


References