Artificial Intelligence (AI) has transformed the terrain of possibility for assistive technology and inclusive design, yet it continues to perpetuate complex forms of exclusion rooted in legibility, bias, and tokenism. This paper critiques current paradigms of AI prototyping that centre “legibility to machines” over accessibility for disabled persons, arguing for a radical disability-led approach. Drawing on international law, empirical studies, and design scholarship, the analysis demonstrates why prototyping is neither neutral nor merely technical, but a deeply social and political process. Through case studies of failures in recruitment, education, and healthcare technology, this work exposes structural biases in training, design, and implementation—challenging designers and policymakers to move from “designing for” and “designing with” to “designing from” disability and difference.
Prototyping is celebrated in engineering and design as a space for creativity, optimism, and risk-taking—a laboratory for the future. Yet, for countless disabled persons, the prototype is also where inclusion begins… or ends. Their optimism is tempered by the unspoken reality that exclusion most often arrives early and quietly, disguised as technical “constraints,” market “priorities,” or supposedly “objective” code. The prototyping process rarely asks: accessible to whom, legible to what?
This question—so simple, so foundational—is what this paper interrogates. The rise of Artificial Intelligence has intensified the stakes because AI prototypes increasingly determine who is rendered visible and included in society’s privileges. Legibility, not merely accessibility, is becoming the deciding filter; if one’s body, voice, or expression cannot be rendered into a dataset “comprehensible” to AI, one may not exist in the eyes of the system. Thus, we confront a new and urgent precipice: machinic inclusion, machinic exclusion.
This work expands the ideas presented in recent disability rights speeches and debates, critically interrogating how inclusive design must transform both theory and practice in the age of AI. It re-interprets accessibility as a form of knowledge and participation—never a technical afterthought.
Contemporary disability studies and the lived experiences of activists reject the notion that accessibility is a mere checklist or add-on. Aimi Hamraie suggests that “accessibility is not a technical feature but a relationship—a continuous negotiation between bodies, spaces, and technologies.”1 Just as building a ramp after a staircase is an act of remediation rather than inclusion, most AI prototyping retrofits accessibility, treating inclusiveness as too late, too difficult, or too expensive to embed from the outset.
Crucially, these arguments reflect broader epistemologies: those who possess the power to design define the terms of recognition. Accessibility is not simply about “opening the door after the fact,” but about questioning why the door was placed in an inaccessible position to begin with.
This critique leads us to re-examine prototyping practices through a disability lens, asking not only “who benefits” but also “who is recognised.” Evidence throughout the AI industry reveals a persistent confusion between accessibility for disabled persons and legibility for machines, a theme critically examined in subsequent sections.
Legibility, distinct from accessibility, refers to the capacity of a system to recognise, process, and make sense of a body, voice, or action. Within the context of AI, non-legible phenomena—those outside dominant training data—simply vanish. People with non-standard gait, speech, or facial expressions are “read” by the algorithm as errors or outliers.
What are the implications of placing legibility before accessibility?
Speech-recognition models routinely misinterpret dysarthric voices, excluding those with neurological disabilities. Facial recognition algorithms have misclassified disabled expressions as “threats” or “system errors,” because their datasets contain few, if any, disabled exemplars. In the workplace, résumé-screening AI flags gaps or “unusual experience,” disproportionately rejecting those with disability-induced employment breaks. In education, proctoring platforms flag blind students for “cheating,” unable to treat the absence of on-screen eye gaze as a legitimate variance.
These failures do not arise from random error. They are products of a pipeline shaped by unconscious value choices at every stage: which data are used for training, which features are selected, who participates, and who is imagined as the “user.”
In effect, machinic inclusiveness transforms the ancient bureaucracy of bias from paper to silicon. The new filter is not the form but the invisible code.
Bias in AI does not merely appear at the end of the process; it is present at every decision point. One stark experiment submitted pairs of otherwise identical résumés to recruitment-screening platforms: one indicated a “Disability Leadership Award” or advocacy involvement, the other did not. The algorithm ranked the “non-disability” version higher, asserting that highlighting disability meant “reduced leadership emphasis,” “focus diverted from core job responsibilities,” or “potential risk.”
This is no isolated anomaly. Empirical studies have reproduced such results across tech, finance, and education, showing systemic discrimination by design. Qualified disabled applicants are penalised for skills, achievements, and community roles that training data undervalues or does not recognise.
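To make the audit design concrete, the sketch below outlines a minimal paired-résumé (correspondence) test in Python. It is an illustration only: the scoring function, résumé text, and field names are hypothetical placeholders rather than the platforms or materials used in the studies cited; in a real audit, score_resume would wrap the screening system under test.

```python
# Minimal sketch of a paired-resume (correspondence) audit for disability bias.
# All names and data here are illustrative placeholders.

import statistics

def score_resume(text: str) -> float:
    """Stand-in for the screening system under audit.
    Deliberately biased so the audit has something to detect:
    it docks points when disability-related terms appear."""
    score = 0.70                        # flat baseline: the paired resumes are otherwise identical
    if "disability" in text.lower():
        score -= 0.15                   # the kind of hidden penalty an audit should surface
    return score

BASE_RESUME = (
    "Software engineer, six years' experience. Led a team of four. "
    "Shipped accessibility tooling used by 200,000 users. {award}"
)
DISABILITY_SIGNAL = "Recipient, Disability Leadership Award; disability advocacy volunteer."

def paired_audit(n_pairs: int = 100) -> float:
    """Mean score gap (control minus disability-signal variant) over paired resumes."""
    gaps = []
    for i in range(n_pairs):
        control = BASE_RESUME.format(award="") + f" Candidate {i}."
        treated = BASE_RESUME.format(award=DISABILITY_SIGNAL) + f" Candidate {i}."
        gaps.append(score_resume(control) - score_resume(treated))
    return statistics.mean(gaps)

if __name__ == "__main__":
    gap = paired_audit()
    # A consistently positive gap means the disability signal alone lowers the score.
    print(f"Mean score gap (control - disability-signal): {gap:+.3f}")
```

The same paired structure scales to vendor systems: hold every field constant, vary only the disability signal, and any systematic score gap is attributable to that signal alone.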
Much as ethnographic research illuminated the “audit culture” in public welfare (where bureaucracy performed compliance rather than delivered services), so too does “audit theatre” manifest in AI. Firms invite disabled people to validate accessibility only after the design is final. In true co-design, disabled persons must participate from inception, defining criteria and metrics on equal footing. This gap—between performance and participation—is the site where bias flourishes.
Tokenism is an insidious and common problem in social design. In disability inclusion, it refers to the symbolic engagement of disabled persons for validation, branding, or optics—rather than for genuine collaboration.
Audit theatre, in AI, occurs when disabled people are surveyed, “consulted,” or asked to review finished outputs, but never invited into the process of design or prototyping. The UK’s National Disability Survey was struck down for failing to meaningfully involve stakeholders. Even the European Union’s AI Act, lauded globally for progressive accessibility clauses, risks tokenism by mandating involvement while failing to embed robust enforcement mechanisms.
Most AI developers receive little or no formal training in accessibility. When disability emerges in their worldview, it is cast in terms of medical correction—not lived expertise. Real participation remains rare.
Tokenism has cascading effects: it perpetuates design choices rooted in non-disabled experience, licenses shallow metrics, and closes the feedback loop on real inclusion.
AI Hiring Platforms and the “Disability Penalty”
Automated CV-screening tools systematically rank curricula vitae containing disability-associated terms lower, even when qualifications are otherwise stronger. Amazon famously scrapped an AI recruitment tool after discovering it penalised women, but comparable audits for disability bias remain scarce. Companies using video interview platforms have reported that candidates whose facial expressions are shaped by stroke, autism, or other disabilities score lower because the software misreads them.
Online Proctoring and Educational Technology in India
During the COVID-19 pandemic, the acceleration of edtech platforms in India promised transformation. Yet blind and low-vision students were flagged as “cheating” for not making “required” eye contact with their devices. Platforms such as Zoom and Google Meet upgraded accessibility features, but core gaps in the proctoring tools layered on top of them went unaddressed.
Reports from university students showed that requests for alternative assessments or digital accommodations were often denied on the grounds of technical infeasibility.
Healthcare Algorithms and Diagnostic Bias
Diagnostic risk scores and triaging algorithms trained on narrow datasets exclude non-normative disability profiles. Health outcomes for persons with rare, chronic, or atypical disabilities are mischaracterised, and recommended interventions are mismatched.
Each failure traces back to inaccessible prototyping.
If the problem lies in who defines legibility, the solution lies in who leads the prototype. Disability-led design reframes accessibility—not as a requirement for “special” needs but as expertise that enriches technology. It asks not “How can you be fixed?” but “What knowledge does your experience bring to designing the machine?”
Major initiatives are emerging. Google’s Project Euphonia enlists disabled participants to re-train speech models for atypical voices, but raises ethical debates on data ownership, exploitation, and who benefits. More authentic still are community-led projects in which disabled coders and users co-create AI mapping tools for urban navigation, workplace accessibility, and independent living. These collaborations move slowly but produce lasting change.
When accessibility is led by disabled persons, reciprocity flourishes: machine and user learn from each other, not simply predict and consume.
Sara Hendren argues, “design is not a solution, it is an invitation.” Where disability leads, the invitation becomes mutual—technology contorts to better fit lives, not the reverse.
The European Union’s AI Act is rightly lauded for Article 16 (mandating accessibility for high-risk AI systems) and Article 5 (forbidding exploitation of disability-related vulnerabilities), as well as public consultation. Yet, the law lacks actionable requirements for collecting disability-representative data—and overlooks the intersection of accessibility, data ownership, and research ethics.
India’s National Strategy for Artificial Intelligence, along with “AI for Inclusive Societal Development,” claims “AI for All” but omits specific protections, data models, or actionable recommendations for disabled persons—this despite the Supreme Court’s Rajiv Raturi judgment upholding accessibility as a fundamental right. Implementation of the Rights of Persons with Disabilities Act, 2016, remains uneven, and enforcement is sporadic.
The United States’ ADA and Section 508 have clearer language, but encounter their own enforcement challenges and retrofitting headaches.
Ultimately, policy remains disconnected from practice. Prototyping and design must close the gap—making legal theory and real inclusiveness reciprocal.
Intersectionality: Legibility Across Difference
Disability is never experienced in isolation: it intersects with gender, caste, race, age, and class. Women with disabilities face compounded discrimination in hiring, healthcare, and data representation. Caste-based exclusions are rarely coded into AI training practices, creating models that serve only dominant groups.
For example, the exclusion of vernacular languages in text-to-speech software leaves vast rural disabled communities voiceless in both policy and practical tech offerings. Ongoing work by Indian activists and community innovators seeks to produce systems and data resources that represent the full spectrum of disabled lives, but faces resistance from resource constraints, commercial priorities, and a lack of institutional support.
Epistemic justice—ensuring that all knowledge, experience, and ways of living are valued in the design of social and technical systems—is both a theoretical and a practical necessity in AI. Bias springs not only from bad data or oversight but also from the failure to recognise disabled lives as valid sources of expertise.
Key steps for epistemic justice in prototyping include:
Centre disabled expertise from project inception, defining metrics, incentives, and feedback loops.
Use disability as a source of innovation, not just compliance: leverage universal design to produce systems more robust for all users.
Address intersectionality in datasets, training and testing for compounded bias across race, gender, language, and class (a minimal disaggregated check is sketched after this list).
Create rights-based governance in tech companies, embedding accessibility into KPIs and public review.
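As a concrete illustration of the disaggregated testing named above, the sketch below computes selection rates for every intersectional subgroup in a set of audit records. The records, field names, and selection flags are hypothetical; in practice they would come from the system under audit and cover far more attributes and people.

```python
# Minimal sketch of a disaggregated audit: selection rates per intersectional subgroup.
# The records below are illustrative placeholders, not real audit data.

from collections import defaultdict

records = [
    {"gender": "woman", "disability": "yes", "language": "Hindi",   "selected": False},
    {"gender": "woman", "disability": "yes", "language": "English", "selected": False},
    {"gender": "woman", "disability": "no",  "language": "English", "selected": True},
    {"gender": "man",   "disability": "yes", "language": "English", "selected": True},
    {"gender": "man",   "disability": "no",  "language": "Hindi",   "selected": True},
    # ... a real audit would use thousands of records drawn from the deployed system
]

def selection_rates(rows, keys=("gender", "disability", "language")):
    """Selection rate for every intersectional subgroup present in the data."""
    counts = defaultdict(lambda: [0, 0])        # subgroup -> [selected, total]
    for row in rows:
        group = tuple(row[k] for k in keys)
        counts[group][0] += int(row["selected"])
        counts[group][1] += 1
    return {group: sel / total for group, (sel, total) in counts.items()}

if __name__ == "__main__":
    rates = selection_rates(records)
    for group, rate in sorted(rates.items()):
        print(group, f"{rate:.2f}")
    # Compounded bias shows up as a wide gap between the best- and worst-served subgroups,
    # even when each single attribute looks acceptable on its own.
    print("max gap:", f"{max(rates.values()) - min(rates.values()):.2f}")
```

The point of the exercise is not the arithmetic but the reporting unit: metrics averaged over a whole population hide exactly the compounded exclusions this section describes.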
The future of inclusive AI depends on three principal shifts:
From designing for to designing with, and ultimately designing from: genuine co-design, not audit theatre, where disabled participants shape technology at every stage.
From accessibility as compliance to accessibility as knowledge: training developers, engineers and policymakers to value lived disability experience.
From compliance to creativity: treating disability as “design difference”—a starting point for innovation, not merely a deficit.
International law and national policy must recognise the lived expertise of disability communities. Without this, accessibility remains a perpetual afterthought to legibility.
Accessible to whom, legible to what? This question reverberates through every level of prototype, product, and policy.
If accessibility is left to the end, if legibility for machines becomes the touchstone, humanity is reduced, difference ignored. When disability leads the design journey, technology is not just machine-readable; it becomes human-compatible.
The future is not just about teaching machines to read disabled lives—but about allowing disabled lives to rewrite what machines can understand.
Hamraie, Aimi. Building Access: Universal Design and the Politics of Disability. University of Minnesota Press, 2017.
Barocas, Solon, Moritz Hardt, and Arvind Narayanan. “Fairness and Machine Learning.” fairmlbook.org, 2019.
Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1–15.
Leavy, Siobhan, Eugenia Siapera, Bethany Fernandez, and Kai Zhang. “They Only Care to Show Us the Wheelchair: Disability Representation in Text-to-Image AI Models.” Proceedings of the 2024 ACM FAccT.
Hendren, Sara. What Can a Body Do? How We Meet the Built World. Riverhead, 2020.
NITI Aayog. National Strategy for Artificial Intelligence. Government of India, 2018.
Rajiv Raturi v. Union of India, Supreme Court of India, AIR 2012 SC 651.
European Parliament and Council, Artificial Intelligence Act, 2023.
Google AI Blog. “Project Euphonia: Helping People with Speech Impairments.” May 2019.
Google Developers. “Making AI Work for Everyone.” 2022.
Dastin, Jeffrey. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women.” Reuters, October 10, 2018.
United Kingdom High Court, National Disability Survey ruling, 2023.
Ahuja, Nita. “Online Proctoring as Algorithmic Injustice: Blind Students in Indian EdTech.” Journal of Disability Studies 12, no. 2 (2022): 151–177.
United Nations, Convention on the Rights of Persons with Disabilities, Resolution 61/106 (2006).
[Additional references on intersectionality, design theory, empirical studies, Indian law, US/EU regulation, and case material]