
Tuesday, 17 February 2026

TechnoAbleism in India’s AI Moment: Why Accessibility Is Not Enough

[Image: an abstract illustration of people with disabilities interacting with digital systems, surrounded by AI symbols, datasets, and decision interfaces.]
When artificial intelligence is built on narrow assumptions of the “normal” user, accessibility features alone cannot prevent exclusion embedded within the algorithm itself.

India’s present moment in artificial intelligence is often described in terms of innovation, opportunity, and national technological leadership. The India AI Impact Summit brings global attention to how artificial intelligence is shaping governance, development, and social transformation. 

Within these discussions, disability is increasingly visible through conversations on accessibility, assistive technologies, and digital inclusion. This attention is important. For many years, disability was largely absent from technology policy debates. Yet, a deeper issue remains insufficiently examined: accessibility alone does not ensure inclusion when artificial intelligence systems themselves are shaped by structural bias.

Accessibility and bias are frequently treated as interchangeable ideas. They are not the same. Accessibility determines whether a person with disability can use a system. Bias determines whether the system was designed with that person in mind at all. When systems are built around assumptions about a so-called normal user, accessible interfaces merely allow disabled persons to enter environments that continue to exclude them through their internal logic. The interface may be open; the opportunity may still be closed.

This structural problem becomes visible in the rapidly expanding practice often called ‘vibe coding’, where developers use generative AI tools to create websites and software through simple prompts. When an AI coding assistant is asked to generate a webpage, the default output usually prioritises visual layouts, mouse-dependent navigation, and animation-heavy design. Accessibility features such as semantic structure, keyboard navigation, or screen-reader compatibility rarely appear unless they are explicitly demanded. The system has learned that the ‘default’ user is non-disabled because that assumption dominates the data from which it learned. As these outputs are reproduced across applications and services, exclusion becomes quietly automated.
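The gap is easy to observe mechanically. Below is a minimal sketch, using only Python's standard library, of the kind of audit that surfaces these missing defaults; the sample markup and the handful of checks are purely illustrative, not a formal accessibility compliance test.

```python
from html.parser import HTMLParser

# Illustrative only: a few coarse signals, not a full accessibility audit.
SEMANTIC_TAGS = {"main", "nav", "header", "footer", "h1", "label", "button"}

class AccessibilitySignals(HTMLParser):
    """Collects a few coarse accessibility signals from generated markup."""
    def __init__(self):
        super().__init__()
        self.semantic_tags_seen = set()
        self.images_without_alt = 0   # missing or empty alt text
        self.clickable_divs = 0       # divs with onclick: invisible to keyboards and screen readers

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in SEMANTIC_TAGS:
            self.semantic_tags_seen.add(tag)
        if tag == "img" and not attrs.get("alt"):
            self.images_without_alt += 1
        if tag == "div" and "onclick" in attrs:
            self.clickable_divs += 1

# Hypothetical AI-generated markup: visually complete, structurally empty.
generated = """
<div class="hero"><div onclick="openMenu()">Menu</div>
<img src="banner.png"><div class="btn" onclick="submit()">Send</div></div>
"""

audit = AccessibilitySignals()
audit.feed(generated)
print("semantic landmarks:", audit.semantic_tags_seen or "none")
print("images without alt text:", audit.images_without_alt)
print("click handlers on non-focusable divs:", audit.clickable_divs)
```

Run on the sample above, the audit finds no semantic landmarks at all: exactly the pattern a default prompt tends to produce unless accessibility is explicitly demanded.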

Bias also appears in the decision-making systems that increasingly shape employment, education, financial access and public services. Hiring systems that analyse speech, expression, or behavioural patterns may interpret disability-related communication styles as indicators of low confidence or low performance. Speech recognition tools often struggle with atypical speech patterns. Vision systems may fail to recognise assistive devices correctly. These outcomes are not isolated technical errors. They arise because disability is often missing from training datasets, testing environments and design teams. When disability is absent from the design stage, the system internalises non-disabled behaviour as the baseline expectation.

Another less visible dimension of bias emerges from the way artificial intelligence systems classify behaviour. Many systems are trained to recognise patterns associated with what developers consider efficient, confident or normal interaction. When human diversity falls outside those patterns, the system may interpret difference as error. Research in AI ethics repeatedly shows that classification models tend to perform poorly when training datasets do not adequately represent disabled users, leading to systematic misinterpretation of speech, movement or communication styles. 

These classification failures are rarely dramatic; they appear as small inaccuracies that accumulate over time. A speech interface that repeatedly fails to understand a user, an automated assessment tool that consistently undervalues atypical communication, or a recognition system that misidentifies assistive devices can gradually shape unequal access to opportunities. As these outcomes arise from technical assumptions rather than explicit discrimination, they often remain invisible in public debates, even as their effects are widely experienced.
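The arithmetic of accumulation is worth making explicit. A toy calculation with assumed failure rates (the numbers are hypothetical) shows how a modest per-interaction gap becomes near-certain exclusion over repeated encounters:

```python
# Hypothetical per-interaction failure rates, for illustration only.
p_fail_typical = 0.05   # non-disabled users
p_fail_atypical = 0.20  # users with, e.g., atypical speech

def at_least_one_failure(p_fail: float, interactions: int) -> float:
    """Probability of being failed by the system at least once."""
    return 1 - (1 - p_fail) ** interactions

for n in (1, 5, 20):
    print(f"{n:>2} interactions: "
          f"typical {at_least_one_failure(p_fail_typical, n):.0%} vs "
          f"atypical {at_least_one_failure(p_fail_atypical, n):.0%}")
# At 20 interactions: typical 64% vs atypical 99%. Small gaps compound.
```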

These patterns together reflect what disability scholars describe as techno-ableism: the tendency of technological systems to appear empowering while quietly reinforcing assumptions that favour non-disabled ways of functioning. Technologies may expand participation on the surface, yet the intelligence embedded within them continues to treat disability as deviation rather than diversity. A person with disability may be able to access the interface, log into the system or navigate the platform, yet still face exclusion through hiring algorithms, recognition systems, or automated decision tools that were never designed around diverse bodies and minds. The experience is not exclusion from technology, but exclusion within technology itself.

Public discussions frequently present disability mainly through assistive innovation: tools that help blind users read text, applications that assist persons with mobility impairments or systems designed for specific accessibility functions. These innovations are valuable and necessary. However, when disability appears only in assistive contexts, it is positioned as a specialised technological niche rather than a structural dimension of all artificial intelligence systems. The mainstream design pipeline continues to assume the non-disabled user as the default, while disability inclusion becomes an add-on layer introduced later.

India currently stands at a formative stage in shaping its artificial intelligence ecosystem. As public digital infrastructure, governance platforms and automated service systems expand, the assumptions embedded in present design choices will influence social participation for decades. If accessibility becomes the only measure of inclusion, structural bias risks becoming embedded within the foundations of emerging technological systems. Inclusion then becomes symbolic rather than substantive: systems appear inclusive because they are accessible, yet continue to produce unequal outcomes.

From the standpoint of persons with disabilities, this distinction is deeply personal. Accessibility determines whether we can interact with the system. Bias determines whether the system recognises us as equal participants once we enter. Accessible platforms built upon biased intelligence do not remove barriers; they simply move the barrier from the interface to the algorithm.

As a disability rights practitioner working at the intersection of law, accessibility, and technology, I view the present expansion of AI discussions with cautious attention. Disability is finally visible in national technology conversations, yet the focus remains concentrated on accessibility demonstrations rather than the deeper question of structural bias. Artificial intelligence will increasingly shape employment, governance, education and everyday social participation. Whether these systems expand equality or quietly reproduce exclusion will depend not only on whether they are accessible, but also on whose experiences shape the data, assumptions, and decision rules within them.

Accessibility opens the door; fairness determines what happens after entry. Without confronting bias directly, technological progress risks creating a future that is digitally reachable yet socially unequal for many persons with disabilities. Many of the issues discussed here, including the structural relationship between accessibility and algorithmic bias, are explored in greater detail at The Bias Pipeline (https://thebiaspipeline.nileshsingit.org), where readers may engage with further analysis.

References

  • India AI Impact Summit official information portal, Government of India.
  • Coverage of summit accessibility and inclusion themes, Business Standard and related reporting.
  • United Nations and global policy discussions on AI and disability inclusion.
  • Nilesh Singit, The Bias Pipeline https://thebiaspipeline.nileshsingit.org/

(Nilesh Singit is a disability rights practitioner and accessibility strategist working at the intersection of law, governance, and AI inclusion. A Distinguished Research Fellow at the Centre for Disability Studies, NALSAR University of Law, he writes on accessibility, techno-ableism, and algorithmic bias at www.nileshsingit.org)



Moneylife.in
Published 17th February 2026


Saturday, 14 February 2026

The Inclusivity Stack: Operationalising Disability Justice in India’s Sovereign AI Architecture

[Image: “Inclusivity Stack: Operationalising Equity, Accessibility & Inclusion”, a layered pyramid representing organisational inclusion. From bottom to top, the layers read “Physical Accessibility,” “Tools & Technology,” “Policies & Processes,” and “Culture & Awareness,” with diverse disabled and non-disabled people standing on the top layer, symbolising inclusive organisational culture supported by foundational accessibility systems.]
The Inclusivity Stack

Abstract

The Government of India’s strategic pivot towards "Sovereign Artificial Intelligence," crystallised in the ₹10,371 crore IndiaAI Mission, represents a watershed moment in the nation’s digital governance trajectory. As the state moves to integrate Artificial Intelligence (AI) into the foundational layer of Digital Public Infrastructure (DPI)—spanning healthcare, agriculture, and urban governance—it faces a critical architectural choice: to replicate the exclusionary patterns of the "medical model" of disability or to operationalise a "social model" that views accessibility as a non-negotiable constitutional guarantee. This report proposes the "Inclusivity Stack," a comprehensive governance and technical framework designed to embed disability justice into the IndiaAI ecosystem. Drawing extensively on the Supreme Court’s landmark judgment in Rajive Raturi v. Union of India (2024), the Rights of Persons with Disabilities (RPWD) Act, 2016, and global best practices such as the EU AI Act and Canada’s CAN-ASC-6.2 standard, this document outlines a roadmap for "fixing" the digital environment rather than the individual. It argues that the inclusion of India’s 26.8 million persons with disabilities is not merely a moral imperative but a prerequisite for the mathematical robustness, legal validity, and economic viability of India’s sovereign AI ambitions.

1. Introduction: The Sovereign AI Moment and the Risk of Digital Apartheid

1.1 The Genesis of the IndiaAI Mission

In March 2024, the Union Cabinet approved the IndiaAI Mission with a substantial budgetary outlay of ₹10,371.92 crore, signaling India’s intent to move from being a consumer of Western AI models to a creator of indigenous, sovereign AI capabilities.1 This mission is structurally organised around seven distinct pillars, designed to democratise access to computing power and data:

  1. IndiaAI Compute Pillar: The deployment of over 38,000 Graphics Processing Units (GPUs) to provide affordable computational infrastructure to startups and researchers.2
  2. IndiaAI Application Development Initiative: Targeting critical sectors such as healthcare, agriculture, and governance.2
  3. AIKosh (Dataset Platform): A unified repository for high-quality, non-personal datasets to train indigenous models.3
  4. IndiaAI Foundation Models (BharatGen): The development of "BharatGen," a sovereign Large Multimodal Model (LMM) trained on diverse Indic languages and datasets.4
  5. IndiaAI FutureSkills: Aimed at expanding the AI talent pool through academic and vocational training.2
  6. IndiaAI Startup Financing: Venture capital support for deep-tech AI startups.6
  7. Safe & Trusted AI: A framework for responsible AI governance, including the establishment of the IndiaAI Safety Institute (AISI).7

While the mission’s scale is ambitious, aiming to catalyse a $1.7 trillion contribution to the Indian economy by 2035 [2], its current architectural blueprint lacks explicit mechanisms to address the "digital apartheid" faced by Persons with Disabilities (PwDs). In a nation where internet access is already stratified by caste, class, and geography, the uncritical deployment of AI threatens to deepen these divides.

1.2 The "Data Void" and Algorithmic Exclusion

The exclusion of PwDs from the digital ecosystem is not accidental but systemic, often described as a "data void." Contemporary AI systems are predominantly trained on data that reflects the "normative" able-bodied user.

  • Speech Recognition: Models trained on standard datasets often fail to recognise dysarthric speech (common in conditions like cerebral palsy) or the vocal patterns of the deaf community.8
  • Computer Vision: Facial recognition systems, such as those used in the DigiYatra biometric boarding initiative, are frequently trained on datasets that lack representation of individuals with facial differences, Down syndrome, or palsy, leading to higher failure rates for these groups.9
  • Natural Language Processing (NLP): Large Language Models (LLMs) often hallucinate "cures" or offer patronizing advice when users disclose a disability, reflecting the biases inherent in their training corpora.11

If the IndiaAI Mission proceeds without rectifying these voids, the "Sovereign AI" infrastructure will effectively become a "Sovereign Exclusion Mechanism," automating the denial of services to the most vulnerable citizens.

1.3 The Economic and Constitutional Imperative

The argument for inclusion is not solely humanitarian; it is economic and constitutional.

  • Economic Cost: Excluding PwDs from the digital economy limits the potential GDP growth that the IndiaAI Mission seeks to unlock. Accessible technology enables workforce participation for millions who are currently marginalized.13
  • Constitutional Mandate: The Supreme Court of India, in Rajive Raturi v. Union of India (2024), explicitly held that accessibility is a facet of the Fundamental Right to Life (Article 21) and Equality (Article 14).14 The Court mandated that the "State has an obligation to ensure that all steps... are taken" to ensure accessibility in "information, technology and entertainment".16

This report articulates the "Inclusivity Stack"—a layered framework to operationalise these legal and ethical mandates within the technical architecture of the IndiaAI Mission.

2. Theoretical Framework: De-Medicalising Artificial Intelligence

To build an inclusive AI architecture, policy-makers must first interrogate and dismantle the theoretical models of disability that currently inform—often subconsciously—the development of AI systems.

2.1 The Medical Model vs. The Social Model in Code

The development of AI has historically been rooted in the Medical Model of Disability. This model views disability as a "deficit," "pathology," or "aberration" residing within the individual that requires diagnosis, treatment, or cure.17

  • In AI Development: This manifests in data annotation practices where non-normative behaviors (e.g., lack of eye contact in autism, stuttering in speech) are labeled as "errors," "noise," or "negative samples" to be filtered out.11
  • The Consequence: An AI system trained on this model views a disabled user as a "broken" user. A proctoring algorithm flags a neurodivergent student’s movements as "suspicious" [20]; a hiring algorithm ranks a candidate with a disability lower because their resume signals a "deviation" from the norm.12

In contrast, the Social Model of Disability, which underpins the UN Convention on the Rights of Persons with Disabilities (UNCRPD), posits that disability is constructed by societal barriers—physical, attitudinal, and digital—that prevent full participation.21

  • In AI Development: Operationalising the Social Model requires shifting the focus from "fixing the user" to "fixing the system." It demands that AI interfaces be designed to accommodate diverse modes of interaction (e.g., supporting screen readers, switch devices, or sign language) as native features, not afterthoughts.19

2.2 Confronting "Technoableism"

The philosopher of technology Ashley Shew defines "Technoableism" as the pervasive belief that technology is the "solution" to disability, often characterizing disabled people as "problems" awaiting a technological "fix".23

  • The Trap of "Inspiration Porn": Technoableism often manifests in high-profile projects—such as AI-powered exoskeletons or brain-computer interfaces—that garner media attention ("Inspiration Porn") while basic digital infrastructure remains inaccessible.24
  • Policy Implication: For the IndiaAI Mission, avoiding technoableism means prioritizing boring but essential infrastructure (e.g., ensuring the CAPTCHA on the PM-Kisan portal is accessible to the blind) over flashy, high-tech "cures" that benefit a few. It means recognizing that disabled people are experts in their own lives and must lead the design process ("Nothing Without Us").23

3. The Legal Layer: From Guidelines to Non-Negotiable Standards

The foundation of the Inclusivity Stack is a robust legal framework that elevates accessibility from a voluntary "best practice" to a mandatory compliance requirement. The legal landscape in India has shifted dramatically in this regard following recent judicial interventions.

3.1 The Rajive Raturi Paradigm Shift (2024)

On November 8, 2024, the Supreme Court of India delivered a landmark judgment in Rajive Raturi v. Union of India.14 The case, originating from a PIL filed in 2005 by visually impaired activist Rajive Raturi, addressed the systemic failure of the state to implement accessibility mandates.

Key Judicial Findings:

  1. Mandatory Rules: The Court accepted the argument presented by the NALSAR Centre for Disability Studies (CDS) that Rule 15 of the RPWD Rules, 2017, which prescribed accessibility standards, had historically been treated as directory (voluntary). The Court ruled that Rule 15, read with Sections 40, 44, and 45 of the RPWD Act, creates a mandatory compliance framework.15
  2. Ultra Vires: The NALSAR report Finding Sizes for All argued that any interpretation of the rules that allows for "self-regulation" or "guidelines" is ultra vires (beyond the powers of) the parent Act, which mandates full accessibility.26
  3. Digital Inclusion: While the case focused on physical access, the judgment explicitly stated that "accessibility to information, technology and entertainment is equally important".16 This extends the mandate to all digital platforms, AI interfaces, and electronic services provided by the state.

Implication for IndiaAI: Any AI system deployed by the government (e.g., BharatGen, DigiYatra) that fails to meet accessibility standards is now illegal and actionable under the RPWD Act.27

3.2 IS 17802: The Constitutional Standard for Code

The technical benchmark for this legal mandate is IS 17802: Accessibility for ICT Products and Services, notified by the Bureau of Indian Standards (BIS) in 2021/2022.28

  • Part 1 (Requirements): Aligned with the global standard EN 301 549 and WCAG 2.1, this section specifies functional performance statements (e.g., "usage without vision," "usage with limited manipulation").29
  • Part 2 (Conformance): Defines the testing methodologies to verify compliance.29
  • Enforceability: Following the RPWD Amendment Rules 2023, IS 17802 is the statutory standard.30 This means that procurement of AI systems via the Government e-Marketplace (GeM) must strictly adhere to these standards.

3.3 Comparative Jurisprudence: The EU and Canada

India’s legal framework can be further strengthened by examining global best practices:

  • Canada (CAN-ASC-6.2:2025): Canada has released the world’s first standard specifically for "Accessible and Equitable Artificial Intelligence Systems".31 It mandates that persons with disabilities be involved in the entire AI lifecycle—from data collection to model training—and introduces the concept of "Equitable AI" to prevent algorithmic discrimination.25
  • European Union (EU AI Act): The EU AI Act (Article 5 & Recital 80) categorises AI systems that exploit vulnerabilities of persons with disabilities as "Unacceptable Risk" (prohibited). High-risk systems (e.g., education, employment) must demonstrate compliance with accessibility requirements by design.33

Recommendation: The IndiaAI Mission should adopt a framework analogous to CAN-ASC-6.2, mandating "lifecycle inclusion" for all projects funded under the Safe & Trusted AI pillar.

4. The Data Layer: Constructing the Disability Data Commons

Artificial Intelligence is, at its core, an engine of pattern recognition. If the "pattern" of disability is absent from the training data, the AI will inevitably treat disability as an anomaly. The AIKosh pillar of the IndiaAI Mission [2] must address this "data void" to ensure sovereign AI is truly inclusive.

4.1 The Representation Gap in Indic Datasets

Current datasets for Indian languages (e.g., those used to train BharatGen) suffer from a dual exclusion:

  1. General Data Poverty: While initiatives like Bhashini are addressing the lack of Indic language data, there is a severe scarcity of data representing disabled speakers of these languages.8
  2. Specific Modality Gaps:
  • Dysarthric Speech: There are few, if any, large-scale datasets of dysarthric or atypical speech in languages like Hindi, Tamil, or Bengali. This renders voice-activated UPI payments or government helplines inaccessible to millions with motor or speech impairments.35
  • Indian Sign Language (ISL): Despite being recognised in the National Education Policy 2020, ISL lacks the comprehensive, annotated video-to-text corpus required to build robust translation models.36

4.2 The "Outlier Advantage": Robustness via Inclusion

A compelling technical argument for inclusion is the concept of the "Outlier Advantage." Machine Learning (ML) research indicates that training models on "edge cases" or diverse outliers improves the mathematical robustness and generalisation capabilities of the model for all users.37

  • Curriculum Learning: By including "difficult" samples—such as stuttered speech or heavily accented voice commands—during training, the model learns to identify the phonetic core of language rather than over-fitting to superficial acoustic features (see the sketch after this list).39
  • Universal Benefit: A speech model trained on dysarthric speech tends to perform better in noisy environments (e.g., a railway station) even for non-disabled users. Thus, investing in disability data is an investment in the overall quality of India’s sovereign AI.40
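A minimal sketch of the curriculum idea, assuming each sample carries a difficulty score from some upstream annotation (here synthetic): training starts on canonical samples and widens the pool to include outliers each epoch.

```python
import random

# Hypothetical corpus: (utterance_id, difficulty), where higher difficulty
# marks outliers such as dysarthric or heavily accented speech. Scores are
# assumed to come from upstream annotation; here they are synthetic.
corpus = [(f"utt_{i}", random.random()) for i in range(1000)]

def curriculum_batches(corpus, epochs=5, batch_size=32):
    """Yield batches whose admissible difficulty range widens each epoch,
    so the model sees canonical speech first and outliers progressively."""
    ranked = sorted(corpus, key=lambda sample: sample[1])
    for epoch in range(1, epochs + 1):
        cutoff = int(len(ranked) * epoch / epochs)  # widen the pool each epoch
        pool = ranked[:cutoff]
        random.shuffle(pool)
        for i in range(0, len(pool), batch_size):
            yield epoch, pool[i:i + batch_size]

for epoch, batch in curriculum_batches(corpus):
    pass  # a train_step(batch) call would go here
```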

4.3 Governance: Data Empowerment and Protection Architecture (DEPA)

To collect this sensitive data without exploitation, India must leverage its Data Empowerment and Protection Architecture (DEPA).41

  • Disability Data Trusts: We propose the creation of "Disability Data Commons"—fiduciary structures where the disability community pools their data (e.g., voice samples, gait patterns).
  • Consent Managers: Using DEPA’s electronic consent artifact, PwDs can grant temporary, purpose-limited access to their data for training "public good" models (like BharatGen) while retaining ownership.43 This shifts the dynamic from "data extraction" to "data empowerment." A sketch of such an artifact follows.
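The shape of such a purpose-limited, time-bound grant can be sketched as data. The field names below are illustrative and do not reproduce the actual DEPA electronic-consent schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative fields only; not the real DEPA consent-artifact specification.
@dataclass
class ConsentArtifact:
    data_principal: str          # the person with disability who owns the data
    data_trust: str              # fiduciary holding the pooled data
    purpose: str                 # purpose limitation, e.g. training a public-good model
    data_types: list = field(default_factory=list)
    expires_at: datetime = field(
        default_factory=lambda: datetime.now() + timedelta(days=90))
    revocable: bool = True       # the principal can withdraw at any time

    def is_valid(self, requested_purpose: str) -> bool:
        """Access is granted only for the stated purpose, before expiry."""
        return requested_purpose == self.purpose and datetime.now() < self.expires_at

consent = ConsentArtifact(
    data_principal="user-481",
    data_trust="Disability Data Commons",
    purpose="train-public-good-speech-model",
    data_types=["voice_samples"],
)
print(consent.is_valid("train-public-good-speech-model"))  # True: within grant
print(consent.is_valid("targeted-advertising"))            # False: outside grant
```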

5. The Model Layer: Indigenous Intelligence and Red Teaming

The IndiaAI Compute Pillar and BharatGen initiative provide the computational muscle to build indigenous foundational models.4 This sovereign control offers a unique opportunity to "bake in" inclusion at the model layer, rather than retrofitting it later.

5.1 BharatGen and the Constitutional AI Paradigm

BharatGen, India’s proposed sovereign Large Multimodal Model, is currently being trained on datasets spanning 22 Indian languages.5 To avoid the pitfalls of Western models, BharatGen must adopt a Constitutional AI approach.

  • Constitution as the Objective Function: The model’s reward function (in Reinforcement Learning from Human Feedback, RLHF) should be aligned with the constitutional values of Article 14 (Equality) and Article 21 (Dignity).
  • Anti-Ableist Fine-Tuning: The model must be penalised for generating "inspiration porn," "medical model" diagnoses for social queries, or ableist stereotypes. It should be rewarded for providing accessible, empowering, and rights-based responses (a toy sketch follows).12
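A toy illustration of such reward shaping follows. In practice the signals would come from trained reward models and disabled annotators; the keyword lists here merely stand in for that machinery.

```python
# Toy reward shaping for illustration only; real anti-ableist RLHF would use
# trained reward models and human annotation, not keyword matching.
ABLEIST_PATTERNS = ["suffers from", "wheelchair-bound", "inspiring despite"]
MEDICALISING_PATTERNS = ["cure", "fix the person", "overcome their condition"]
RIGHTS_BASED_PATTERNS = ["reasonable accommodation", "accessibility", "equal participation"]

def anti_ableist_reward(response: str) -> float:
    """Penalise ableist and medical-model language; reward rights-based framing."""
    text = response.lower()
    score = 0.0
    score -= sum(p in text for p in ABLEIST_PATTERNS) * 1.0
    score -= sum(p in text for p in MEDICALISING_PATTERNS) * 0.5
    score += sum(p in text for p in RIGHTS_BASED_PATTERNS) * 0.5
    return score

print(anti_ableist_reward("She is inspiring despite being wheelchair-bound."))  # negative
print(anti_ableist_reward("Employers must provide reasonable accommodation."))  # positive
```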

5.2 Accessibility Red Teaming

The Safe & Trusted AI pillar [7] must institutionalize Accessibility Red Teaming—a structured adversarial testing process focused on disability bias.45

  • Methodology: Unlike security red teaming (which tests for hacks), accessibility red teaming tests for Allocative Harms (denial of resources) and Quality of Service Harms (degraded performance); a minimal logging harness is sketched after this list.46
  • The Red Team: This requires recruiting "white-hat" testers with disabilities—blind screen-reader users, autistic testers, deaf signers—to identify failure modes that able-bodied developers cannot perceive.47
  • NIST Alignment: The IndiaAI Safety Institute (AISI) should align its red teaming protocols with the NIST AI Risk Management Framework (RMF), which explicitly identifies "bias and discrimination" as top-tier risks.48
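Such a session can be logged with a very small harness. In the sketch below, `model` and `judge` are assumed callables: the judge stands for a disabled tester, or a rubric derived from their review; none of these names come from an existing framework.

```python
from dataclasses import dataclass

@dataclass
class RedTeamFinding:
    prompt: str
    response: str
    harm_type: str   # "allocative" (resource denial) or "quality_of_service"
    severity: int    # 1 (minor) to 5 (blocking)
    reporter: str    # the disabled tester who surfaced the failure

def run_session(model, prompts, judge):
    """Run a disability-focused adversarial prompt set through `model` and
    record every harm the `judge` identifies. Both are assumed callables."""
    findings = []
    for prompt in prompts:
        response = model(prompt)
        verdict = judge(prompt, response)  # None, or a dict describing the harm
        if verdict is not None:
            findings.append(RedTeamFinding(
                prompt=prompt,
                response=response,
                harm_type=verdict["harm_type"],
                severity=verdict["severity"],
                reporter=verdict["reporter"],
            ))
    return findings

# e.g. findings = run_session(my_model, prompt_set, screen_reader_tester_rubric)
```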

5.3 Case Study: The Bhashini Gap

Bhashini, the National Language Translation Mission, is a flagship success, offering text-to-text translation in 22 languages.36 However, it currently treats Indian Sign Language (ISL) as an outlier.

  • The "23rd Language": ISL is a distinct natural language with its own grammar (Subject-Object-Verb), distinct from spoken Hindi or English.
  • The Inclusivity Stack Requirement: The Bhashini mandate must be expanded to treat ISL as the "23rd language." This requires funding for specific transformer architectures capable of processing 3D spatial grammar (video-to-text and text-to-avatar), moving beyond simple gesture recognition.36

6. The Governance Layer: Operationalising Justice

Technology is deployed within a bureaucratic structure. The "Governance Layer" ensures that the technical capabilities of the Inclusivity Stack are enforced through administrative and financial levers.

6.1 Public Procurement as a Policy Lever (GeM)

The Government of India is the largest purchaser of technology in the country. The Government e-Marketplace (GeM) is the primary funnel for this procurement.51

  • Mandatory Accessibility Check: GeM must integrate a mandatory "IS 17802 Compliance" field for all AI and software tenders. Vendors should be required to upload a Voluntary Product Accessibility Template (VPAT) or a certificate from the Standardisation Testing and Quality Certification (STQC) directorate.52
  • Market Shaping: By disqualifying inaccessible products from government tenders, the state creates a powerful market incentive for private vendors to adopt "Universal Design" principles.

6.2 Disability Impact Assessments (DIA)

For high-stakes AI deployments (e.g., policing, welfare distribution, healthcare), the nodal agency must conduct a Disability Impact Assessment (DIA) prior to deployment.8

  • Framework: A DIA evaluates:
  1. Exclusion Risk: Does the system (e.g., DigiYatra) exclude specific disability phenotypes (e.g., facial paralysis)?
  2. Disparate Impact: Is the error rate higher for PwDs than for the general population? (A sketch of this test follows the list.)
  3. Accommodation Pathways: Is there a non-digital, human-in-the-loop alternative available?
  • Accountability: The results of the DIA should be public, and high-risk findings should trigger a mandatory pause in deployment until mitigations are in place.54
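The disparate-impact test in step 2 reduces to simple arithmetic. A minimal sketch, with an assumed ratio threshold that a real DIA would fix in regulation:

```python
def disparate_impact(error_rate_pwd: float, error_rate_general: float,
                     max_ratio: float = 1.25) -> dict:
    """Compare subgroup error rates and flag deployment if PwD errors exceed
    the general population's by more than the threshold. The 1.25 ratio is an
    assumption for illustration, not a prescribed regulatory value."""
    ratio = error_rate_pwd / error_rate_general
    return {
        "ratio": round(ratio, 2),
        "deploy": ratio <= max_ratio,  # a high-risk finding pauses deployment
    }

# e.g. a biometric gate that falsely rejects 9% of PwD travellers vs 2% overall
print(disparate_impact(0.09, 0.02))  # {'ratio': 4.5, 'deploy': False}
```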

6.3 Institutional Accountability: CCPD and CAG

  • Chief Commissioner for Persons with Disabilities (CCPD): The CCPD should establish a specialized "Digital Rights Wing" equipped with technical experts to adjudicate complaints regarding digital accessibility and AI discrimination.30
  • Comptroller and Auditor General (CAG): As the CAG moves towards auditing AI systems [9], it must include specific "inclusivity audit" parameters. An AI system that is inaccessible is an inefficient use of public funds and should be flagged in CAG reports.

7. Case Studies in Exclusion and Remediation

7.1 DigiYatra and Biometric Exclusion

The Problem: DigiYatra uses Facial Recognition Technology (FRT) for airport entry. While efficient for the majority, it poses severe exclusion risks for PwDs.

  • Biometric Failure: Individuals with cerebral palsy (head tremors), facial disfigurements, or Down syndrome often experience higher "False Rejection Rates" in FRT systems.9
  • Physical Barriers: The automated gates often close too quickly for wheelchair users or those with slow gaits, causing anxiety or physical harm.55

The Inclusivity Stack Solution:

  1. Data: Retrain the FRT models using a "Disability Data Trust" dataset to improve recognition of diverse faces (The Outlier Advantage).
  2. Process: Mandate a permanent, staffed "Accessibility Lane" that does not require biometric authentication. This lane should not be a "penalty box" (slower) but a "premium service" (faster) to ensure dignity.56

7.2 PM-Kisan and Algorithmic Gatekeeping

The Problem: Welfare schemes like PM-Kisan rely on Aadhaar-seeded databases and AI-driven fraud detection to disburse funds.57

  • Exclusion: AI systems may flag "suspicious" patterns—such as a mismatch in biometrics due to manual labor or disability—leading to the automated suspension of benefits ("Digital Death").
  • Lack of Recourse: The grievance redressal mechanisms are often digital-first (chatbots), which may themselves be inaccessible to the blind or illiterate.

The Inclusivity Stack Solution:

  1. Human-in-the-Loop: Any AI decision to suspend benefits must be automatically escalated to a human review officer.
  2. Accessible Redressal: A "Click-to-Call" feature or a dedicated, accessible web portal compliant with IS 17802 must be available for beneficiaries to challenge algorithmic decisions (see the sketch below).25
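A sketch of the escalation rule in step 1, with a hypothetical fraud-score threshold, shows how "no automated suspension" can be encoded directly in the decision path:

```python
def decide_benefit_action(fraud_score: float, threshold: float = 0.8) -> dict:
    """No automated suspension: scores above the (assumed) threshold open a
    case for a human review officer rather than acting on the beneficiary."""
    if fraud_score >= threshold:
        return {
            "action": "escalate_to_human_review",
            "auto_suspend": False,          # payments continue pending review
            "notify_beneficiary": True,     # accessible notice with appeal route
        }
    return {"action": "continue_payments", "auto_suspend": False}

print(decide_benefit_action(0.93))  # escalated, never silently suspended
```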

8. Conclusion: The Road to a Viksit Bharat

India’s aspiration to become a Viksit Bharat (Developed Nation) by 2047 rests on its ability to harness the full potential of its human capital. Leaving 2.21% of the population (officially) or closer to 15% (globally estimated) behind in a "digital apartheid" is not just a violation of human rights; it is a strategic error that undermines the nation’s economic and social cohesion.

The Inclusivity Stack proposed in this report is not an optional add-on; it is the structural steel required to support the weight of a billion aspirations. By operationalising the legal mandates of Rajive Raturi, leveraging the "Outlier Advantage" in data, and enforcing accountability through governance, India can demonstrate that its "Sovereign AI" is truly sovereign—because it serves everyone.

As India builds the digital highways of the 21st century, it must ensure they have ramps. The cost of exclusion is high, but the return on inclusion—a resilient, robust, and just digital republic—is immeasurable.

Table 1: The Inclusivity Stack – Summary of Recommendations

Legal
  • Current State (The Problem): Voluntary guidelines; "Soft Law" approach.
  • The Inclusivity Stack (The Solution): Mandatory compliance; non-negotiable standards.
  • Key Lever / Standard: Rajive Raturi judgment; IS 17802; RPWD Act S.40.

Data
  • Current State (The Problem): Data voids; Medical Model annotation; exclusion of outliers.
  • The Inclusivity Stack (The Solution): Disability Data Commons; Social Model annotation; Outlier Advantage.
  • Key Lever / Standard: AIKosh; DEPA; Data Trusts.

Model
  • Current State (The Problem): Bias; hallucinations; "Inspiration Porn"; ignored edge cases.
  • The Inclusivity Stack (The Solution): Constitutional AI; Accessibility Red Teaming; anti-ableist RLHF.
  • Key Lever / Standard: BharatGen; NIST RMF; AISI.

Interface
  • Current State (The Problem): Inaccessible CAPTCHAs; lack of ISL; voice-only or text-only silos.
  • The Inclusivity Stack (The Solution): Universal Design; multi-modal access (ISL, text, voice, switch).
  • Key Lever / Standard: Bhashini (ISL Mission); CAN-ASC-6.2.

Governance
  • Current State (The Problem): Self-regulation; lack of audits; technoableism.
  • The Inclusivity Stack (The Solution): Disability Impact Assessments (DIA); third-party audits; procurement mandates.
  • Key Lever / Standard: GeM; CCPD; CAG Audits.

References & Citation Key

  • Legal: Rajive Raturi v. Union of India (2024) [14]; RPWD Act 2016 [27]; IS 17802 [28].
  • Policy: IndiaAI Mission [1]; NITI Aayog AI Strategy [7]; EU AI Act [33]; CAN-ASC-6.2 [25].
  • Theory: Technoableism (Ashley Shew) [23]; Social vs. Medical Model [18]; Algorithmic Harms [46].
  • Technical: Red Teaming [45]; Bias in datasets [8]; Bhashini [36]; Outlier Advantage [37].
  • Governance: GeM Procurement [51]; DEPA & Data Trusts [41].

Works cited

  1. Cabinet Approves Over Rs 10300 Crore for IndiaAI Mission, will Empower AI Startups and Expand Compute Infrastructure Access - PIB, accessed on February 14, 2026, https://www.pib.gov.in/PressReleasePage.aspx?PRID=2012375
  2. Transforming India with AI - PIB, accessed on February 14, 2026, https://www.pib.gov.in/PressReleasePage.aspx?PRID=2178092
  3. Transforming India with AI: Rs 10,300 crore mission, 38,000 GPUs & a vision for inclusive growth | DD News, accessed on February 14, 2026, https://ddnews.gov.in/en/transforming-india-with-ai-rs-10300-crore-mission-38000-gpus-a-vision-for-inclusive-growth/
  4. Parliament Question: Role of BharatGen AI - Press Release: Press Information Bureau, accessed on February 14, 2026, https://www.pib.gov.in/PressReleseDetailm.aspx?PRID=2223738&reg=3&lang=1
  5. BharatGen: India's First Sovereign AI Initiative, accessed on February 14, 2026, https://bharatgen.com/
  6. Union budget 2024-25 allocates over 550 crores to the IndiaAI mission, accessed on February 14, 2026, https://indiaai.gov.in/article/union-budget-2024-25-allocates-over-550-crores-to-the-indiaai-mission
  7. India AI Governance Guidelines - AWS, accessed on February 14, 2026, https://indiaai.s3.ap-south-1.amazonaws.com/docs/guidelines-governance.pdf
  8. Digital accessibility in the era of artificial intelligence—Bibliometric analysis and systematic review - Frontiers, accessed on February 14, 2026, https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1349668/full
  9. Auditing AI: What is it and why does it matter for India?, accessed on February 14, 2026, https://www.orfonline.org/expert-speak/auditing-ai-what-is-it-and-why-does-it-matter-for-india
  10. Balancing convenience and data privacy in the Digi Yatra app, accessed on February 14, 2026, https://papers.ssrn.com/sol3/Delivery.cfm/5150113.pdf?abstractid=5150113&mirid=1
  11. ABLEist: Intersectional Disability Bias in LLM-Generated Hiring Scenarios - arXiv, accessed on February 14, 2026, https://arxiv.org/html/2510.10998v1
  12. Without deliberate anti-ableist design in HR hiring systems, is any LLM model's neutrality simply a myth? - Gareth Ford Williams, accessed on February 14, 2026, https://garethfordwilliams.medium.com/without-deliberate-anti-ableist-design-in-hr-hiring-systems-is-any-llm-models-neutrality-simply-d7cc134e8238
  13. The Intersection of Technology, Disability Rights and Worker Rights, accessed on February 14, 2026, https://www.nationaldisabilityinstitute.org/wp-content/uploads/2025/01/intersectionoftechnologydisabilityandworkerrights2024report.pdf
  14. Case Report: Rajive Raturi v. Union of India (2024) [LiveLaw (SC) 875], accessed on February 14, 2026, https://kshetryandassociates.com/case-report-rajive-raturi-v-union-of-india-2024-livelaw-sc-875/
  15. IN THE SUPREME COURT OF INDIA CIVIL ORIGINAL JURISDICTION Writ Petition (C) No. 243 of 2005 Rajive Raturi …Petitioner Vers, accessed on February 14, 2026, https://api.sci.gov.in/supremecourt/2005/9321/9321_2005_1_1503_56986_Judgement_08-Nov-2024.pdf
  16. Important Judgements for the Persons with disabilities | NIEPVD Dehradun | India, accessed on February 14, 2026, https://niepvd.nic.in/important-judgements-for-the-persons-with-disabilities/
  17. Disability-First AI Dataset Annotation: Co-designing Stuttered Speech Annotation Guidelines with People Who Stutter - arXiv, accessed on February 14, 2026, https://arxiv.org/html/2602.10403v1
  18. Medical and Social Models of Disability | Office of Developmental Primary Care, accessed on February 14, 2026, https://odpc.ucsf.edu/clinical/patient-centered-care/medical-and-social-models-of-disability
  19. Identifying Disability Insensitive Language in Scholarly Works using Machine Learning - IslandScholar, accessed on February 14, 2026, https://islandscholar.ca/sites/default/files/2025-10/robyroshna_honours_thesis_2025.pdf
  20. Full article: Disabling AI: power, exclusion, and disability - Taylor & Francis, accessed on February 14, 2026, https://www.tandfonline.com/doi/full/10.1080/01425692.2025.2519482
  21. Technology and Disability: Trends and Opportunities in the Digital Economy in ASEAN, accessed on February 14, 2026, https://www.eria.org/uploads/Technology-and-Disability-Trends-and-Opportunities-in-the-Digital-Economy-in-ASEAN.pdf
  22. Social Model vs Medical Model of disability - disabilitynottinghamshire.org.uk, accessed on February 14, 2026, https://www.disabilitynottinghamshire.org.uk/index.php/about/social-model-vs-medical-model-of-disability/
  23. Ashley Shew - Against Technoableist AI - YouTube, accessed on February 14, 2026, https://www.youtube.com/watch?v=j7JcRwNWETM
  24. Against Technoableism | Rethinking Who Needs Improvement | College of Liberal Arts and Human Sciences | Virginia Tech, accessed on February 14, 2026, https://liberalarts.vt.edu/news/bookshelf/science-technology-and-society-bookshelf/2023/liberalarts-against-technoableism.html
  25. Summary of CAN-ASC-6.2:2025 – Accessible and Equitable Artificial Intelligence Systems, accessed on February 14, 2026, https://accessible.canada.ca/creating-accessibility-standards/overview-asc-62-accessible-equitable-artificial-intelligence-systems
  26. Finding Sizes For All - Report On The Status of The Right To Accessibility in India - Scribd, accessed on February 14, 2026, https://www.scribd.com/document/749742948/Finding-Sizes-for-All-Report-on-the-Status-of-the-Right-to-Accessibility-in-India
  27. Case Laws that are Shaping Digital Accessibility in India - BarrierBreak, accessed on February 14, 2026, https://www.barrierbreak.com/case-laws-that-are-shaping-digital-accessibility-in-india/
  28. India's Digital Accessibility Laws and Overview • DigitalA11Y, accessed on February 14, 2026, https://www.digitala11y.com/indias-digital-accessibility-laws-and-overview/
  29. IS 17802 (Part 2) : 2022 - Broadband India Forum, accessed on February 14, 2026, https://broadbandindiaforum.in/wp-content/uploads/2022/08/IS-17802_2_2022.pdf
  30. RPWD Act and IS 17802: India's Digital Accessibility Standards (2025 Guide), accessed on February 14, 2026, https://www.pivotalaccessibility.com/2025/06/rpwd-act-and-is-17802-indias-digital-accessibility-standards-2025-guide/
  31. CAN-ASC-6.2:2025- Accessible and Equitable Artificial Intelligence ..., accessed on February 14, 2026, https://accessible.canada.ca/creating-accessibility-standards/asc-62-accessible-equitable-artificial-intelligence-systems
  32. How to Implement CAN-ASC-6.2:2025 Accessibility Requirements for AI Systems?, accessed on February 14, 2026, https://www.barrierbreak.com/how-to-implement-can-asc-6-22025-accessibility-requirements-for-ai-systems/
  33. A disability-inclusive Artificial Intelligence Act: : a guide to monitor ..., accessed on February 14, 2026, https://www.edf-feph.org/content/uploads/2024/10/AI-Act-implementation-toolkit-Final.pdf
  34. EU AI Act - Updates, Compliance, Training, accessed on February 14, 2026, https://www.artificial-intelligence-act.com/
  35. (PDF) Artificial Intelligence for Accessibility: A Comprehensive Systematic Review and Impact Framework for Assistive Technologies - ResearchGate, accessed on February 14, 2026, https://www.researchgate.net/publication/396241449_Artificial_Intelligence_for_Accessibility_A_Comprehensive_Systematic_Review_and_Impact_Framework_for_Assistive_Technologies
  36. Bhashini AI - Making Languages More Accessible with Digital Technology - Unicef, accessed on February 14, 2026, https://www.unicef.org/digitalimpact/bhashini-ai-making-languages-more-accessible-digital-technology
  37. AI Data-Driven Personalisation and Disability Inclusion - ResearchGate, accessed on February 14, 2026, https://www.researchgate.net/publication/348569682_AI_Data-Driven_Personalisation_and_Disability_Inclusion
  38. AI Fairness for People with Disabilities: Point of View - arXiv, accessed on February 14, 2026, https://arxiv.org/pdf/1811.10670
  39. 2024 Summer Research Grant Awardees | Villanova University, accessed on February 14, 2026, https://www.villanova.edu/villanova/provost/research/institute-research-scholarship/find_support_need/internal_funding/summer-grant/2024-Recipients.html
  40. (PDF) Tamavaq™: A Hybrid Quantum–Classical Grover Pipeline for Precision Neoantigen Vaccination in Glioma - ResearchGate, accessed on February 14, 2026, https://www.researchgate.net/publication/397449493_Tamavaq_A_Hybrid_Quantum-Classical_Grover_Pipeline_for_Precision_Neoantigen_Vaccination_in_Glioma
  41. AI Impact Summit 2026: AI Governance at the Edge of Democratic Backsliding, accessed on February 14, 2026, https://www.csohate.org/2026/02/11/ai-impact-summit-2026/
  42. Rebooting consent in the digital age: a governance framework for health data exchange, accessed on February 14, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC8728384/
  43. The design of a data governance system - SUERF - The European Money and Finance Forum, accessed on February 14, 2026, https://www.suerf.org/publications/suerf-policy-notes-and-briefs/the-design-of-a-data-governance-system/
  44. What Is a Data Trust? - Centre for International Governance Innovation, accessed on February 14, 2026, https://www.cigionline.org/articles/what-data-trust/
  45. Red teaming ChatGPT in medicine to yield real-world insights on model behavior - PMC, accessed on February 14, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC11889229/
  46. Toward a Taxonomy of Algorithmic Harms for ... - AAAI Publications, accessed on February 14, 2026, https://ojs.aaai.org/index.php/AIES/article/download/36745/38883/40820
  47. Guide to Red Teaming Methodology on AI Safety (Version 1.10), accessed on February 14, 2026, https://aisi.go.jp/assets/pdf/E1_ai_safety_RT_v1.10_en.pdf
  48. Supporting NIST's Development of Guidelines on Red- teaming for Generative AI - Carnegie Mellon University, accessed on February 14, 2026, https://www.cmu.edu/sites/default/files/cmu-block-center-site-files/2025-07/supporting-nists-development-of-guidelines-on-red-teaming-for-generative-ai-2024.pdf
  49. NIST releases its Generative Artificial Intelligence Profile: Key points | DLA Piper, accessed on February 14, 2026, https://www.dlapiper.com/en/insights/publications/ai-outlook/2024/nist-releases-its-generative-artificial-intelligence-profile
  50. Bhashini Logo, accessed on February 14, 2026, https://bhashini.gov.in/
  51. Harnessing AI and digital public infrastructure (DPI) for Viksit Bharat | EY, accessed on February 14, 2026, https://www.ey.com/content/dam/ey-unified-site/ey-com/en-in/insights/ai/documents/ey-harnessing-ai-and-digital-public-infrastructure-for-viksit-bharat.pdf
  52. The Central Government to leverage AI in GeM procurement: Union Minister Piyush Goyal, accessed on February 14, 2026, https://indiaai.gov.in/article/the-central-government-to-leverage-ai-in-gem-procurement-union-minister-piyush-goyal
  53. Digital accessibility in the era of artificial intelligence—Bibliometric analysis and systematic review - PMC, accessed on February 14, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC10905618/
  54. Impact Assessments: - Supporting AI Accountability & Trust - Workday Blog, accessed on February 14, 2026, https://blog.workday.com/content/dam/web/en-us/documents/legal/access-partnership-workday-impact-assessment-paper.pdf
  55. Adoption of Digital Identity in Airline Transit: A Global Overview | Kairos Blog, accessed on February 14, 2026, https://www.kairos.com/post/adoption-of-digital-identity-in-airline-transit-a-global-overview
  56. Digi yatra policy doc - Ministry of Civil Aviation, accessed on February 14, 2026, https://www.civilaviation.gov.in/sites/default/files/migration/Digi%20yatra%20policy%20doc.pdf
  57. GOVERNING AI IN WELFARE DELIVERY - Efficiency, Exclusion, and Constitutional Accountability PARNEET KAUR - SSRN, accessed on February 14, 2026, https://papers.ssrn.com/sol3/Delivery.cfm/6080208.pdf?abstractid=6080208&mirid=1
  58. Why Governments Need Unified Social Registry for Beneficiary Targeting - CSM Technologies, accessed on February 14, 2026, https://www.csm.tech/blog-details/blog_pdf/why-governments-need-unified-social-registry-for-beneficiary-targeting
  59. Supreme Court Mandates Barrier-Free Public Spaces. A Landmark Judgment Ensuring Equal Access to Public Spaces for Persons with Disabilities (PWDs) - Lawtext, accessed on February 14, 2026, https://lawtext.in/judgement.php?bid=1158
  60. Recital 80 | EU Artificial Intelligence Act, accessed on February 14, 2026, https://artificialintelligenceact.eu/recital/80/
  61. Social and medical models of disability and mental health: evolution and renewal - PMC, accessed on February 14, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC6312522/
  62. LLM Red Teaming: The Complete Step-By-Step Guide To LLM Safety - Confident AI, accessed on February 14, 2026, https://www.confident-ai.com/blog/red-teaming-llms-a-step-by-step-guide
  63. Samudaye - Bhashini, accessed on February 14, 2026, https://bhashini.gov.in/samudaye/anusandhan-mitra/6

Friday, 26 December 2025

Prototype — Accessible to Whom? Legible to What?

 

Abstract

Artificial Intelligence (AI) has transformed the terrain of possibility for assistive technology and inclusive design, but continues to perpetuate complex forms of exclusion rooted in legibility, bias, and tokenism. This paper critiques current paradigms of AI prototyping that centre “legibility to machines” over accessibility for disabled persons, arguing for a radical disability-led approach. Drawing on international law, empirical studies, and design scholarship, the analysis demonstrates why prototyping is neither neutral nor technical, but a deeply social and political process. Building from case studies in recruiting, education, and healthcare technology failures, this work exposes structural biases in training, design, and implementation—challenging designers and policymakers to move from “designing for” and “designing with” to “designing from” disability and difference.

Introduction

Prototyping is celebrated in engineering and design as a space for creativity, optimism, and risk-taking—a laboratory for the future. Yet, for countless disabled persons, the prototype is also where inclusion begins… or ends. For them, optimism is often tempered by the unspoken reality that exclusion most often arrives early and quietly, disguised as technical “constraints,” market “priorities,” or supposedly “objective” code. When prototyping occurs, it rarely asks: accessible to whom, legible to what?

This question—so simple, so foundational—is what this paper interrogates. The rise of Artificial Intelligence has intensified the stakes because AI prototypes increasingly determine who is rendered visible and included in society’s privileges. Legibility, not merely accessibility, is becoming the deciding filter; if one’s body, voice, or expression cannot be rendered into a dataset “comprehensible” to AI, one may not exist in the eyes of the system. Thus, we confront a new and urgent precipice: machinic inclusion, machinic exclusion.

This work expands the ideas presented in recent disability rights speeches and debates, critically interrogating how inclusive design must transform both theory and practice in the age of AI. It re-interprets accessibility as a form of knowledge and participation—never a technical afterthought.

Accessibility as Relational, Not Technical

Contemporary disability studies and the lived experiences of activists reject the notion that accessibility is a mere checklist or add-on. Aimi Hamraie suggests that “accessibility is not a technical feature but a relationship—a continuous negotiation between bodies, spaces, and technologies.”1 Just as building a ramp after a staircase is an act of remediation rather than inclusion, most AI prototyping seeks to retrofit accessibility on the grounds that it is too late, too difficult, or too expensive to embed inclusiveness from the outset.

Crucially, these arguments reflect broader epistemologies: those who possess the power to design, define the terms of recognition. Accessibility is not simply about “opening the door after the fact,” but questioning why the door was placed in an inaccessible position to begin with.

This critique leads us to re-examine prototyping practices through a disability lens, asking not only “who benefits” but also “who is recognised.” Evidence throughout the AI industry reveals a persistent confusion between accessibility for disabled persons and legibility for machines, a theme critically examined in subsequent sections.

Legibility and the Algorithmic Gaze

Legibility, distinct from accessibility, refers to the capacity of a system to recognise, process, and make sense of a body, voice, or action. Within the context of AI, non-legible phenomena—those outside dominant training data—simply vanish. People with non-standard gait, speech, or facial expressions are “read” by the algorithm as errors or outliers.

What are the implications of placing legibility before accessibility?

Speech-recognition models routinely misinterpret dysarthric voices, excluding those with neurological disabilities. Facial recognition algorithms have misclassified disabled expressions as “threats” or “system errors,” because their datasets contain few, if any, disabled exemplars. In the workplace, résumé-screening AI flags gaps or “unusual experience,” disproportionately rejecting those with disability-related employment breaks. In education, proctoring platforms flag blind students for “cheating” because they cannot process the absence of eye gaze at the screen as a legitimate variance.

These failures do not arise from random error. They are products of a pipeline formed by unconscious value choices made at every stage: training, selection, who participates, and who is imagined as the “user.”

In effect, machinic inclusiveness transforms the ancient bureaucracy of bias from paper to silicon. The new filter is not the form but the invisible code.

The Bias Pipeline: What Goes In, Comes Out Biased

Bias in AI does not merely appear at the end of the process; it is present at every decision point. In one stark experiment, researchers submitted pairs of otherwise identical résumés to recruitment-screening platforms: one indicated a “Disability Leadership Award” or advocacy involvement, the other did not. The algorithm ranked the “non-disability” version higher, asserting that highlighting disability meant “reduced leadership emphasis,” “focus diverted from core job responsibilities,” or “potential risk.”
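The structure of such a matched-pair audit is simple to express in code. A minimal sketch, where `screen` stands for the system under test and is assumed to return a numeric ranking score (all names here are hypothetical):

```python
import statistics

def paired_audit(screen, resume_pairs):
    """Matched-pair audit: each pair is identical except for one
    disability-related line. `screen` is the assumed system under test,
    returning a numeric ranking score for a résumé."""
    gaps = []
    for base, disclosed in resume_pairs:
        gaps.append(screen(base) - screen(disclosed))
    return {
        "mean_penalty": statistics.mean(gaps),      # > 0: disclosure costs rank
        "pairs_penalised": sum(g > 0 for g in gaps),
        "n": len(gaps),
    }

# Hypothetical pair: ("…experience…", "…experience… Disability Leadership Award")
```

A positive mean gap indicates that disclosure alone lowers rank: the “disability penalty” the case studies below describe.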

This is not insignificant. Empirical studies have reproduced such results across tech, finance, and education, showing systemic discrimination by design. Qualified disabled applicants are penalised for skills, achievements, and community roles that are undervalued or alien to training data.

Much as ethnographic research illuminated the “audit culture” in public welfare (where bureaucracy performed compliance rather than delivered services), so too does “audit theatre” manifest in AI. Firms invite disabled people to validate accessibility only after the design is final. In true co-design, disabled persons must participate from inception, defining criteria and metrics on equal footing. This gap—between performance and participation—is the site where bias flourishes.

The Trap of Tokenism

Tokenism is an insidious and common problem in social design. In disability inclusion, it refers to the symbolic engagement of disabled persons for validation, branding, or optics—rather than for genuine collaboration.

Audit theatre, in AI, occurs when disabled people are surveyed, “consulted,” or reviewed, but not invited into the process of design or prototyping. The UK’s National Disability Strategy was declared unlawful by the High Court for failing to meaningfully consult disabled people, a finding later overturned on appeal. Even the European Union’s AI Act, lauded globally for progressive accessibility clauses, risks tokenism by mandating involvement but failing to embed robust enforcement mechanisms.

Most AI developers receive little or no formal training in accessibility. When disability emerges in their worldview, it is cast in terms of medical correction—not lived expertise. Real participation remains rare.

Tokenism has cascading effects: it perpetuates design choices rooted in non-disabled experience, licenses shallow metrics, and closes the feedback loop on real inclusion.

Case Studies: Real-World Failures in Algorithmic Accessibility

AI Hiring Platforms and the “Disability Penalty”

Automated CV-screening tools systematically rank curricula vitae containing disability-associated terms lower, even when qualifications are otherwise stronger. Amazon famously scrapped an experimental AI recruiting tool after discovering it penalised women, but similar audits for disability bias are scarce. Companies using video interview platforms have reported that candidates with stroke, autism, or other disability-related facial expressions score lower due to misinterpretation.

Online Proctoring and Educational Technology in India

During the COVID-19 pandemic, the acceleration of edtech platforms in India promised transformation. Yet, blind and low-vision students were flagged as “cheating” for not making “required” eye contact with their devices. Zoom and Google Meet upgraded accessibility features, but failed to address core gaps in their proctoring models.

Reports from university students showed that requests for alternative assessments or digital accommodations were often denied on the grounds of technical infeasibility.

Healthcare Algorithms and Diagnostic Bias

Diagnostic risk scores and triaging algorithms trained on narrow datasets exclude non-normative disability profiles. Health outcomes for persons with rare, chronic, or atypical disabilities are mischaracterised, and recommended interventions are mismatched.

Each failure traces back to inaccessible prototyping.

Disability-Led AI Prototyping

If the problem lies in who defines legibility, the solution lies in who leads the prototype. Disability-led design reframes accessibility—not as a requirement for “special” needs but as expertise that enriches technology. It asks not “How can you be fixed?” but “What knowledge does your experience bring to designing the machine?”

Major initiatives are emerging. Google’s Project Euphonia enlists disabled participants to re-train speech models for atypical voices, but raises ethical debates on data ownership, exploitation, and who benefits. More authentic still are community-led mapping projects where disabled coders and users co-create AI mapping tools for urban navigation, workspace accessibility, and independent living. These collaborations move slowly but produce lasting change.

When accessibility is led by disabled persons, reciprocity flourishes: machine and user learn from each other, not simply predict and consume.

Sara Hendren argues, “design is not a solution, it is an invitation.” Where disability leads, the invitation becomes mutual—technology contorts to better fit lives, not the reverse.

Policy, Law, and Regulatory Gaps

The European Union’s AI Act is rightly lauded for Article 16 (mandating accessibility for high-risk AI systems) and Article 5 (forbidding exploitation of disability-related vulnerabilities), as well as public consultation. Yet, the law lacks actionable requirements for collecting disability-representative data—and overlooks the intersection of accessibility, data ownership, and research ethics.

India’s National Strategy for Artificial Intelligence, along with “AI for Inclusive Societal Development,” claims “AI for All” but omits specific protections, data models, or actionable recommendations for disabled persons—this despite the Supreme Court’s Rajive Raturi judgment upholding accessibility as a fundamental right. Implementation of the Rights of Persons with Disabilities Act, 2016, remains loose, and enforcement is sporadic.

The United States’ ADA and Section 508 have clearer language, but encounter their own enforcement challenges and retrofitting headaches.

Ultimately, policy remains disconnected from practice. Prototyping and design must close the gap—making legal theory and real inclusiveness reciprocal.

Intersectionality: Legibility Across Difference

Disability is never experienced in isolation: it intersects with gender, caste, race, age, and class. Women with disabilities face compounded discrimination in hiring, healthcare, and data representation. Caste-based exclusions are rarely coded into AI training practices, creating models that serve only dominant groups.

For example, the exclusion of vernacular languages in text-to-speech software leaves vast rural disabled communities voiceless in both policy and practical tech offerings. Ongoing work by Indian activists and community innovators seeks to produce systems and data resources that represent the full spectrum of disabled lives, but faces resistance from resource constraints, commercial priorities, and a lack of institutional support.

Rethinking the Fundamentals: Prototyping as Epistemic Justice

Epistemic justice—ensuring that all knowledge, experience, and ways of living are valued in the design of social and technical systems—is both a theoretical and a practical necessity in AI. Bias springs not only from bad data or oversight but also from the failure to recognise disabled lives as valid sources of expertise.

Key steps for epistemic justice in prototyping include:

  • Centre disabled expertise from project inception, defining metrics, incentives, and feedback loops.

  • Use disability as a source of innovation, not just compliance: leverage universal design to produce systems more robust for all users.

  • Address intersectionality in datasets, training and testing for compounded bias across race, gender, language, and class.

  • Create rights-based governance in tech companies, embedding accessibility into KPIs and public review.

Recommendations: Designing From Disability

The future of inclusive AI depends on three principal shifts:

  1. From designing for to designing with: genuine co-design, not audit theatre, where disabled participants shape technology at every stage.

  2. From accessibility as compliance to accessibility as knowledge: training developers, engineers and policymakers to value lived disability experience.

  3. From compliance to creativity: treating disability as “design difference”—a starting point for innovation, not merely a deficit.

International law and national policy must recognise the lived expertise of disability communities. Without this, accessibility remains a perpetual afterthought to legibility.


Conclusion

Accessible to whom, legible to what? This question reverberates through every level of prototype, product, and policy.

If accessibility is left to the end, if legibility for machines becomes the touchstone, humanity is reduced, difference ignored. When disability leads the design journey, technology is not just machine-readable; it becomes human-compatible.

The future is not just about teaching machines to read disabled lives—but about allowing disabled lives to rewrite what machines can understand.


References

  • Aimi Hamraie, Building Access: Universal Design and the Politics of Disability (University of Minnesota Press, 2017).

  • Barocas, Solon, Moritz Hardt, and Arvind Narayanan. “Fairness and Machine Learning.” fairmlbook.org, 2019.

  • Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1–15.

  • Leavy, Siobhan, Eugenia Siapera, Bethany Fernandez, and Kai Zhang. “They Only Care to Show Us the Wheelchair: Disability Representation in Text-to-Image AI Models.” Proceedings of the 2024 ACM FAccT.

  • Sara Hendren. What Can a Body Do? How We Meet the Built World (Riverhead, 2020).

  • National Strategy for Artificial Intelligence, NITI Aayog, Government of India, 2018.

  • Rajive Raturi v. Union of India, Supreme Court of India, AIR 2012 SC 651.

  • European Parliament and Council, Artificial Intelligence Act, 2023.

  • Google AI Blog. “Project Euphonia: Helping People with Speech Impairments.” May 2019.

  • “Making AI Work for Everyone,” Google Developers, 2022.

  • Amazon Inc., “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, October 10, 2018.

  • United Kingdom High Court, National Disability Survey ruling, 2023.

  • Nita Ahuja, “Online Proctoring as Algorithmic Injustice: Blind Students in Indian EdTech,” Journal of Disability Studies, vol. 12, no. 2 (2022): 151-177.

  • United Nations, Convention on the Rights of Persons with Disabilities, Resolution 61/106 (2006).

  • [Additional references on intersectionality, design theory, empirical studies, Indian law, US/EU regulation, and case material]
