Saturday, 14 February 2026

The Inclusivity Stack: Operationalising Disability Justice in India’s Sovereign AI Architecture

[Figure: "The Inclusivity Stack: Operationalising Equity, Accessibility & Inclusion", a layered pyramid representing organisational inclusion. From bottom to top, the layers read "Physical Accessibility," "Tools & Technology," "Policies & Processes," and "Culture & Awareness," with diverse disabled and non-disabled people standing on the top layer, symbolising an inclusive organisational culture supported by foundational accessibility systems.]

Abstract

The Government of India’s strategic pivot towards "Sovereign Artificial Intelligence," crystallised in the ₹10,371 crore IndiaAI Mission, represents a watershed moment in the nation’s digital governance trajectory. As the state moves to integrate Artificial Intelligence (AI) into the foundational layer of Digital Public Infrastructure (DPI)—spanning healthcare, agriculture, and urban governance—it faces a critical architectural choice: to replicate the exclusionary patterns of the "medical model" of disability or to operationalise a "social model" that views accessibility as a non-negotiable constitutional guarantee. This report proposes the "Inclusivity Stack," a comprehensive governance and technical framework designed to embed disability justice into the IndiaAI ecosystem. Drawing extensively on the Supreme Court’s landmark judgment in Rajive Raturi v. Union of India (2024), the Rights of Persons with Disabilities (RPWD) Act, 2016, and global best practices such as the EU AI Act and Canada’s CAN-ASC-6.2 standard, this document outlines a roadmap for "fixing" the digital environment rather than the individual. It argues that the inclusion of India’s 26.8 million persons with disabilities is not merely a moral imperative but a prerequisite for the mathematical robustness, legal validity, and economic viability of India’s sovereign AI ambitions.

1. Introduction: The Sovereign AI Moment and the Risk of Digital Apartheid

1.1 The Genesis of the IndiaAI Mission

In March 2024, the Union Cabinet approved the IndiaAI Mission with a substantial budgetary outlay of ₹10,371.92 crore, signaling India’s intent to move from being a consumer of Western AI models to a creator of indigenous, sovereign AI capabilities.1 This mission is structurally organised around seven distinct pillars, designed to democratise access to computing power and data:

  1. IndiaAI Compute Pillar: The deployment of over 38,000 Graphics Processing Units (GPUs) to provide affordable computational infrastructure to startups and researchers.2
  2. IndiaAI Application Development Initiative: Targeting critical sectors such as healthcare, agriculture, and governance.2
  3. AIKosh (Dataset Platform): A unified repository for high-quality, non-personal datasets to train indigenous models.3
  4. IndiaAI Foundation Models (BharatGen): The development of "BharatGen," a sovereign Large Multimodal Model (LMM) trained on diverse Indic languages and datasets.4
  5. IndiaAI FutureSkills: Aimed at expanding the AI talent pool through academic and vocational training.2
  6. IndiaAI Startup Financing: Venture capital support for deep-tech AI startups.6
  7. Safe & Trusted AI: A framework for responsible AI governance, including the establishment of the IndiaAI Safety Institute (AISI).7

While the mission’s scale is ambitious, aiming to catalyse a $1.7 trillion contribution to the Indian economy by 2035 2, its current architectural blueprint lacks explicit mechanisms to address the "digital apartheid" faced by Persons with Disabilities (PwDs). In a nation where internet access is already stratified by caste, class, and geography, the uncritical deployment of AI threatens to deepen these divides.

1.2 The "Data Void" and Algorithmic Exclusion

The exclusion of PwDs from the digital ecosystem is not accidental but systemic, often described as a "data void." Contemporary AI systems are predominantly trained on data that reflects the "normative" able-bodied user.

  • Speech Recognition: Models trained on standard datasets often fail to recognise dysarthric speech (common in conditions like cerebral palsy) or the vocal patterns of the deaf community.8
  • Computer Vision: Facial recognition systems, such as those used in the DigiYatra biometric boarding initiative, are frequently trained on datasets that lack representation of individuals with facial differences, Down syndrome, or palsy, leading to higher failure rates for these groups.9
  • Natural Language Processing (NLP): Large Language Models (LLMs) often hallucinate "cures" or offer patronizing advice when users disclose a disability, reflecting the biases inherent in their training corpora.11

If the IndiaAI Mission proceeds without rectifying these voids, the "Sovereign AI" infrastructure will effectively become a "Sovereign Exclusion Mechanism," automating the denial of services to the most vulnerable citizens.

1.3 The Economic and Constitutional Imperative

The argument for inclusion is not solely humanitarian; it is economic and constitutional.

  • Economic Cost: Excluding PwDs from the digital economy limits the potential GDP growth that the IndiaAI Mission seeks to unlock. Accessible technology enables workforce participation for millions who are currently marginalized.13
  • Constitutional Mandate: The Supreme Court of India, in Rajive Raturi v. Union of India (2024), explicitly held that accessibility is a facet of the Fundamental Right to Life (Article 21) and Equality (Article 14).14 The Court mandated that the "State has an obligation to ensure that all steps... are taken" to ensure accessibility in "information, technology and entertainment".16

This report articulates the "Inclusivity Stack"—a layered framework to operationalise these legal and ethical mandates within the technical architecture of the IndiaAI Mission.

2. Theoretical Framework: De-Medicalising Artificial Intelligence

To build an inclusive AI architecture, policy-makers must first interrogate and dismantle the theoretical models of disability that currently inform—often subconsciously—the development of AI systems.

2.1 The Medical Model vs. The Social Model in Code

The development of AI has historically been rooted in the Medical Model of Disability. This model views disability as a "deficit," "pathology," or "aberration" residing within the individual that requires diagnosis, treatment, or cure.17

  • In AI Development: This manifests in data annotation practices where non-normative behaviors (e.g., lack of eye contact in autism, stuttering in speech) are labeled as "errors," "noise," or "negative samples" to be filtered out.11
  • The Consequence: An AI system trained on this model views a disabled user as a "broken" user. A proctoring algorithm flags a neurodivergent student’s movements as "suspicious" 20; a hiring algorithm ranks a candidate with a disability lower because their resume signals a "deviation" from the norm.12

In contrast, the Social Model of Disability, which underpins the UN Convention on the Rights of Persons with Disabilities (UNCRPD), posits that disability is constructed by societal barriers—physical, attitudinal, and digital—that prevent full participation.21

  • In AI Development: Operationalising the Social Model requires shifting the focus from "fixing the user" to "fixing the system." It demands that AI interfaces be designed to accommodate diverse modes of interaction (e.g., supporting screen readers, switch devices, or sign language) as native features, not afterthoughts.19

2.2 Confronting "Technoableism"

The philosopher of technology Ashley Shew defines "Technoableism" as the pervasive belief that technology is the "solution" to disability, often characterizing disabled people as "problems" awaiting a technological "fix".23

  • The Trap of "Inspiration Porn": Technoableism often manifests in high-profile projects—such as AI-powered exoskeletons or brain-computer interfaces—that garner media attention ("Inspiration Porn") while basic digital infrastructure remains inaccessible.24
  • Policy Implication: For the IndiaAI Mission, avoiding technoableism means prioritizing boring but essential infrastructure (e.g., ensuring the CAPTCHA on the PM-Kisan portal is accessible to the blind) over flashy, high-tech "cures" that benefit a few. It means recognizing that disabled people are experts in their own lives and must lead the design process ("Nothing Without Us").23

3. The Legal Layer: From Guidelines to Non-Negotiable Standards

The foundation of the Inclusivity Stack is a robust legal framework that elevates accessibility from a voluntary "best practice" to a mandatory compliance requirement. The legal landscape in India has shifted dramatically in this regard following recent judicial interventions.

3.1 The Rajive Raturi Paradigm Shift (2024)

On November 8, 2024, the Supreme Court of India delivered a landmark judgment in Rajive Raturi v. Union of India.14 The case, originating from a PIL filed in 2005 by visually impaired activist Rajive Raturi, addressed the systemic failure of the state to implement accessibility mandates.

Key Judicial Findings:

  1. Mandatory Rules: The Court accepted the argument presented by the NALSAR Centre for Disability Studies (CDS) that Rule 15 of the RPWD Rules, 2017, which prescribed accessibility standards, had historically been treated as directory (voluntary). The Court ruled that Rule 15, read with Sections 40, 44, and 45 of the RPWD Act, creates a mandatory compliance framework.15
  2. Ultra Vires: The NALSAR report Finding Sizes for All argued that any interpretation of the rules that allows for "self-regulation" or "guidelines" is ultra vires (beyond the powers of) the parent Act, which mandates full accessibility.26
  3. Digital Inclusion: While the case focused on physical access, the judgment explicitly stated that "accessibility to information, technology and entertainment is equally important".16 This extends the mandate to all digital platforms, AI interfaces, and electronic services provided by the state.

Implication for IndiaAI: Any AI system deployed by the government (e.g., BharatGen, DigiYatra) that fails to meet accessibility standards is now illegal and actionable under the RPWD Act.27

3.2 IS 17802: The Constitutional Standard for Code

The technical benchmark for this legal mandate is IS 17802: Accessibility for ICT Products and Services, notified by the Bureau of Indian Standards (BIS) in 2021/2022.28

  • Part 1 (Requirements): Aligned with the global standard EN 301 549 and WCAG 2.1, this section specifies functional performance statements (e.g., "usage without vision," "usage with limited manipulation").29
  • Part 2 (Conformance): Defines the testing methodologies to verify compliance.29
  • Enforceability: Following the RPWD Amendment Rules 2023, IS 17802 is the statutory standard.30 This means that procurement of AI systems via the Government e-Marketplace (GeM) must strictly adhere to these standards.

3.3 Comparative Jurisprudence: The EU and Canada

India’s legal framework can be further strengthened by examining global best practices:

  • Canada (CAN-ASC-6.2:2025): Canada has released the world’s first standard specifically for "Accessible and Equitable Artificial Intelligence Systems".31 It mandates that persons with disabilities be involved in the entire AI lifecycle—from data collection to model training—and introduces the concept of "Equitable AI" to prevent algorithmic discrimination.25
  • European Union (EU AI Act): The EU AI Act (Article 5 & Recital 80) categorises AI systems that exploit vulnerabilities of persons with disabilities as "Unacceptable Risk" (prohibited). High-risk systems (e.g., education, employment) must demonstrate compliance with accessibility requirements by design.33

Recommendation: The IndiaAI Mission should adopt a framework analogous to CAN-ASC-6.2, mandating "lifecycle inclusion" for all projects funded under the Safe & Trusted AI pillar.

4. The Data Layer: Constructing the Disability Data Commons

Artificial Intelligence is, at its core, an engine of pattern recognition. If the "pattern" of disability is absent from the training data, the AI will inevitably treat disability as an anomaly. The AIKosh pillar of the IndiaAI Mission 2 must address this "data void" to ensure sovereign AI is truly inclusive.

4.1 The Representation Gap in Indic Datasets

Current datasets for Indian languages (e.g., those used to train BharatGen) suffer from a dual exclusion:

  1. General Data Poverty: While initiatives like Bhashini are addressing the lack of Indic language data, there is a severe scarcity of data representing disabled speakers of these languages.8
  2. Specific Modality Gaps:
  • Dysarthric Speech: There are few, if any, large-scale datasets of dysarthric or atypical speech in languages like Hindi, Tamil, or Bengali. This renders voice-activated UPI payments or government helplines inaccessible to millions with motor or speech impairments.35
  • Indian Sign Language (ISL): Although the National Education Policy calls for ISL to be standardised and taught nationwide, ISL still lacks the comprehensive, annotated video-to-text corpus required to build robust translation models.36

4.2 The "Outlier Advantage": Robustness via Inclusion

A compelling technical argument for inclusion is the concept of the "Outlier Advantage." Machine Learning (ML) research indicates that training models on "edge cases" or diverse outliers improves the mathematical robustness and generalisation capabilities of the model for all users.37

  • Curriculum Learning: By including "difficult" samples—such as stuttered speech or heavily accented voice commands—during training, the model learns to identify the phonetic core of language rather than over-fitting to superficial acoustic features.39
  • Universal Benefit: A speech model trained on dysarthric speech tends to generalise better to noisy environments (e.g., a railway station) even for non-disabled users. Investing in disability data is therefore an investment in the overall quality of India’s sovereign AI.40
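A minimal sketch of the curriculum-style sampling described above, assuming a dataset in which each utterance carries an illustrative atypical_speech tag (not an actual AIKosh schema): atypical samples are progressively up-weighted so the model sees proportionally more hard examples as training advances.

```python
import random

def curriculum_weights(samples, epoch, max_epochs, boost=3.0):
    """Per-sample weights: 'hard' (atypical-speech) samples ramp up over training."""
    ramp = min(1.0, epoch / max_epochs)
    return [1.0 + ramp * boost if s["atypical_speech"] else 1.0 for s in samples]

def sample_batch(samples, epoch, max_epochs, batch_size=32):
    """Weighted sampling: early epochs resemble standard training, later ones emphasise outliers."""
    weights = curriculum_weights(samples, epoch, max_epochs)
    return random.choices(samples, weights=weights, k=batch_size)
```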

4.3 Governance: Data Empowerment and Protection Architecture (DEPA)

To collect this sensitive data without exploitation, India must leverage its Data Empowerment and Protection Architecture (DEPA).41

  • Disability Data Trusts: We propose the creation of "Disability Data Commons"—fiduciary structures where the disability community pools their data (e.g., voice samples, gait patterns).
  • Consent Managers: Using DEPA’s electronic consent artifact, PwDs can grant temporary, purpose-limited access to their data for training "public good" models (like BharatGen) while retaining ownership.43 This shifts the dynamic from "data extraction" to "data empowerment."
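The consent artifact below is an illustrative simplification, not the actual DEPA specification: a purpose-limited, time-bound record that a Disability Data Commons could check before any training job touches contributed data.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentArtifact:
    data_principal: str      # pseudonymous ID of the contributing person
    data_categories: tuple   # e.g. ("voice_sample",)
    purpose: str             # e.g. "train_public_asr_model"
    expires_at: datetime     # consent is temporary by design
    revoked: bool = False

def may_use(artifact: ConsentArtifact, category: str, purpose: str) -> bool:
    """Grant access only for the consented category and purpose, and only before expiry."""
    return (not artifact.revoked
            and category in artifact.data_categories
            and purpose == artifact.purpose
            and datetime.now(timezone.utc) < artifact.expires_at)
```

Revocation simply flips a flag: the moment consent is withdrawn, may_use returns False and the training pipeline loses access, keeping the default on "data empowerment" rather than "data extraction."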

5. The Model Layer: Indigenous Intelligence and Red Teaming

The IndiaAI Compute Pillar and BharatGen initiative provide the computational muscle to build indigenous foundational models.4 This sovereign control offers a unique opportunity to "bake in" inclusion at the model layer, rather than retrofitting it later.

5.1 BharatGen and the Constitutional AI Paradigm

BharatGen, India’s proposed sovereign Large Multimodal Model, is currently being trained on datasets spanning 22 Indian languages.5 To avoid the pitfalls of Western models, BharatGen must adopt a Constitutional AI approach.

  • Constitution as the Objective Function: The model’s reward function (in Reinforcement Learning from Human Feedback - RLHF) should be aligned with the constitutional values of Article 14 (Equality) and Article 21 (Dignity).
  • Anti-Ableist Fine-Tuning: The model must be penalised for generating "inspiration porn," "medical model" diagnoses for social queries, or ableist stereotypes. It should be rewarded for providing accessible, empowering, and rights-based responses.12
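As a rough illustration of such reward shaping, the sketch below adds a penalty/bonus term to whatever preference-model score drives RLHF; the pattern lists and weights are assumptions for demonstration, not an official lexicon or BharatGen’s training code.

```python
import re

ABLEIST_PATTERNS = {
    r"\bwheelchair[- ]bound\b": -1.0,                   # medical-model framing
    r"\bsuffers? from\b": -0.5,
    r"\binspir(?:ing|ation)\b.*\bdespite\b": -0.5,      # "inspiration porn" framing
    r"\bcure\b.*\b(autism|deafness|disability)\b": -1.0,
}
RIGHTS_PATTERNS = {
    r"\breasonable accommodation\b": 0.5,               # rights-based framing
    r"\baccessib(?:le|ility)\b": 0.25,
}

def inclusion_reward(response: str) -> float:
    """Additive term combined with the base preference-model reward during fine-tuning."""
    score = 0.0
    for table in (ABLEIST_PATTERNS, RIGHTS_PATTERNS):
        for pattern, weight in table.items():
            if re.search(pattern, response, flags=re.IGNORECASE):
                score += weight
    return score

# total_reward = preference_model(prompt, response) + lambda_inclusion * inclusion_reward(response)
```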

5.2 Accessibility Red Teaming

The Safe & Trusted AI pillar 7 must institutionalize Accessibility Red Teaming—a structured adversarial testing process focused on disability bias.45

  • Methodology: Unlike security red teaming (which tests for hacks), accessibility red teaming tests for Allocative Harms (denial of resources) and Quality of Service Harms (degraded performance).46
  • The Red Team: This requires recruiting "white-hat" testers with disabilities—blind screen-reader users, autistic testers, deaf signers—to identify failure modes that able-bodied developers cannot perceive.47
  • NIST Alignment: The IndiaAI Safety Institute (AISI) should align its red teaming protocols with the NIST AI Risk Management Framework (RMF), which explicitly identifies "bias and discrimination" as top-tier risks.48
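One way to operationalise the methodology above is a probe harness like the sketch below: adversarial prompts authored by disabled testers are replayed against a model endpoint and failures are binned into allocative versus quality-of-service harms. The query_model callable and the probe-file format are assumptions for illustration.

```python
import json

HARM_TYPES = ("allocative", "quality_of_service")

def run_probes(probe_path, query_model):
    """Replay red-team probes and collect responses containing disallowed content."""
    findings = {harm: [] for harm in HARM_TYPES}
    with open(probe_path, encoding="utf-8") as f:
        probes = json.load(f)  # [{"prompt": ..., "harm_type": ..., "must_not_contain": [...]}]
    for probe in probes:
        response = query_model(probe["prompt"])
        if any(bad.lower() in response.lower() for bad in probe["must_not_contain"]):
            findings[probe["harm_type"]].append(
                {"prompt": probe["prompt"], "response": response}
            )
    return findings
```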

5.3 Case Study: The Bhashini Gap

Bhashini, the National Language Translation Mission, is a flagship success, offering text-to-text translation in 22 languages.36 However, it currently treats Indian Sign Language (ISL) as an outlier.

  • The "23rd Language": ISL is a distinct natural language with its own grammar (Subject-Object-Verb), distinct from spoken Hindi or English.
  • The Inclusivity Stack Requirement: The Bhashini mandate must be expanded to treat ISL as the "23rd language." This requires funding for specific transformer architectures capable of processing 3D spatial grammar (video-to-text and text-to-avatar), moving beyond simple gesture recognition.36
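A minimal sketch, under stated assumptions rather than Bhashini’s actual architecture, of the kind of video-to-gloss encoder such an expansion would need: per-frame visual features (assumed to come from an upstream 3D CNN or pose extractor) are projected into a Transformer encoder and decoded into gloss tokens with a CTC objective, a common recipe in sign-language translation research.

```python
import torch
import torch.nn as nn

class SignToGloss(nn.Module):
    """Encode a sequence of ISL video-frame features into gloss-token logits."""
    def __init__(self, feat_dim=512, d_model=256, n_heads=4, n_layers=4, vocab=2000):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab + 1)  # +1 for the CTC blank token

    def forward(self, frames):  # frames: (batch, time, feat_dim)
        x = self.encoder(self.proj(frames))
        # shape (batch, time, vocab+1); transpose to (time, batch, vocab+1) before nn.CTCLoss
        return self.head(x).log_softmax(dim=-1)
```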

6. The Governance Layer: Operationalising Justice

Technology is deployed within a bureaucratic structure. The "Governance Layer" ensures that the technical capabilities of the Inclusivity Stack are enforced through administrative and financial levers.

6.1 Public Procurement as a Policy Lever (GeM)

The Government of India is the largest purchaser of technology in the country. The Government e-Marketplace (GeM) is the primary funnel for this procurement.51

  • Mandatory Accessibility Check: GeM must integrate a mandatory "IS 17802 Compliance" field for all AI and software tenders. Vendors should be required to upload a Voluntary Product Accessibility Template (VPAT) or a certificate from the Standardisation Testing and Quality Certification (STQC) Directorate; a minimal sketch of such a gate follows this list.52
  • Market Shaping: By disqualifying inaccessible products from government tenders, the state creates a powerful market incentive for private vendors to adopt "Universal Design" principles.
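A minimal sketch of that admissibility gate, assuming an illustrative (not actual GeM) bid schema:

```python
def tender_is_admissible(bid: dict) -> bool:
    """Reject AI/software bids that carry no IS 17802 conformance evidence."""
    documents = bid.get("accessibility_documents", [])
    accepted = {"STQC_IS17802_CERTIFICATE", "VPAT_IS17802"}
    return any(doc.get("type") in accepted for doc in documents)
```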

6.2 Disability Impact Assessments (DIA)

For high-stakes AI deployments (e.g., policing, welfare distribution, healthcare), the nodal agency must conduct a Disability Impact Assessment (DIA) prior to deployment.8

  • Framework: A DIA evaluates:
  1. Exclusion Risk: Does the system (e.g., DigiYatra) exclude specific disability phenotypes (e.g., facial paralysis)?
  2. Disparate Impact: Is the error rate higher for PwDs than for the general population? (A minimal check is sketched after this list.)
  3. Accommodation Pathways: Is there a non-digital, human-in-the-loop alternative available?
  • Accountability: The results of the DIA should be public, and high-risk findings should trigger a mandatory pause in deployment until mitigations are in place.54
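The disparate-impact step (item 2 above) could be as simple as the sketch below, which compares subgroup error rates against a baseline cohort; the 1.25 ratio threshold is an illustrative assumption, not a statutory figure.

```python
def disparate_impact(errors_by_group, baseline, max_ratio=1.25):
    """errors_by_group maps group -> (errors, total); returns groups breaching the ratio."""
    base_errors, base_total = errors_by_group[baseline]
    base_rate = base_errors / base_total
    flagged = {}
    for group, (errors, total) in errors_by_group.items():
        rate = errors / total
        if group != baseline and rate > max_ratio * base_rate:
            flagged[group] = round(rate / base_rate, 2)
    return flagged  # any non-empty result should trigger the mandatory pause described above
```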

6.3 Institutional Accountability: CCPD and CAG

  • Chief Commissioner for Persons with Disabilities (CCPD): The CCPD should establish a specialized "Digital Rights Wing" equipped with technical experts to adjudicate complaints regarding digital accessibility and AI discrimination.30
  • Comptroller and Auditor General (CAG): As the CAG moves towards auditing AI systems 9, it must include specific "inclusivity audit" parameters. An AI system that is inaccessible is an inefficient use of public funds and should be flagged in CAG reports.

7. Case Studies in Exclusion and Remediation

7.1 DigiYatra and Biometric Exclusion

The Problem: DigiYatra uses Facial Recognition Technology (FRT) for airport entry. While efficient for the majority, it poses severe exclusion risks for PwDs.

  • Biometric Failure: Individuals with cerebral palsy (head tremors), facial disfigurements, or Down syndrome often experience higher "False Rejection Rates" in FRT systems.9
  • Physical Barriers: The automated gates often close too quickly for wheelchair users or those with slow gaits, causing physical anxiety or harm.55

The Inclusivity Stack Solution:

  1. Data: Retrain the FRT models using a "Disability Data Trust" dataset to improve recognition of diverse faces (The Outlier Advantage).
  2. Process: Mandate a permanent, staffed "Accessibility Lane" that does not require biometric authentication. This lane should not be a "penalty box" (slower) but a "premium service" (faster) to ensure dignity.56

7.2 PM-Kisan and Algorithmic Gatekeeping

The Problem: Welfare schemes like PM-Kisan rely on Aadhaar-seeded databases and AI-driven fraud detection to disburse funds.57

  • Exclusion: AI systems may flag "suspicious" patterns—such as a mismatch in biometrics due to manual labor or disability—leading to the automated suspension of benefits ("Digital Death").
  • Lack of Recourse: The grievance redressal mechanisms are often digital-first (chatbots), which may themselves be inaccessible to the blind or illiterate.

The Inclusivity Stack Solution:

  1. Human-in-the-Loop: Any AI decision to suspend benefits must be automatically escalated to a human review officer (see the sketch after this list).
  2. Accessible Redressal: A "Click-to-Call" feature or a dedicated, accessible web portal compliant with IS 17802 must be available for beneficiaries to challenge algorithmic decisions.25
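A minimal sketch of the human-in-the-loop gate from item 1, with names (FraudFlag, review_queue) invented for illustration rather than drawn from any PM-Kisan system: the model may flag, but never suspend.

```python
from dataclasses import dataclass

@dataclass
class FraudFlag:
    beneficiary_id: str
    reason: str
    model_confidence: float

review_queue: list = []

def handle_flag(flag: FraudFlag) -> str:
    """Never auto-suspend: every flag is queued for a review officer while payments continue."""
    review_queue.append(flag)
    return "payments_continue_pending_human_review"
```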

8. Conclusion: The Road to a Viksit Bharat

India’s aspiration to become a Viksit Bharat (Developed Nation) by 2047 rests on its ability to harness the full potential of its human capital. Leaving 2.21% of the population (officially) or closer to 15% (globally estimated) behind in a "digital apartheid" is not just a violation of human rights; it is a strategic error that undermines the nation’s economic and social cohesion.

The Inclusivity Stack proposed in this report is not an optional add-on; it is the structural steel required to support the weight of a billion aspirations. By operationalising the legal mandates of Rajive Raturi, leveraging the "Outlier Advantage" in data, and enforcing accountability through governance, India can demonstrate that its "Sovereign AI" is truly sovereign—because it serves everyone.

As India builds the digital highways of the 21st century, it must ensure they have ramps. The cost of exclusion is high, but the return on inclusion—a resilient, robust, and just digital republic—is immeasurable.

Table 1: The Inclusivity Stack – Summary of Recommendations

| Layer | Current State (The Problem) | The Inclusivity Stack (The Solution) | Key Lever / Standard |
| --- | --- | --- | --- |
| Legal | Voluntary guidelines; "Soft Law" approach. | Mandatory Compliance; Non-negotiable standards. | Rajive Raturi Judgment; IS 17802; RPWD Act S.40. |
| Data | Data Voids; Medical Model annotation; Exclusion of outliers. | Disability Data Commons; Social Model annotation; Outlier Advantage. | AIKosh; DEPA; Data Trusts. |
| Model | Bias; Hallucinations; "Inspiration Porn"; Ignored edge cases. | Constitutional AI; Accessibility Red Teaming; Anti-ableist RLHF. | BharatGen; NIST RMF; AISI. |
| Interface | Inaccessible CAPTCHAs; Lack of ISL; Voice-only or Text-only silos. | Universal Design; Multi-modal access (ISL, text, voice, switch). | Bhashini (ISL Mission); CAN-ASC-6.2. |
| Governance | Self-regulation; Lack of audits; Technoableism. | Disability Impact Assessments (DIA); Third-party Audits; Procurement mandates. | GeM; CCPD; CAG Audits. |

References & Citation Key

  • Legal: Rajive Raturi v. Union of India (2024) 14; RPWD Act 2016 27; IS 17802.28
  • Policy: IndiaAI Mission 1; NITI Aayog AI Strategy 7; EU AI Act 33; CAN-ASC-6.2.25
  • Theory: Technoableism (Ashley Shew) 23; Social vs. Medical Model 18; Algorithmic Harms.46
  • Technical: Red Teaming 45; Bias in datasets 8; Bhashini 36; Outlier Advantage.37
  • Governance: GeM Procurement 51; DEPA & Data Trusts.41

Works cited

  1. Cabinet Approves Over Rs 10300 Crore for IndiaAI Mission, will Empower AI Startups and Expand Compute Infrastructure Access - PIB, accessed on February 14, 2026, https://www.pib.gov.in/PressReleasePage.aspx?PRID=2012375
  2. Transforming India with AI - PIB, accessed on February 14, 2026, https://www.pib.gov.in/PressReleasePage.aspx?PRID=2178092
  3. Transforming India with AI: Rs 10,300 crore mission, 38,000 GPUs & a vision for inclusive growth | DD News, accessed on February 14, 2026, https://ddnews.gov.in/en/transforming-india-with-ai-rs-10300-crore-mission-38000-gpus-a-vision-for-inclusive-growth/
  4. parliament question: role of bharatgen ai - Press Release: Press Information Bureau, accessed on February 14, 2026, https://www.pib.gov.in/PressReleseDetailm.aspx?PRID=2223738&reg=3&lang=1
  5. BharatGen: India's First Sovereign AI Initiative, accessed on February 14, 2026, https://bharatgen.com/
  6. Union budget 2024-25 allocates over 550 crores to the IndiaAI mission, accessed on February 14, 2026, https://indiaai.gov.in/article/union-budget-2024-25-allocates-over-550-crores-to-the-indiaai-mission
  7. India AI Governance Guidelines - AWS, accessed on February 14, 2026, https://indiaai.s3.ap-south-1.amazonaws.com/docs/guidelines-governance.pdf
  8. Digital accessibility in the era of artificial intelligence—Bibliometric analysis and systematic review - Frontiers, accessed on February 14, 2026, https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1349668/full
  9. Auditing AI: What is it and why does it matter for India?, accessed on February 14, 2026, https://www.orfonline.org/expert-speak/auditing-ai-what-is-it-and-why-does-it-matter-for-india
  10. Balancing convenience and data privacy in the Digi Yatra app, accessed on February 14, 2026, https://papers.ssrn.com/sol3/Delivery.cfm/5150113.pdf?abstractid=5150113&mirid=1
  11. ABLEist: Intersectional Disability Bias in LLM-Generated Hiring Scenarios - arXiv, accessed on February 14, 2026, https://arxiv.org/html/2510.10998v1
  12. Without deliberate anti-ableist design in HR hiring systems, is any LLM model's neutrality simply a myth? - Gareth Ford Williams, accessed on February 14, 2026, https://garethfordwilliams.medium.com/without-deliberate-anti-ableist-design-in-hr-hiring-systems-is-any-llm-models-neutrality-simply-d7cc134e8238
  13. The Intersection of Technology, Disability Rights and Worker Rights, accessed on February 14, 2026, https://www.nationaldisabilityinstitute.org/wp-content/uploads/2025/01/intersectionoftechnologydisabilityandworkerrights2024report.pdf
  14. Case Report: Rajive Raturi v. Union of India (2024) [LiveLaw (SC) 875], accessed on February 14, 2026, https://kshetryandassociates.com/case-report-rajive-raturi-v-union-of-india-2024-livelaw-sc-875/
  15. IN THE SUPREME COURT OF INDIA CIVIL ORIGINAL JURISDICTION Writ Petition (C) No. 243 of 2005 Rajive Raturi …Petitioner Vers, accessed on February 14, 2026, https://api.sci.gov.in/supremecourt/2005/9321/9321_2005_1_1503_56986_Judgement_08-Nov-2024.pdf
  16. Important Judgements for the Persons with disabilities | NIEPVD Dehradun | India, accessed on February 14, 2026, https://niepvd.nic.in/important-judgements-for-the-persons-with-disabilities/
  17. Disability-First AI Dataset Annotation: Co-designing Stuttered Speech Annotation Guidelines with People Who Stutter - arXiv, accessed on February 14, 2026, https://arxiv.org/html/2602.10403v1
  18. Medical and Social Models of Disability | Office of Developmental Primary Care, accessed on February 14, 2026, https://odpc.ucsf.edu/clinical/patient-centered-care/medical-and-social-models-of-disability
  19. Identifying Disability Insensitive Language in Scholarly Works using Machine Learning - IslandScholar, accessed on February 14, 2026, https://islandscholar.ca/sites/default/files/2025-10/robyroshna_honours_thesis_2025.pdf
  20. Full article: Disabling AI: power, exclusion, and disability - Taylor & Francis, accessed on February 14, 2026, https://www.tandfonline.com/doi/full/10.1080/01425692.2025.2519482
  21. Technology and Disability: Trends and Opportunities in the Digital Economy in ASEAN, accessed on February 14, 2026, https://www.eria.org/uploads/Technology-and-Disability-Trends-and-Opportunities-in-the-Digital-Economy-in-ASEAN.pdf
  22. Social Model vs Medical Model of disability - disabilitynottinghamshire.org.uk, accessed on February 14, 2026, https://www.disabilitynottinghamshire.org.uk/index.php/about/social-model-vs-medical-model-of-disability/
  23. Ashley Shew - Against Technoableist AI - YouTube, accessed on February 14, 2026, https://www.youtube.com/watch?v=j7JcRwNWETM
  24. Against Technoableism | Rethinking Who Needs Improvement | College of Liberal Arts and Human Sciences | Virginia Tech, accessed on February 14, 2026, https://liberalarts.vt.edu/news/bookshelf/science-technology-and-society-bookshelf/2023/liberalarts-against-technoableism.html
  25. Summary of CAN-ASC-6.2:2025 – Accessible and Equitable Artificial Intelligence Systems, accessed on February 14, 2026, https://accessible.canada.ca/creating-accessibility-standards/overview-asc-62-accessible-equitable-artificial-intelligence-systems
  26. Finding Sizes For All - Report On The Status of The Right To Accessibility in India - Scribd, accessed on February 14, 2026, https://www.scribd.com/document/749742948/Finding-Sizes-for-All-Report-on-the-Status-of-the-Right-to-Accessibility-in-India
  27. Case Laws that are Shaping Digital Accessibility in India - BarrierBreak, accessed on February 14, 2026, https://www.barrierbreak.com/case-laws-that-are-shaping-digital-accessibility-in-india/
  28. India's Digital Accessibility Laws and Overview • DigitalA11Y, accessed on February 14, 2026, https://www.digitala11y.com/indias-digital-accessibility-laws-and-overview/
  29. IS 17802 (Part 2) : 2022 - Broadband India Forum, accessed on February 14, 2026, https://broadbandindiaforum.in/wp-content/uploads/2022/08/IS-17802_2_2022.pdf
  30. RPWD Act and IS 17802: India's Digital Accessibility Standards (2025 Guide), accessed on February 14, 2026, https://www.pivotalaccessibility.com/2025/06/rpwd-act-and-is-17802-indias-digital-accessibility-standards-2025-guide/
  31. CAN-ASC-6.2:2025- Accessible and Equitable Artificial Intelligence ..., accessed on February 14, 2026, https://accessible.canada.ca/creating-accessibility-standards/asc-62-accessible-equitable-artificial-intelligence-systems
  32. How to Implement CAN-ASC-6.2:2025 Accessibility Requirements for AI Systems?, accessed on February 14, 2026, https://www.barrierbreak.com/how-to-implement-can-asc-6-22025-accessibility-requirements-for-ai-systems/
  33. A disability-inclusive Artificial Intelligence Act: : a guide to monitor ..., accessed on February 14, 2026, https://www.edf-feph.org/content/uploads/2024/10/AI-Act-implementation-toolkit-Final.pdf
  34. EU AI Act - Updates, Compliance, Training, accessed on February 14, 2026, https://www.artificial-intelligence-act.com/
  35. (PDF) Artificial Intelligence for Accessibility: A Comprehensive Systematic Review and Impact Framework for Assistive Technologies - ResearchGate, accessed on February 14, 2026, https://www.researchgate.net/publication/396241449_Artificial_Intelligence_for_Accessibility_A_Comprehensive_Systematic_Review_and_Impact_Framework_for_Assistive_Technologies
  36. Bhashini AI - Making Languages More Accessible with Digital Technology - Unicef, accessed on February 14, 2026, https://www.unicef.org/digitalimpact/bhashini-ai-making-languages-more-accessible-digital-technology
  37. AI Data-Driven Personalisation and Disability Inclusion - ResearchGate, accessed on February 14, 2026, https://www.researchgate.net/publication/348569682_AI_Data-Driven_Personalisation_and_Disability_Inclusion
  38. AI Fairness for People with Disabilities: Point of View - arXiv, accessed on February 14, 2026, https://arxiv.org/pdf/1811.10670
  39. 2024 Summer Research Grant Awardees | Villanova University, accessed on February 14, 2026, https://www.villanova.edu/villanova/provost/research/institute-research-scholarship/find_support_need/internal_funding/summer-grant/2024-Recipients.html
  40. (PDF) Tamavaq™: A Hybrid Quantum–Classical Grover Pipeline for Precision Neoantigen Vaccination in Glioma - ResearchGate, accessed on February 14, 2026, https://www.researchgate.net/publication/397449493_Tamavaq_A_Hybrid_Quantum-Classical_Grover_Pipeline_for_Precision_Neoantigen_Vaccination_in_Glioma
  41. AI Impact Summit 2026: AI Governance at the Edge of Democratic Backsliding, accessed on February 14, 2026, https://www.csohate.org/2026/02/11/ai-impact-summit-2026/
  42. Rebooting consent in the digital age: a governance framework for health data exchange, accessed on February 14, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC8728384/
  43. The design of a data governance system - SUERF - The European Money and Finance Forum, accessed on February 14, 2026, https://www.suerf.org/publications/suerf-policy-notes-and-briefs/the-design-of-a-data-governance-system/
  44. What Is a Data Trust? - Centre for International Governance Innovation, accessed on February 14, 2026, https://www.cigionline.org/articles/what-data-trust/
  45. Red teaming ChatGPT in medicine to yield real-world insights on model behavior - PMC, accessed on February 14, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC11889229/
  46. Toward a Taxonomy of Algorithmic Harms for ... - AAAI Publications, accessed on February 14, 2026, https://ojs.aaai.org/index.php/AIES/article/download/36745/38883/40820
  47. Guide to Red Teaming Methodology on AI Safety (Version 1.10), accessed on February 14, 2026, https://aisi.go.jp/assets/pdf/E1_ai_safety_RT_v1.10_en.pdf
  48. Supporting NIST's Development of Guidelines on Red- teaming for Generative AI - Carnegie Mellon University, accessed on February 14, 2026, https://www.cmu.edu/sites/default/files/cmu-block-center-site-files/2025-07/supporting-nists-development-of-guidelines-on-red-teaming-for-generative-ai-2024.pdf
  49. NIST releases its Generative Artificial Intelligence Profile: Key points | DLA Piper, accessed on February 14, 2026, https://www.dlapiper.com/en/insights/publications/ai-outlook/2024/nist-releases-its-generative-artificial-intelligence-profile
  50. Bhashini Logo, accessed on February 14, 2026, https://bhashini.gov.in/
  51. Harnessing AI and digital public infrastructure (DPI) for Viksit Bharat | EY, accessed on February 14, 2026, https://www.ey.com/content/dam/ey-unified-site/ey-com/en-in/insights/ai/documents/ey-harnessing-ai-and-digital-public-infrastructure-for-viksit-bharat.pdf
  52. The Central Government to leverage AI in GeM procurement: Union Minister Piyush Goyal, accessed on February 14, 2026, https://indiaai.gov.in/article/the-central-government-to-leverage-ai-in-gem-procurement-union-minister-piyush-goyal
  53. Digital accessibility in the era of artificial intelligence—Bibliometric analysis and systematic review - PMC, accessed on February 14, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC10905618/
  54. Impact Assessments: - Supporting AI Accountability & Trust - Workday Blog, accessed on February 14, 2026, https://blog.workday.com/content/dam/web/en-us/documents/legal/access-partnership-workday-impact-assessment-paper.pdf
  55. Adoption of Digital Identity in Airline Transit: A Global Overview | Kairos Blog, accessed on February 14, 2026, https://www.kairos.com/post/adoption-of-digital-identity-in-airline-transit-a-global-overview
  56. Digi yatra policy doc - Ministry of Civil Aviation, accessed on February 14, 2026, https://www.civilaviation.gov.in/sites/default/files/migration/Digi%20yatra%20policy%20doc.pdf
  57. GOVERNING AI IN WELFARE DELIVERY - Efficiency, Exclusion, and Constitutional Accountability PARNEET KAUR - SSRN, accessed on February 14, 2026, https://papers.ssrn.com/sol3/Delivery.cfm/6080208.pdf?abstractid=6080208&mirid=1
  58. Why Governments Need Unified Social Registry for Beneficiary Targeting - CSM Technologies, accessed on February 14, 2026, https://www.csm.tech/blog-details/blog_pdf/why-governments-need-unified-social-registry-for-beneficiary-targeting
  59. Supreme Court Mandates Barrier-Free Public Spaces. A Landmark Judgment Ensuring Equal Access to Public Spaces for Persons with Disabilities (PWDs) - Lawtext, accessed on February 14, 2026, https://lawtext.in/judgement.php?bid=1158
  60. Recital 80 | EU Artificial Intelligence Act, accessed on February 14, 2026, https://artificialintelligenceact.eu/recital/80/
  61. Social and medical models of disability and mental health: evolution and renewal - PMC, accessed on February 14, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC6312522/
  62. LLM Red Teaming: The Complete Step-By-Step Guide To LLM Safety - Confident AI, accessed on February 14, 2026, https://www.confident-ai.com/blog/red-teaming-llms-a-step-by-step-guide
  63. Samudaye - Bhashini, accessed on February 14, 2026, https://bhashini.gov.in/samudaye/anusandhan-mitra/6

Saturday, 31 January 2026

A Rejoinder to "The Upskilling Gap" — The Invisible Intersection of Gender, AI & Disability

To:

Ms. Shravani Prakash, Ms. Tanu M. Goyal, and Ms. Chellsea Lauhka
c/o The Hindu, Chennai / Delhi, India

Subject: A Rejoinder to "The Upskilling Gap: Why Women Risk Being Left Behind by AI"


Dear Authors,

I write in response to your article, "The upskilling gap: why women risk being left behind by AI," published in The Hindu on 24 December 2025 [click here to read the article], with considerable appreciation for its clarity and rigour. Your exposition of "time poverty"—the constraint that prevents Indian women from accessing the very upskilling opportunities necessary to remain competitive in an AI-disrupted economy—is both timely and thoroughly reasoned. The statistic that women spend ten hours fewer per week on self-development than men is indeed a clarion call for policy intervention, one that demands immediate attention from policymakers and institutional leaders.

Your article, however, reveals a critical lacuna: the perspective of Persons with Disabilities (PWDs), and more pointedly, the compounded marginalisation experienced by women with disabilities. While your arguments hold considerable force for women in general, they apply with even greater severity to disabled women navigating this landscape. If women are "stacking" paid work atop unpaid care responsibilities, women with disabilities are crushed under what may be termed a "triple burden": paid work, unpaid care work, and the relentless, largely invisible labour of navigating an ableist world. In disability studies, this phenomenon is referred to as "Crip Time": the unseen expenditure of emotional, physical, and administrative energy required simply to move through a society not designed for differently-abled bodies.

1. The "Time Tax" and Crip Time: A Compounded Deficit

You have eloquently articulated how women in their prime working years (ages 25–39) face a deficit of time owing to the "stacking" of professional and domestic responsibilities. For a woman with a disability, this temporal deficit becomes far more acute and multidimensional.

Consider the following invisible labour burdens:

Administrative and Bureaucratic Labour. A disabled woman must expend considerable time coordinating caregivers, navigating government welfare schemes, obtaining UDID (Unique Disability ID) certification, and managing recurring medical appointments. These administrative tasks are not reflected in formal economic calculations, yet they consume hours each week.

Navigation Labour. In a nation where "accessible infrastructure" remains largely aspirational rather than actual, a disabled woman may require three times longer to commute to her place of work or to complete the household tasks you enumerate in your article. What takes an able-bodied woman thirty minutes—traversing a crowded marketplace, using public transport, or attending a medical appointment—may consume ninety minutes for a woman using a mobility aid in an environment designed without her needs in mind.

Emotional Labour. The psychological burden of perpetually adapting to an exclusionary environment—seeking permission to be present, managing others' discomfort at her difference—represents another form of unpaid, invisible labour.

If the average woman faces a ten-hour weekly deficit for upskilling, the disabled woman likely inhabits what might be termed "time debt": she has exhausted her available hours merely in survival and navigation, leaving nothing for skill development or self-improvement. She is not merely "time poor"; she exists in a state of temporal deficit.

2. The Trap of Technoableism: When Technology Becomes the Problem

Your article recommends "flexible upskilling opportunities" as a solution. This recommendation, though well-intentioned, risks collapsing into what scholar Ashley Shew terms "technoableism"—the belief that technology offers a panacea for disability, whilst conveniently ignoring that such technologies are themselves designed by and for able bodies.

The Inaccessibility of "Flexible" Learning. Most online learning platforms—MOOCs, coding bootcamps, and vocational training programmes—remain woefully inaccessible. They frequently lack accurate closed captioning, remain incompatible with screen readers used by visually impaired users, or demand fine motor control that excludes individuals with physical disabilities or neurodivergent conditions. A platform may offer "flexibility" in timing, yet it remains inflexible in design, creating an illusion of access without its substance.

The Burden of Adaptation Falls on the Disabled Person. Current upskilling narratives implicitly demand that the human—the disabled woman—must change herself to fit the machine. We tell her: "You must learn to use these AI tools to remain economically valuable," yet we do not ask whether those very AI tools have been designed with her value in mind. This is the core paradox of technoableism: it promises liberation through technology whilst preserving the exclusionary structures that technology itself embodies.

3. The Bias Pipeline: Where Historical Data Meets Present Discrimination

Your observation that "AI-driven performance metrics risk penalising caregivers whose time constraints remain invisible to algorithms" is acute but insufficiently explored. Let us examine this with greater precision.

The Hiring Algorithm and the "Employment Gap." Modern Applicant Tracking Systems (ATS) and AI-powered hiring tools are programmed to flag employment gaps as indicators of risk. Consider how these gaps are interpreted differently:

  • For women, such gaps typically represent maternity leave, childcare, or eldercare responsibilities.

  • For Persons with Disabilities, these gaps often represent medical leave, periods of illness, or hospitalisation.

  • For women with disabilities, the algorithmic penalty is compounded: a resume containing gaps longer than six months is frequently filtered out automatically before any human reviewer examines it, eliminating qualified disabled women from consideration entirely.
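An illustrative reconstruction, not any vendor's actual code, of the kind of naive gap rule just described; the point is that maternity leave, medical leave and hospitalisation all look identical to it.

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    return (end.year - start.year) * 12 + (end.month - start.month)

def passes_gap_screen(employment_periods, max_gap_months=6) -> bool:
    """Silently reject any candidate whose CV shows a gap longer than max_gap_months."""
    periods = sorted(employment_periods)            # list of (start_date, end_date) tuples
    for (_, previous_end), (next_start, _) in zip(periods, periods[1:]):
        if months_between(previous_end, next_start) > max_gap_months:
            return False
    return True
```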

Research audits have documented this discrimination. In one verified case, a hiring algorithm disproportionately flagged minority candidates as needing human review because those candidates tended to give shorter responses during video interviews, which the algorithm interpreted as "low engagement".

Video Interviewing Software and Facial Analysis. Until it discontinued the feature in January 2021, the video interviewing platform HireVue employed facial analysis to assess candidates' suitability, evaluating eye contact, facial expressions, and speech patterns as proxies for "employability" and honesty. This system exemplified technoableism in its purest form:

  • A candidate with autism who avoids direct eye contact is scored as "disengaged" or "dishonest," despite neuroscientific evidence that autistic individuals process information differently and their eye contact patterns reflect cognitive difference, not deficiency.

  • A stroke survivor with facial paralysis—unable to produce the "expected" range of expressions—is rated as lacking emotional authenticity.

  • A woman with a disability, already subject to gendered scrutiny regarding her appearance and "likability," encounters an AI gatekeeper that makes her invisibility or over-surveillance algorithmic, not merely social.

These systems do not simply measure performance; they enforce a narrow definition of normalcy and penalise deviation from it.

4. Verified Examples: The "Double Glitch" in Action

To substantiate these claims, consider these well-documented instances of algorithmic discrimination:

Speech Recognition and Dysarthria. Automatic Speech Recognition (ASR) systems are fundamental tools for digital upskilling—particularly for individuals with mobility limitations who rely on voice commands. Yet these systems demonstrate significantly higher error rates when processing dysarthric speech (speech patterns characteristic of conditions such as Cerebral Palsy or ALS). Recent research quantifies this disparity:

  • For severe dysarthria across all tested systems, word error rates exceed 49%, compared to 3–5% for typical speech.

  • Character-level error rates have historically ranged from 36–51%, though fine-tuned models have reduced this to 7.3%.
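For readers unfamiliar with the metric, word error rate is the edit distance between the reference transcript and the system output, normalised by the length of the reference; a minimal implementation is sketched below.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + cost)    # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# wer("transfer five hundred rupees", "transfer hive hundred peas")  # -> 0.5
```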

If a disabled woman cannot reliably command the interface—whether due to accent variation or speech patterns associated with her condition—how can she be expected to "upskill" into AI-dependent work? The platform itself becomes a barrier.

Facial Recognition and the Intersection of Race and Gender. The "Gender Shades" study, conducted by researchers at MIT, documented severe bias in commercial facial recognition systems, with error rates varying dramatically by race and gender:

  • Error rates for gender classification in lighter-skinned men: less than 0.8%

  • Error rates for gender classification in darker-skinned women: 20.8% to 34.7%

Amazon Rekognition similarly misclassified the gender of darker-skinned women 31 percent of the time. For a disabled woman of colour seeking employment or accessing digital services, facial recognition systems compound her marginalisation: she is either rendered invisible (failed detection) or hyper-surveilled (flagged as suspicious).

The Absence of Disability-Disaggregated Data. Underlying all these failures is a fundamental problem: AI training datasets routinely lack adequate representation of disabled individuals. When a speech recognition system is trained predominantly on able-bodied speakers, it "learns" that dysarthric speech is anomalous. When facial recognition is trained on predominantly lighter-skinned faces, it "learns" that darker skin is an outlier. Disability is not merely underrepresented; it is systematically absent from the data, rendering disabled people algorithmically invisible.

5. Toward Inclusive Policy: Dismantling the Bias Pipeline

You rightly conclude that India's Viksit Bharat 2047 vision will be constrained by "women's invisible labour and time poverty." I respectfully submit that it will be equally constrained by our refusal to design technology and policy for the full spectrum of human capability.

True empowerment cannot mean simply "adding jobs," as your article notes. Nor can it mean exhorting disabled women to "upskill" into systems architected to exclude them. Rather, it requires three concrete interventions:

First, Inclusive Data Collection. Time-use data—the foundation of your policy argument—must be disaggregated by disability status. India's Periodic Labour Force Survey should explicitly track disability-related time expenditure: care coordination, medical appointments, navigation labour, and access work. Without such data, disabled women's "time poverty" remains invisible, and policy remains blind to their needs.

Second, Accessibility by Design, Not Retrofit. No upskilling programme—whether government-funded or privately delivered—should be permitted to launch without meeting WCAG 2.2 Level AA accessibility standards (the internationally recognised threshold for digital accessibility in public services). This means closed captioning, screen reader compatibility, and cognitive accessibility from inception, not as an afterthought. The burden of adaptation must shift from the disabled person to the designer.

Third, Mandatory Algorithmic Audits for Intersectional Bias. Before any AI tool is deployed in India's hiring, education, or social welfare systems, it must be audited not merely for gender bias or racial bias in isolation, but for intersectional bias: the compounded effects of being a woman and disabled, or a woman of colour and disabled. Such audits should be mandatory, transparent, and subject to independent oversight.
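As a minimal illustration of what "intersectional" means operationally, the sketch below disaggregates selection rates across the cross-product of gender and disability status rather than each axis alone, so that a gap specific to disabled women cannot be averaged away; the record fields are assumptions for demonstration.

```python
from collections import defaultdict

def intersectional_selection_rates(records):
    """records: [{"gender": ..., "disabled": bool, "selected": bool}, ...] -> rate per subgroup."""
    counts = defaultdict(lambda: [0, 0])                 # subgroup -> [selected, total]
    for record in records:
        key = (record["gender"], "disabled" if record["disabled"] else "non-disabled")
        counts[key][1] += 1
        counts[key][0] += int(record["selected"])
    return {group: selected / total for group, (selected, total) in counts.items() if total}
```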

Conclusion: A Truly Viksit Bharat

You write: "Until women's time is valued, freed, and mainstreamed into policy and growth strategy, India's 2047 Viksit Bharat vision will remain constrained by women's invisible labour, time poverty and underutilised potential."

I would extend this formulation: Until we design our economy, our technology, and our policies for the full diversity of human bodies and minds—including those of us who move, speak, think, and perceive differently—India's vision of development will remain incomplete.

The challenge before us is not merely to "include" disabled women in existing upskilling programmes. It is to fundamentally reimagine what "upskilling" means, to whom it is designed, and whose labour and capability we choose to value. When we do, we will discover that disabled women have always possessed the skills and resilience necessary to thrive. Our task is simply to remove the barriers we have constructed.

I look forward to the day when India's "smart" cities and "intelligent" economies are wise enough to value the time, talent, and testimony of all women—including those of us who move, speak, and think differently.

Yours faithfully,

Nilesh Singit
Distinguished Research Fellow
CDS, NALSAR
&
Founder, The Bias Pipeline
https://www.nileshsingit.org/

Monday, 5 January 2026

How Algorithmic Bias Shapes Accessibility: What Disabled People Ought to Know

Abstract
In the expanding realm of artificial intelligence (AI), systems that appear neutral may in fact reproduce and amplify bias, with significant consequences for persons with disabilities. This article examines how algorithmic bias interacts with accessibility: for example, by misrecognising disabled bodies or communication styles, excluding assistive technology users, or embedding inaccessible design decisions in automated tools. Using the UNCRPD’s rights-based framework and the EU AI Act’s regulatory model, the piece advances a critical perspective on how disabled people in India and elsewhere must remain vigilant to algorithmic harms and insist on inclusive oversight. Within India’s evolving digital-governance and disability-rights context, accessible AI systems are not optional but a matter of legal and ethical obligation. The essay concludes by offering practical recommendations — including disability-inclusive data practices, human rights impact assessments, transparency and participation of disabled persons’ organisations — for policymakers, designers and civil-society actors. The objective is to ensure that AI becomes a facilitator of accessibility rather than a barrier.
Introduction
Artificial intelligence systems are increasingly embedded in everyday services: from recruitment platforms and credit-scoring tools, to facial-recognition, speech-to-text and navigation aids. At first glance, these systems promise increased efficiency and even accessibility gains. Yet beneath the veneer of “smart automation” lies a persistent problem: algorithmic bias. Such bias refers to systematic and repeatable errors in an AI system that create unfair outcomes for individuals or groups — often those already marginalised. In the context of disability, algorithmic bias can shape accessibility in profound ways: by excluding persons with certain disabilities, misreading assistive communication modes, or embedding stereotyped assumptions into system design.
For disabled persons, accessibility is not a mere convenience; it is a right. Under the UNCRPD, States Parties "shall ensure that persons with disabilities can access information and communications technologies … on an equal basis with others". ([OHCHR]) Consequently, when AI systems fail to respect accessibility, the failure is both practical and rights-based. Meanwhile, the EU AI Act introduces a regulatory architecture attuned to algorithmic risk and non-discrimination, explicitly covering disability and accessibility considerations. ([Artificial Intelligence Act]) This article explores how algorithmic bias shapes accessibility, draws upon rights and regulation frameworks, and reflects on what disabled people (and those engaged in disability rights) ought to know — with special reference to the Indian context.
Understanding Algorithmic Bias and Accessibility
What is algorithmic bias?
Algorithmic bias occurs when an AI system, whether through its data, its model, its deployment context, or its user interface, produces outcomes that are systematically less favourable for certain groups. These groups — by virtue of protected characteristics such as disability — may face unfair exclusion or adverse treatment. In the European context, the European Union Agency for Fundamental Rights (FRA) has noted that “speech-algorithms include strong bias against people … disability …”. ([FRA]) Bias may arise at different stages: data collection (under-representation of disabled persons), model training (failure to include assistive-technology use cases), deployment (system inaccessible for screen-reader users), or continuous feedback (lack of monitoring for disabled-user outcomes). Importantly, bias is not always obvious: it may manifest as “fair on average” but unfair for particular groups.
The accessibility dimension
Accessibility means that persons with disabilities can access, use and benefit from goods, services, environments and information "on an equal basis with others". Under the UNCRPD (Article 9), States are obliged to take appropriate measures to ensure accessibility of information and communications technologies. ([United Nations Documentation]) AI systems that serve as mediators of access — for example, voice-interfaces, image-recognition apps, assistive navigation systems — must therefore be designed to respect accessibility. Yet when algorithmic bias creeps in, accessibility is undermined. Consider a recruitment AI that misinterprets alternative communication modes used by a candidate with cerebral palsy, or a smartphone-app navigation tool that fails to account for a wheelchair user's needs due to biased training data. The result is exclusion or disadvantage, even though the system is marketed as inclusive.
Intersection of bias and accessibility
Disabled persons may face compounded disadvantage: algorithmic bias interacting with inaccessible design means that even when an AI system is technically available, the outcome may not be equitable. For example, an AI-driven health-screening tool may be calibrated on data from non-disabled populations, thereby misdiagnosing persons with disabilities or failing to accommodate their patterns of presentation and communication. The OECD notes that AI systems may discriminate against individuals with "facial differences, gestures … speech impairment" and other disability characteristics. ([OECD AI Policy Observatory]) Thus, accessibility is not simply about enabling "access", but ensuring that access is meaningful and equitable in the face of algorithmic design.
Regulatory and Rights Frameworks
UNCRPD’s relevance
The UNCRPD is the foundational human-rights instrument relating to disability. As referenced, Article 9 requires accessibility of ICTs; Article 5 prohibits discrimination based on disability; and Article 4 calls on States Parties to adopt appropriate measures, including international cooperation. ([United Nations Documentation]) The UN Special Rapporteur on the rights of persons with disabilities has drawn attention to how AI systems can change the relationship between the State (or private actors) and persons with disabilities — especially where automated decision-making is used in recruitment, social protection or other services. ([UN Regional Information Centre]) States, therefore, have both a regulatory and oversight obligation to prevent algorithmic discrimination and to ensure that AI supports — not undermines — the rights of persons with disabilities.
The EU AI Act and disability
The EU AI Act (entered into force August 2024) provides a risk-based regulatory approach for AI systems. Among its features: the prohibition of certain "unacceptable risk" AI practices (Article 5), obligations for high-risk AI systems (data governance, transparency, human oversight), and notable references to disability and accessibility. For example, Article 5(1)(b) prohibits AI systems that exploit the vulnerabilities of a natural person "due to their … disability". ([Artificial Intelligence Act]) Further, Article 10(5) allows collection of sensitive data (including disability) to evaluate, monitor and mitigate bias. ([arXiv]) The EU thus offers a model of combining accessibility, non-discrimination and algorithmic oversight.
India’s context
In India, the rights of persons with disabilities are codified in the Rights of Persons with Disabilities Act, 2016 (RPwD Act). While the Act does not explicitly address AI or algorithmic bias, it gives effect to the UNCRPD framework (which India has ratified) and mandates non-discrimination, accessibility and equal opportunity. Practitioners and policymakers in India should therefore interpret emerging AI systems through the lens of the RPwD Act and UNCRPD obligations. Given India’s rapid digitisation (government e-services, AI in welfare systems, biometric identification), algorithmic bias and accessibility are highly material concerns for persons with disabilities in India.
How Algorithmic Bias Manifests in Accessibility Scenarios
Recruitment and employment
AI tools for hiring (resume screening, video interviews, psychometric testing) often involve patterns derived from historical data. If these datasets reflect the historic exclusion of persons with disabilities, the algorithm learns that “disclosure of disability” or “use of assistive technology” correlates with lower success or is anomalous. As one report notes: “since historical data might show fewer hires of candidates who requested workplace accommodations … the system may interpret this as a negative indicator”. ([warden-ai.com]) In India, where data bias and limited disability representation in formal employment persist, recruitment AI may further entrench disadvantage unless corrected.
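A hedged sketch of this failure mode follows. The toy data is fabricated and the feature name requested_accommodation is hypothetical, but the example shows how a standard classifier fitted to historically biased hiring labels can learn to penalise accommodation requests even though they carry no information about ability to do the job.

```python
# Sketch (illustrative assumptions only): historical hiring labels carry a
# built-in penalty against candidates who requested accommodations, and a
# model trained on those labels reproduces the penalty as a negative weight.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)                      # genuinely job-relevant signal
requested_accommodation = rng.random(n) < 0.1   # ~10% of candidates (hypothetical)

# Biased historical decisions: skill matters, but accommodation requests
# were penalised by past human decision-makers.
hired = (skill - 1.5 * requested_accommodation + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, requested_accommodation])
model = LogisticRegression().fit(X, hired)

print("coefficient on skill:                   %+.2f" % model.coef_[0][0])
print("coefficient on requested_accommodation: %+.2f" % model.coef_[0][1])
# The second coefficient comes out clearly negative: the model has learned
# the historical penalty, not anything about the candidates' competence.
```

Note that simply dropping the sensitive feature is not a complete fix: other variables can act as proxies for the same information, which is one reason the disaggregated audits recommended later in this piece matter.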
Assistive technology and communication tools
AI-powered assistive technologies offer major potential: speech-to-text, sign-language avatars, navigation aids, prosthetic-control systems. ([University College London]) Yet biases in training data or interface design can exclude users. For instance, gesture recognition may be trained on normative movements, failing to recognise users with atypical mobility; speech recognition may mis-transcribe persons with dysarthria or non-standard accents. Unless developers include diverse disability profiles in training and testing, the assistive tools themselves may become inaccessible or unreliable.
Public services and welfare administration
In welfare systems, AI may screen for eligibility, monitor benefits, or allocate resources. Persons with disabilities may be disadvantaged if systems assume normative behaviour or communication patterns. The UN Special Rapporteur warns that AI tools used by authorities may become gatekeepers in processes such as employment or social services. ([UN Regional Information Centre]) In India, where Aadhaar-linked digital services and automated verification proliferate, there is a risk that inaccessible interfaces or biased decision logic may deny or delay access for persons with disabilities.
Built-environment, navigation and smart-cities
The synergy of AI and the built environment (smart navigation, accessibility scanners) holds promise. But algorithmic bias may intervene: AI-derived route-planning may favour users who walk rather than those who use wheelchairs; computer vision may misclassify assistive devices; and voice-only interfaces may exclude users who communicate through sign language. Globally, a recent system called “Accessibility Scout” uses machine learning to identify accessibility concerns in built environments, but even such systems need disability-inclusive training data. ([arXiv]) In India’s rapidly urbanising spaces (metro stations, smart-city initiatives), disabled users risk being excluded if AI-based navigation or environment-scanning tools are biased.

Why Disabled People Ought to Know and Act
Legal and rights implications
Persons with disabilities ought to know that algorithmic bias is not merely a technical issue but a rights issue. Under the UNCRPD and the RPwD Act, they are entitled to equality of access, participation and non-discrimination. If an AI system denies them a job interview, misinterprets their assistive communication or makes a decision that excludes them, the outcome may contravene those rights. In Europe, the EU AI Act recognises that vulnerability due to disability is a ground for prohibiting certain AI practices (Article 5(1)(b)). ([Artificial Intelligence Act])
Practical implications of accessibility failure
When AI systems are biased, accessibility suffers in concrete ways: exclusion, invisibility, misidentification, denial of services, or reliance on assistive tools that do not work as intended. Examples include a recruitment AI that fails to recognise alternative speech patterns and a building-navigation AI that does not plan routes for wheelchair users. These are not hypothetical harms: the digital divide is already severe for persons with disabilities. ([TPGi — a Vispero company])
Participation and voice
Disabled people and their representative organisations must insist on participation in the design, development and governance of AI systems. The UN Special Rapporteur emphasises that persons with disabilities are rarely involved in developing AI, thereby increasing the risk of exclusion. ([UN Regional Information Centre]) Participation ensures that lived experience informs design, testing and deployment, thereby reducing bias and strengthening accessibility.
Awareness of “black-box” systems and recourse
Many AI systems operate as opaque “black boxes” with little transparency. Persons with disabilities ought to know their rights: for example, the right to an explanation (in some jurisdictions) and the right to challenge automated decisions. Whilst India’s specific jurisprudence in this regard is still emerging, the EU regime provides a model: for high-risk systems, transparency, human-in-the-loop oversight and documentation obligations apply. ([arXiv]) This awareness helps disabled persons, advocates and lawyers ask relevant questions of system owners and policymakers.
Recommendations for Policymakers, Designers and Disabled-Rights Advocates
Data practice and inclusive datasets
Ensure that training datasets include persons with disabilities, assistive-technology users, alternative communication modes and diverse disability profiles.
Conduct bias audits specifically for disability as a protected characteristic: for example, disaggregated outcome analysis of how persons with disabilities fare vis-à-vis non-disabled persons (a minimal audit sketch follows this list). ([FRA])
Where sensitive data (including disability status) is needed to assess bias, ensure privacy, consent and safeguards (cf. Article 10(5) of the EU AI Act). ([arXiv])
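The sketch below shows one minimal form such a disability-disaggregated audit can take: comparing selection rates across groups and flagging large gaps for review. The records, group labels and the 0.8 review threshold are illustrative assumptions (the threshold echoes the US “four-fifths” rule of thumb and is not a requirement of Indian law).

```python
# Illustrative audit sketch: compare outcomes by disability status instead
# of averaging over all users. Data and threshold are assumptions.

def selection_rate(outcomes):
    """Share of positive outcomes in a group."""
    return sum(outcomes) / len(outcomes)

# Each entry: (has_disability, selected_by_ai_system) -- fabricated records
audit_log = [
    (False, True), (False, True), (False, False), (False, True),
    (False, True), (False, False), (False, True), (False, True),
    (True, False), (True, True), (True, False), (True, False),
]

pwd = [sel for dis, sel in audit_log if dis]
others = [sel for dis, sel in audit_log if not dis]

rate_pwd, rate_others = selection_rate(pwd), selection_rate(others)
ratio = rate_pwd / rate_others if rate_others else float("nan")

print(f"Selection rate (persons with disabilities): {rate_pwd:.2f}")
print(f"Selection rate (others):                    {rate_others:.2f}")
print(f"Disparate-impact ratio:                     {ratio:.2f}")
if ratio < 0.8:  # illustrative review threshold, not a legal standard
    print("Flag for review: the gap warrants investigation and mitigation.")
```

In practice the same disaggregation should be repeated for error rates, false rejections and processing times, since a system can equalise selection rates while still failing disabled users in other ways.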
Accessibility-by-design and human-centred testing
Design AI systems with accessibility from the outset: include persons with disabilities in usability testing, interface design and deployment scenarios.
For assistive-technology applications, ensure testing across mobility, sensory, cognitive and communication impairment types.
Ensure that the human-machine interface does not assume normative speech, movement or interaction styles.
Transparency, accountability and redress
Developers should publish documentation of system design, a training-data summary, performance across disability groups and the mitigations applied to reduce bias (a sketch of such a summary follows this list).
Deployers of high-risk AI systems should integrate human-in-the-loop oversight and allow for meaningful human review of adverse outcomes.
Disability rights organisations should demand audit reports, accessible documentation and pathways for complaint or redress.
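To make the documentation recommendation tangible, here is a hedged sketch of a machine-readable summary that a developer or deployer might publish alongside accessible documentation. Every field name, figure and contact address is a hypothetical placeholder, not a format mandated by the RPwD Act, the EU AI Act or any standard.

```python
# Sketch of a disability-aware transparency summary ("model card" style).
# All values are illustrative placeholders, not real measurements.
import json

model_card = {
    "system": "resume-screening-model (hypothetical)",
    "intended_use": "shortlisting candidates for human review, not final decisions",
    "training_data_summary": "2019-2024 application records; disability status self-reported and optional",
    "performance_by_group": {
        "overall": {"true_positive_rate": 0.81, "false_positive_rate": 0.09},
        "persons_with_disabilities": {"true_positive_rate": 0.66, "false_positive_rate": 0.08},
        "assistive_technology_users": {"true_positive_rate": 0.70, "false_positive_rate": 0.10},
    },
    "known_limitations": ["speech input degraded for dysarthric speech"],
    "bias_mitigations": ["reweighting of under-represented groups", "quarterly disaggregated audits"],
    "human_oversight": "adverse outcomes reviewed by a trained officer before being communicated",
    "redress_contact": "accessibility-grievance@example.org",
}

# Publishing the summary as JSON keeps it both screen-reader friendly (when
# rendered) and auditable by regulators and disability rights organisations.
print(json.dumps(model_card, indent=2))
```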
Regulation and policy implementation
In India, policymakers should align AI regulation with disability rights frameworks (RPwD Act, UNCRPD) and mandate accessibility audits for AI systems deployed in public services.
Regulatory bodies (such as data-protection authorities or disability rights commissions) must include algorithmic bias and accessibility in their oversight remit.
As in the EU model, a risk-based classification of AI systems (unacceptable, high, limited, minimal risk) may help India and other jurisdictions frame governance. ([OECD AI Policy Observatory])
Capacity building and awareness
Disabled persons’ organisations (DPOs) in India and elsewhere should develop technical literacy about AI, algorithmic bias and accessibility implications.
Training modules should be developed for developers, designers and policymakers on disability-inclusive AI.
Collaborative platforms between academia, industry, government and DPOs are needed to research disabled-user-specific AI bias in Indian contexts.
Specific Relevance to India
India’s digital ecosystem is expanding rapidly: e-governance portals, biometric identification (e.g., Aadhaar), AI-driven services for health, education and welfare, and smart-city initiatives. Given this expansion, algorithmic bias poses a heightened risk for persons with disabilities in India.
Firstly, data gaps in India regarding disability are well documented: persons with disabilities are under-represented in formal employment, excluded from many surveys, and often not visible in “mainstream” datasets. Thus, AI systems trained on such data may systematically overlook or misclassify persons with disabilities.
Secondly, accessibility in India remains a significant challenge: although the RPwD Act mandates accessibility of ICT and built environments, the practice is uneven. When AI systems become mediators of access (for example, e-service portals, automated benefit systems, recruitment platforms), any bias in design or data may compound the existing social exclusion of persons with disabilities.
Thirdly, inclusive AI policy in India remains nascent. Unlike the EU with its AI Act, India does not yet have a comprehensive AI-regulation scheme that explicitly addresses disability bias or classifies accessibility-critical AI as high risk. Advocates and policymakers in India should therefore press for regulatory clarity, accessibility audits and inclusive design in all AI deployments, especially where the State is involved.
Finally, the Indian disability-rights movement must engage actively with AI governance: ensuring that persons with disabilities have a voice in design, procurement, deployment and oversight of AI systems in India. Without such engagement, AI may become a new vector of exclusion rather than a facilitator of independence and participation.
Conclusion
The promise of artificial intelligence to enhance accessibility for persons with disabilities is real: from speech recognition and navigation aids to employment matching and inclusive education. Yet, without careful attention to algorithmic bias, accessibility may remain aspirational rather than realised. Algorithmic bias shapes accessibility when AI systems misrecognise, exclude, misclassify or disadvantage persons with disabilities, and this effect is a human-rights concern under the UNCRPD, the RPwD Act and emerging regulatory frameworks such as the EU AI Act.
Disabled persons and their organisations ought to understand that algorithmic bias is not abstract but concrete in accessibility terms. They need to engage, insist on inclusive data, demand transparency, participate in system design and seek accountability. Policymakers and AI developers must embed accessibility by design, integrate disability-inclusive datasets, monitor outcomes by disability status, and adopt governance mechanisms that guard against unfair exclusion.
In India, where digital transformation is swift and disability inclusion remains a critical challenge, the stakes are high. AI systems will increasingly mediate how persons with disabilities access jobs, services, information and public spaces. Without proactive safeguards, algorithmic bias may reinforce existing barriers. But with rights-based regulation, inclusive design and meaningful participation of persons with disabilities, AI can become a powerful tool for accessibility rather than an additional barrier.
In short, accessibility and algorithmic fairness must move together. AI may be powerful, but it is human judgment, oversight and a commitment to inclusion that will determine whether persons with disabilities benefit or are further marginalised. Writers, policymakers, developers and advocates alike must recognise this intersection and act accordingly.
---
References
  • Building an accessible future for all: AI and the inclusion of Persons with Disabilities. UN RIC, 2 December 2024. ([UN Regional Information Centre])
  • Article 5: Prohibited AI Practices. EU Artificial Intelligence Act. ([Artificial Intelligence Act])
  • Convention on the Rights of Persons with Disabilities. OHCHR. ([OHCHR])
  • Bias in algorithms – Artificial intelligence and discrimination. European Union Agency for Fundamental Rights (FRA), 8 December 2022. ([FRA])
  • AI Act and disability-centred policy: how can we stop perpetuating social exclusion? OECD.AI, 17 May 2023. ([OECD AI Policy Observatory])
  • Digital accessibility and the UN Convention on the Rights of Persons with Disabilities – a conference review. TPGi blog, 19 June 2024. ([TPGi — a Vispero company])
  • Policy brief: Powering Inclusion: Artificial Intelligence and Assistive Technology. UCL. ([University College London])
  • Inclusive AI for people with disabilities: key considerations. Clifford Chance, 6 December 2024. ([Clifford Chance])
  • Algorithmic Discrimination in Health Care: An EU Law Perspective. PMC, 2022. ([PMC])
  • Artificial intelligence and the rights of persons with disabilities. European Disability Forum / FEPH report, 23 February 2022. ([EDF FEPH])
