Saturday, 31 January 2026

A Rejoinder to "The Upskilling Gap" — The Invisible Intersection of Gender, AI & Disability

To:

Ms. Shravani Prakash, Ms. Tanu M. Goyal, and Ms. Chellsea Lauhka
c/o The Hindu, Chennai / Delhi, India

Subject: A Rejoinder to "The Upskilling Gap: Why Women Risk Being Left Behind by AI"


Dear Authors,

I write in response to your article, "The upskilling gap: why women risk being left behind by AI," published in The Hindu on 24 December 2025, with considerable appreciation for its clarity and rigour. Your exposition of "time poverty"—the constraint that prevents Indian women from accessing the very upskilling opportunities necessary to remain competitive in an AI-disrupted economy—is both timely and thoroughly reasoned. The statistic that women spend ten hours fewer per week on self-development than men is a clarion call, one that demands immediate attention from policymakers and institutional leaders.

Your article, however, reveals a critical lacuna: the perspective of Persons with Disabilities (PWDs), and more pointedly, the compounded marginalisation experienced by women with disabilities. While your arguments hold considerable force for women in general, they apply with even greater severity to disabled women navigating this landscape. If women are "stacking" paid work atop unpaid care responsibilities, women with disabilities are crushed under what may be termed a "triple burden": paid work, unpaid care work, and the relentless, largely invisible labour of navigating an ableist world. In disability studies, this phenomenon is often discussed as "Crip Time"—the unseen expenditure of emotional, physical, and administrative energy required simply to move through a society not designed for disabled bodies.

1. The "Time Tax" and Crip Time: A Compounded Deficit

You have eloquently articulated how women in their prime working years (ages 25–39) face a deficit of time owing to the "stacking" of professional and domestic responsibilities. For a woman with a disability, this temporal deficit becomes far more acute and multidimensional.

Consider the following invisible labour burdens:

Administrative and Bureaucratic Labour. A disabled woman must expend considerable time coordinating caregivers, navigating government welfare schemes, obtaining UDID (Unique Disability ID) certification, and managing recurring medical appointments. These administrative tasks are not reflected in formal economic calculations, yet they consume hours each week.

Navigation Labour. In a nation where "accessible infrastructure" remains largely aspirational rather than actual, a disabled woman may need three times as long to commute to her place of work or to complete the household tasks you enumerate in your article. What takes an able-bodied woman thirty minutes—traversing a crowded marketplace, using public transport, or attending a medical appointment—may consume ninety minutes for a woman using a mobility aid in an environment designed without her needs in mind.

Emotional Labour. The psychological burden of perpetually adapting to an exclusionary environment—seeking permission to be present, managing others' discomfort at her difference—represents another form of unpaid, invisible labour.

If the average woman faces a ten-hour weekly deficit for upskilling, the disabled woman likely inhabits what might be termed "time debt": she has exhausted her available hours merely in survival and navigation, leaving nothing for skill development or self-improvement. She is not merely "time poor"; she is in time debt.

2. The Trap of Technoableism: When Technology Becomes the Problem

Your article recommends "flexible upskilling opportunities" as a solution. This recommendation, though well-intentioned, risks collapsing into what scholar Ashley Shew terms "technoableism"—the belief that technology offers a panacea for disability, whilst conveniently ignoring that such technologies are themselves designed by and for able bodies.

The Inaccessibility of "Flexible" Learning. Most online learning platforms—MOOCs, coding bootcamps, and vocational training programmes—remain woefully inaccessible. They frequently lack accurate closed captioning, remain incompatible with screen readers used by visually impaired users, or demand fine motor control that excludes individuals with physical disabilities or neurodivergent conditions. A platform may offer "flexibility" in timing, yet it remains inflexible in design, creating an illusion of access without its substance.

The Burden of Adaptation Falls on the Disabled Person. Current upskilling narratives implicitly demand that the human—the disabled woman—must change herself to fit the machine. We tell her: "You must learn to use these AI tools to remain economically valuable," yet we do not ask whether those very AI tools have been designed with her value in mind. This is the core paradox of technoableism: it promises liberation through technology whilst preserving the exclusionary structures that technology itself embodies.

3. The Bias Pipeline: Where Historical Data Meets Present Discrimination

Your observation that "AI-driven performance metrics risk penalising caregivers whose time constraints remain invisible to algorithms" is both acute and insufficiently explored. Let us examine this with greater precision.

The Hiring Algorithm and the "Employment Gap." Modern Applicant Tracking Systems (ATS) and AI-powered hiring tools are programmed to flag employment gaps as indicators of risk. Consider how these gaps are interpreted differently:

  • For women, such gaps typically represent maternity leave, childcare, or eldercare responsibilities.

  • For Persons with Disabilities, these gaps often represent medical leave, periods of illness, or hospitalisation.

  • For women with disabilities, the algorithmic penalty is compounded: a resume containing gaps longer than six months may be filtered out automatically before any human reviewer examines it, eliminating qualified disabled women from consideration entirely.

Research audits have documented this discrimination. In one documented case, a hiring algorithm disproportionately flagged minority candidates as needing human review because such candidates, already subject to systemic bias in how they were evaluated, tended to give shorter responses during video interviews, which the algorithm interpreted as "low engagement".
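
Purely by way of illustration, the screening logic described above can be reduced to a few lines of code. The sketch below is a hypothetical simplification, not the code of any actual vendor; the six-month threshold, the field names, and the dates are assumptions invented for the example.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Stint:
    """One period of employment on a candidate's resume (illustrative)."""
    start: date
    end: date

MAX_GAP_DAYS = 183  # hypothetical "six-month" threshold

def passes_gap_screen(history: list[Stint]) -> bool:
    """Reject any history containing a gap longer than the threshold.
    The rule never asks why the gap exists, so maternity leave,
    hospitalisation, and disability-related leave all register as 'risk'."""
    ordered = sorted(history, key=lambda s: s.start)
    for prev, nxt in zip(ordered, ordered[1:]):
        if (nxt.start - prev.end).days > MAX_GAP_DAYS:
            return False  # filtered out before any human sees the resume
    return True

# Illustrative candidate: a ten-month gap for medical treatment.
candidate = [
    Stint(date(2019, 1, 1), date(2022, 3, 31)),
    Stint(date(2023, 2, 1), date(2025, 12, 31)),
]
print(passes_gap_screen(candidate))  # False: the reason for the gap is invisible
```

The point of the sketch is simply that such a rule has no input through which the reason for a gap could ever be considered.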

Video Interviewing Software and Facial Analysis. Until the feature was discontinued in January 2021, the video-interviewing platform HireVue used facial analysis to assess candidates' suitability, evaluating eye contact, facial expressions, and speech patterns as proxies for "employability" and honesty. The system exemplified technoableism in its purest form:

  • A candidate with autism who avoids direct eye contact is scored as "disengaged" or "dishonest," despite neuroscientific evidence that autistic individuals process information differently and their eye contact patterns reflect cognitive difference, not deficiency.

  • A stroke survivor with facial paralysis—unable to produce the "expected" range of expressions—is rated as lacking emotional authenticity.

  • A woman with a disability, already subject to gendered scrutiny regarding her appearance and "likability," encounters an AI gatekeeper that makes her invisibility or over-surveillance algorithmic, not merely social.

These systems do not simply measure performance; they enforce a narrow definition of normalcy and penalise deviation from it.

4. Verified Examples: The "Double Glitch" in Action

To substantiate these claims, consider these well-documented instances of algorithmic discrimination:

Speech Recognition and Dysarthria. Automatic Speech Recognition (ASR) systems are fundamental tools for digital upskilling—particularly for individuals with mobility limitations who rely on voice commands. Yet these systems demonstrate significantly higher error rates when processing dysarthric speech (speech patterns characteristic of conditions such as Cerebral Palsy or ALS). Recent research quantifies this disparity:

  • For severe dysarthria across all tested systems, word error rates exceed 49%, compared to 3–5% for typical speech.

  • Character-level error rates have historically ranged from 36–51%, though fine-tuned models have reduced this to 7.3%.

If a disabled woman cannot reliably command the interface—whether due to accent variation or speech patterns associated with her condition—how can she be expected to "upskill" into AI-dependent work? The platform itself becomes a barrier.
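
For readers who wish to see what lies behind a "word error rate" figure, the sketch below computes the standard metric: the word-level edit distance (substitutions, deletions, and insertions) between a reference transcript and the system's output, divided by the number of reference words. The sentences are invented for illustration; the percentages quoted above come from the cited research, not from this toy example.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Invented transcripts, for illustration only.
reference = "please open the training module on data entry"
typical_output = "please open the training module on data entry"   # WER = 0.0
dysarthric_output = "please opt the train model and data"          # much higher WER
print(word_error_rate(reference, typical_output))
print(word_error_rate(reference, dysarthric_output))
```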

Facial Recognition and the Intersection of Race and Gender. The "Gender Shades" study, conducted by researchers at the MIT Media Lab, documented severe bias in commercial facial analysis systems, with gender-classification error rates varying dramatically by race and gender:

  • Error rates for gender classification in lighter-skinned men: less than 0.8%

  • Error rates for gender classification in darker-skinned women: 20.8% to 34.7%

In a follow-up audit, Amazon Rekognition similarly misclassified 31 percent of darker-skinned women as men. For a disabled woman of colour seeking employment or accessing digital services, facial recognition systems compound her marginalisation: she is either rendered invisible (failed detection) or hyper-surveilled (flagged as suspicious).

The Absence of Disability-Disaggregated Data. Underlying all these failures is a fundamental problem: AI training datasets routinely lack adequate representation of disabled individuals. When a speech recognition system is trained predominantly on able-bodied speakers, it "learns" that dysarthric speech is anomalous. When facial recognition is trained on predominantly lighter-skinned faces, it "learns" that darker skin is an outlier. Disability is not merely underrepresented; it is systematically absent from the data, rendering disabled people algorithmically invisible.

5. Toward Inclusive Policy: Dismantling the Bias Pipeline

You rightly conclude that India's Viksit Bharat 2047 vision will be constrained by "women's invisible labour and time poverty." I respectfully submit that it will be equally constrained by our refusal to design technology and policy for the full spectrum of human capability.

True empowerment cannot mean simply "adding jobs," as your article notes. Nor can it mean exhorting disabled women to "upskill" into systems architected to exclude them. Rather, it requires three concrete interventions:

First, Inclusive Data Collection. Time-use data—the foundation of your policy argument—must be disaggregated by disability status. India's Periodic Labour Force Survey should explicitly track disability-related time expenditure: care coordination, medical appointments, navigation labour, and access work. Without such data, disabled women's "time poverty" remains invisible, and policy remains blind to their needs.

Second, Accessibility by Design, Not Retrofit. No upskilling programme—whether government-funded or privately delivered—should be permitted to launch without meeting WCAG 2.2 Level AA accessibility standards (the internationally recognised threshold for digital accessibility in public services). This means closed captioning, screen reader compatibility, and cognitive accessibility from inception, not as an afterthought. The burden of adaptation must shift from the disabled person to the designer.

Third, Mandatory Algorithmic Audits for Intersectional Bias. Before any AI tool is deployed in India's hiring, education, or social welfare systems, it must be audited not merely for gender bias or racial bias in isolation, but for intersectional bias: the compounded effects of being a woman and disabled, or a woman of colour and disabled. Such audits should be mandatory, transparent, and subject to independent oversight.
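
What such an intersectional audit could compute need not be mysterious. The following sketch disaggregates a hypothetical screening outcome jointly by gender and disability and applies a simple "four-fifths"-style disparate-impact check; the records, column names, and the 80 percent threshold are illustrative assumptions, not a prescribed methodology.

```python
import pandas as pd

# Hypothetical audit data: one row per applicant, with the screening outcome.
df = pd.DataFrame({
    "gender":     ["F", "F", "M", "M", "F", "M", "F", "M"],
    "disability": [True, False, True, False, True, False, False, True],
    "selected":   [0, 1, 1, 1, 0, 1, 1, 0],
})

# Selection rate for every gender x disability subgroup, not each axis alone.
rates = df.groupby(["gender", "disability"])["selected"].mean()

# Compare each subgroup against the most-favoured subgroup
# (a simple "four-fifths"-style disparate-impact screen).
impact_ratio = rates / rates.max()
print(rates)
print(impact_ratio[impact_ratio < 0.8])  # subgroups falling below the 80% threshold
```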

Conclusion: A Truly Viksit Bharat

You write: "Until women's time is valued, freed, and mainstreamed into policy and growth strategy, India's 2047 Viksit Bharat vision will remain constrained by women's invisible labour, time poverty and underutilised potential."

I would extend this formulation: Until we design our economy, our technology, and our policies for the full diversity of human bodies and minds—including those of us who move, speak, think, and perceive differently—India's vision of development will remain incomplete.

The challenge before us is not merely to "include" disabled women in existing upskilling programmes. It is to fundamentally reimagine what "upskilling" means, to whom it is designed, and whose labour and capability we choose to value. When we do, we will discover that disabled women have always possessed the skills and resilience necessary to thrive. Our task is simply to remove the barriers we have constructed.

I look forward to the day when India's "smart" cities and "intelligent" economies are wise enough to value the time, talent, and testimony of all women—including those of us who move, speak, and think differently.

Yours faithfully,

Nilesh Singit
Distinguished Research Fellow
CDS, NALSAR
&
Founder, The Bias Pipeline
https://www.nileshsingit.org/

Monday, 5 January 2026

How Algorithmic Bias Shapes Accessibility: What Disabled People Ought to Know

Abstract
In the expanding realm of artificial intelligence (AI), systems that appear neutral may in fact reproduce and amplify bias, with significant consequences for persons with disabilities. This article examines how algorithmic bias interacts with accessibility: for example, by misrecognising disabled bodies or communication styles, excluding assistive technology users, or embedding inaccessible design decisions in automated tools. Using the UNCRPD’s rights-based framework and the EU AI Act’s regulatory model, the piece advances a critical perspective on how disabled people in India and elsewhere must remain vigilant to algorithmic harms and insist on inclusive oversight. Within India’s evolving digital-governance and disability-rights context, accessible AI systems are not optional but a matter of legal and ethical obligation. The essay concludes by offering practical recommendations — including disability-inclusive data practices, human rights impact assessments, transparency and participation of disabled persons’ organisations — for policymakers, designers and civil-society actors. The objective is to ensure that AI becomes a facilitator of accessibility rather than a barrier.
Introduction
Artificial intelligence systems are increasingly embedded in everyday services: from recruitment platforms and credit-scoring tools, to facial-recognition, speech-to-text and navigation aids. At first glance, these systems promise increased efficiency and even accessibility gains. Yet beneath the veneer of “smart automation” lies a persistent problem: algorithmic bias. Such bias refers to systematic and repeatable errors in an AI system that create unfair outcomes for individuals or groups — often those already marginalised. In the context of disability, algorithmic bias can shape accessibility in profound ways: by excluding persons with certain disabilities, misreading assistive communication modes, or embedding stereotyped assumptions into system design.
For disabled persons, accessibility is not a mere convenience; it is a right. Under the UNCRPD, States Parties “shall ensure that persons with disabilities can access information and communications technologies … on an equal basis with others”. ([OHCHR]) Consequently, when AI systems fail to respect accessibility, the failure is both practical and rights-based. Meanwhile, the EU AI Act introduces a regulatory architecture attuned to algorithmic risk and non-discrimination, explicitly covering disability and accessibility considerations. ([Artificial Intelligence Act]) This article explores how algorithmic bias shapes accessibility, draws upon rights and regulation frameworks, and reflects on what disabled people (and those engaged in disability rights) ought to know — with special reference to the Indian context.
Understanding Algorithmic Bias and Accessibility
What is algorithmic bias?
Algorithmic bias occurs when an AI system, whether through its data, its model, its deployment context, or its user interface, produces outcomes that are systematically less favourable for certain groups. These groups — by virtue of protected characteristics such as disability — may face unfair exclusion or adverse treatment. In the European context, the European Union Agency for Fundamental Rights (FRA) has noted that “speech-algorithms include strong bias against people … disability …”. ([FRA]) Bias may arise at different stages: data collection (under-representation of disabled persons), model training (failure to include assistive-technology use cases), deployment (system inaccessible for screen-reader users), or continuous feedback (lack of monitoring for disabled-user outcomes). Importantly, bias is not always obvious: it may manifest as “fair on average” but unfair for particular groups.
The accessibility dimension
Accessibility means that persons with disabilities can access, use and benefit from goods, services, environments and information “on an equal basis with others”. Under the UNCRPD (Article 9), States are obliged to take appropriate measures to ensure accessibility of information and communications technologies. ([United Nations Documentation]) AI systems that serve as mediators of access — for example, voice-interfaces, image-recognition apps, assistive navigation systems — must therefore be designed to respect accessibility. Yet when algorithmic bias creeps in, accessibility is undermined. Consider a recruitment AI that misinterprets alternative communication modes used by a candidate with cerebral palsy, or a smartphone-app navigation tool that fails to account for a wheelchair user’s needs due to biased training data. The result is exclusion or disadvantage, despite the system being marketed as inclusive.
Intersection of bias and accessibility
Disabled persons may face compounded disadvantage: algorithmic bias interacting with inaccessible design means that even when an AI system is technically available, the outcome may not be equitable. For example, an AI-driven health-screening tool may be calibrated on data from non-disabled populations, thereby misdiagnosing persons with disabilities or failing to accommodate their patterns. The OECD notes that AI systems may discriminate against individuals with “facial differences, gestures … speech impairment” and other disability characteristics. ([OECD AI Policy Observatory]) Thus, accessibility is not simply about enabling “access”, but ensuring that access is meaningful and equitable in the face of algorithmic design.
Regulatory and Rights Frameworks
UNCRPD’s relevance
The UNCRPD is the foundational human-rights instrument relating to disability. As referenced, Article 9 requires accessibility of ICTs; Article 5 prohibits discrimination based on disability; and Article 4 calls on States Parties to adopt appropriate measures, including international cooperation. ([United Nations Documentation]) The UN Special Rapporteur on the rights of persons with disabilities has drawn attention to how AI systems can change the relationship between the State (or private actors) and persons with disabilities — especially where automated decision-making is used in recruitment, social protection or other services. ([UN Regional Information Centre]) States, therefore, have both a regulatory and oversight obligation to prevent algorithmic discrimination and to ensure that AI supports — not undermines — the rights of persons with disabilities.
The EU AI Act and disability
The EU AI Act (which entered into force in August 2024) provides a risk-based regulatory approach for AI systems. Among its features: the prohibition of certain “unacceptable risk” AI practices (Article 5), obligations for high-risk AI systems (data governance, transparency, human oversight), and notable references to disability and accessibility. For example, Article 5(1)(b) prohibits AI systems that exploit the vulnerabilities of a natural person “due to their … disability”. ([Artificial Intelligence Act]) Further, Article 10(5) allows collection of sensitive data (including disability) to evaluate, monitor and mitigate bias. ([arXiv]) The EU thus offers a model of combining accessibility, non-discrimination and algorithmic oversight.
India’s context
In India, the rights of persons with disabilities are codified in the Rights of Persons with Disabilities Act, 2016 (RPwD Act). While the Act does not explicitly focus on AI or algorithmic bias, it incorporates the UNCRPD framework (which India has ratified) and mandates non-discrimination, accessibility and equal opportunities. Thus, practitioners and policymakers in India ought to interpret emerging AI systems through the lens of the RPwD Act and UNCRPD obligations. Given India’s rapid digitisation (Government e-services, AI in welfare systems, biometric identification), the issue of algorithmic bias and accessibility is highly material for persons with disabilities in India.
How Algorithmic Bias Manifests in Accessibility Scenarios
Recruitment and employment
AI tools for hiring (resume screening, video interviews, psychometric testing) often involve patterns derived from historical data. If these datasets reflect the historic exclusion of persons with disabilities, the algorithm learns that “disclosure of disability” or “use of assistive technology” correlates with lower success or is anomalous. As one report notes: “since historical data might show fewer hires of candidates who requested workplace accommodations … the system may interpret this as a negative indicator”. ([warden-ai.com]) In India, where data bias and limited disability representation in formal employment persist, recruitment AI may further entrench disadvantage unless corrected.
Assistive technology and communication tools
AI-powered assistive technologies offer major potential: speech-to-text, sign-language avatars, navigation aids, prosthetic-control systems. ([University College London]) Yet biases in training data or interface design can exclude users. For instance, gesture-recognition may be trained on normative movements, failing to recognise users with atypical mobility; speech-recognition may mis-transcribe persons with dysarthria or non-standard accents. Unless developers include diverse disability profiles in training and testing, the assistive tools themselves may become inaccessible or unreliable.
Public services and welfare administration
In welfare systems, AI may screen for eligibility, monitor benefits, or allocate resources. Persons with disabilities may be disadvantaged if systems assume normative behaviour or communication patterns. The UN Special Rapporteur warns that AI tools used by authorities may become gatekeepers in processes such as employment or social services. ([UN Regional Information Centre]) In India, where Aadhaar-linked digital services and automated verification proliferate, there is a risk that inaccessible interfaces or biased decision logic may deny or delay access for persons with disabilities.
Built-environment, navigation and smart-cities
The synergy of AI and the built environment (smart navigation, accessibility scanners) holds promise. But algorithmic bias may intervene: for example, AI-derived route-planning may favour users who walk, not those who use wheelchairs; computer vision may misclassify assistive devices; or voice-gated systems may fail to accommodate users who communicate in sign language. Globally, a recent system called “Accessibility Scout” uses machine-learning to identify accessibility concerns in built environments—but even such systems need disability-inclusive training data. ([arXiv]) In India’s rapidly urbanising spaces (metro stations, smart city initiatives), disabled users risk being excluded if AI-based navigation or environment-scanning tools are biased.

Why Disabled People Ought to Know and Act
Legal and rights implications
Persons with disabilities ought to know that algorithmic bias is not merely a technical issue but a rights issue. Under the UNCRPD and the RPwD Act, they are entitled to equality of access, participation and non-discrimination. If an AI system denies them a job interview, misinterprets their assistive communication or makes a decision that excludes them, the outcome may contravene those rights. In Europe, the EU AI Act recognises that vulnerability due to disability is a ground to prohibit certain AI practices (Article 5(1)(b)). ([Artificial Intelligence Act])
Practical implications of accessibility failure
When AI systems are biased, accessibility suffers in concrete ways: exclusion, invisibility, misidentification, denial of services, or reliance on assistive tools that do not work as intended. A recruitment AI may fail to recognise alternative speech patterns, for example, or a building-navigation AI may not route for wheelchair users. These are not hypothetical — the digital divide is already severe for persons with disabilities. ([TPGi — a Vispero company])
Participation and voice
Disabled people and their representative organisations must insist on participation in the design, development and governance of AI systems. The UN Special Rapporteur emphasises that persons with disabilities are rarely involved in developing AI, thereby increasing the risk of exclusion. ([UN Regional Information Centre]) Participation ensures that lived experience informs design, testing and deployment, reducing bias and strengthening accessibility.
Awareness of “black-box” systems and recourse
Many AI systems operate as opaque “black boxes” with little transparency. Persons with disabilities ought to know their rights: for example, the right to explanation (in some jurisdictions) and to challenge automated decisions. Whilst India’s specific jurisprudence in this regard is emerging, the EU regime provides a model: for high-risk systems, transparency, human-in-the-loop oversight, and documentation obligations apply. ([arXiv]) Such awareness helps disabled persons, advocates and lawyers to ask relevant questions of system-owners and policymakers.
Recommendations for Policymakers, Designers and Disabled-Rights Advocates
Data practice and inclusive datasets
Ensure that training datasets include persons with disabilities, assistive-technology users, alternative communication modes and diverse disability profiles.
Conduct bias-audits specifically for disability as a protected characteristic: for example, disaggregated outcome analysis of how persons with disabilities fare vis-à-vis non-disabled persons. ([FRA])
Where sensitive data (including disability status) is needed to assess bias, ensure privacy, consent and safeguards (cf. Article 10(5) of the EU AI Act). ([arXiv])
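
As an illustration of the first point above, a representation audit can be as modest as comparing the share of each disability profile in a training set against an external population benchmark; every category name and figure below is invented for the sketch.

```python
import pandas as pd

# Hypothetical training-set metadata: share of records per disability profile,
# compared against an external population benchmark (all figures illustrative).
training_share = pd.Series({
    "no_disability": 0.985, "visual": 0.004, "hearing": 0.003,
    "locomotor": 0.005, "speech": 0.001, "cognitive": 0.002,
})
population_share = pd.Series({
    "no_disability": 0.95, "visual": 0.01, "hearing": 0.01,
    "locomotor": 0.02, "speech": 0.005, "cognitive": 0.005,
})

# Ratio < 1 means the group is under-represented relative to the benchmark;
# flag anything below, say, half of its population share.
ratio = training_share / population_share
print(ratio[ratio < 0.5])
```
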
Accessibility-by-design and human-centred testing
Design AI systems with accessibility from the outset: include persons with disabilities in usability testing, interface design and deployment scenarios.
For assistive-technology applications, ensure testing across mobility, sensory, cognitive and communication impairment types.
Ensure that the human-machine interface does not assume normative speech, movement or interaction styles.
Transparency, accountability and redress
Developers should publish documentation of system design, training data summary, performance across disability groups and mitigation of bias.
Deployers of high-risk AI systems should integrate human-in-the-loop oversight and allow for meaningful human review of adverse outcomes.
Disability rights organisations should demand audit reports, accessible documentation and pathways for complaint or redress.
Regulation and policy implementation
In India, policymakers should align AI regulation with disability rights frameworks (RPwD Act, UNCRPD) and mandate accessibility audits for AI systems deployed in public services.
Regulatory bodies (such as data-protection authorities or disability rights commissions) must include algorithmic bias and accessibility in their oversight remit.
As in the EU model, a risk-based classification of AI systems (unacceptable, high, limited, minimal risk) may help India and other jurisdictions frame governance. ([OECD AI Policy Observatory])
Capacity building and awareness
Disabled persons’ organisations (DPOs) in India and elsewhere should develop technical literacy about AI, algorithmic bias and accessibility implications.
Training modules should be developed for developers, designers and policymakers on disability-inclusive AI.
Collaborative platforms between academia, industry, government and DPOs are needed to research disabled-user-specific AI bias in Indian contexts.
Specific Relevance to India
India’s digital ecosystem is expanding rapidly: e-governance portals, biometric identification (e.g., Aadhaar), AI-driven services for health, education and welfare, and smart-city initiatives. Given this expansion, algorithmic bias poses a heightened risk for persons with disabilities in India.
Firstly, data gaps in India regarding disability are well documented: persons with disabilities are under-represented in formal employment, excluded from many surveys, and often not visible in “mainstream” datasets. Thus, AI systems trained on such data may systematically overlook or misclassify persons with disabilities.
Secondly, accessibility in India remains a significant challenge: although the RPwD Act mandates accessibility of ICT and built environments, the practice is uneven. When AI systems become mediators of access (for example, e-service portals, automated benefit systems, recruitment platforms), any bias in design or data may compound the existing social exclusion of persons with disabilities.
Thirdly, inclusive AI policy in India remains nascent. Unlike the EU, India does not yet have a comprehensive AI-regulation scheme that explicitly addresses disability bias or designates accessibility-critical AI as high-risk. Therefore, advocates and policymakers in India ought to press for regulatory clarity, accessibility audits and inclusive design in all AI deployment—especially where the State is involved.
Finally, the Indian disability-rights movement must engage actively with AI governance: ensuring that persons with disabilities have a voice in design, procurement, deployment and oversight of AI systems in India. Without such engagement, AI may become a new vector of exclusion rather than a facilitator of independence and participation.
Conclusion
The promise of artificial intelligence to enhance accessibility for persons with disabilities is real: from speech-recognition and navigation aids to employment-matching and inclusive education. Yet, without careful attention to algorithmic bias, accessibility may remain aspirational rather than realised. Algorithmic bias shapes accessibility when AI systems misrecognise, exclude, misclassify or disadvantage persons with disabilities — and this effect is a human-rights concern under the UNCRPD, the RPwD Act and emerging regulatory frameworks such as the EU AI Act.
Disabled persons and their organisations ought to understand that algorithmic bias is not abstract but concrete in accessibility terms. They need to engage, insist on inclusive data, demand transparency, participate in system design and seek accountability. Policymakers and AI developers must embed accessibility by design, integrate disability-inclusive datasets, monitor outcomes by disability status, and adopt governance mechanisms that guard against unfair exclusion.
In India, where digital transformation is swift and disability inclusion remains a critical challenge, the stakes are high. AI systems will increasingly mediate how persons with disabilities access jobs, services, information and public spaces. Without proactive safeguards, algorithmic bias may reinforce existing barriers. But with rights-based regulation, inclusive design and meaningful participation of persons with disabilities, AI can become a powerful tool for accessibility rather than an additional barrier.
In short, accessibility and algorithmic fairness must move together. AI systems may be powerful, but it is human judgment, oversight and commitment to inclusion that will determine whether persons with disabilities benefit — or are further marginalised. Writers, policymakers, developers and advocates alike must recognise this intersection and act accordingly.
---
References
  • Building an accessible future for all: AI and the inclusion of Persons with Disabilities. UN RIC. 02 December 2024. ([UN Regional Information Centre])
  • Article 5: Prohibited AI Practices | EU Artificial Intelligence Act. ([Artificial Intelligence Act])
  • Convention on the Rights of Persons with Disabilities. OHCHR. ([OHCHR])
  • Bias in algorithms – Artificial intelligence and discrimination. European Union Agency for Fundamental Rights (FRA). 8 December 2022. ([FRA])
  • AI Act and disability-centred policy: how can we stop perpetuating social exclusion? OECD.AI. 17 May 2023. ([OECD AI Policy Observatory])
  • Digital accessibility and the UN Convention on the Rights of Persons with Disabilities – a conference review. TPGi blog. 19 June 2024. ([TPGi — a Vispero company])
  • Policy brief: Powering Inclusion: Artificial Intelligence and Assistive Technology. UCL. ([University College London])
  • Inclusive AI for people with disabilities: key considerations. Clifford Chance. 6 December 2024. ([Clifford Chance])
  • Algorithmic Discrimination in Health Care: An EU Law Perspective. PMC. 2022. ([PMC])
  • Artificial intelligence and the rights of persons with disabilities. European Disability Forum / FEPH report. 23 February 2022. ([EDF FEPH])

Friday, 2 January 2026

A Legal and Technical Critique of India’s AI Governance: Guidelines from a Disability Rights Perspective

 

Abstract

Artificial Intelligence (AI) systems risk amplifying existing social exclusions if disabled persons are not explicitly included. India’s current AI governance framework—as evidenced by the India AI Governance Guidelines (I-AIGG)—pursues an “AI for All” vision, yet it omits mandatory accessibility and anti-discrimination safeguards for persons with disabilities (PwDs). This whitepaper examines India’s obligations under the UN Convention on the Rights of Persons with Disabilities (UNCRPD) and the Rights of Persons with Disabilities Act, 2016 (RPwD Act), along with the Supreme Court’s recent Rajive Raturi v. Union of India (2024) ruling, to argue that enforceable rights-based rules must underpin AI policy. We highlight how technical biases (in data, models, and annotations) and regulatory gaps leave disabled Indians vulnerable in education, employment, health, and public services. Benchmarking against the EU Artificial Intelligence Act (Reg. (EU) 2024/1689) and international best practices, we propose concrete legal, regulatory, and technical reforms: mandatory AI accessibility standards (aligned with WCAG/GIGW), high-risk classifications with Disability Impact Assessments (DIAs), dataset audits, inclusive design, and strong institutional accountability (including PwD representation and redress mechanisms). These reforms are designed to translate India’s domestic and international disability rights obligations into binding AI governance that promotes equity, not exclusion.

Introduction

AI-driven tools are rapidly deployed across education, employment, healthcare, public services, and social protection in India. Prominent initiatives like Digital India and Aadhaar modernisation, alongside private-sector AI deployments (in fintech, recruitment, etc.), underscore a national push towards technology-led development. In principle, Indian policy espouses “inclusive” and “human-centric” AI. For example, the newly unveiled India AI Governance Guidelines (I-AIGG) emphasise human-centricity, transparency, and fairness. However, a critical flaw looms: these guidelines treat inclusion as voluntary and vague. They refer only to “marginalised communities” without explicitly defining or safeguarding persons with disabilities.

This omission is alarming. Disability rights are not optional extras but are protected by law. India has over 63 million PwDs (per NFHS-5), each with a constitutionally protected right to equality and non-discrimination. Moreover, the UNCRPD (to which India is a State Party) and the RPwD Act (2016) impose affirmative obligations to ensure accessibility to information and technology. For instance, UNCRPD Article 9 mandates that States “[take] appropriate measures” to ensure PwDs have equal access to “information and communications, including information and communications technologies and systems”. Similarly, the RPwD Act requires binding accessibility standards for physical and digital infrastructure (Sections 40–46) and prescribes penalties for non-compliance.

Critically, India’s Supreme Court has now declared that accessibility cannot be left to aspirational guidelines. In Rajive Raturi v. Union of India (2024), the Court struck down non-binding digital accessibility norms and directed mandatory rulemaking. The judgment reaffirmed that “digital accessibility is a fundamental right” and that reliance on “persuasive guidelines” violates the RPwD Act. It called for uniform, enforceable standards “in consultation with all stakeholders” including PwDs. 

This whitepaper builds on these developments to focus on AI bias against PwDs as a pressing issue. We analyze how algorithms can inadvertently exclude disabled people, review the inadequacies of current policy (especially the I-AIGG), and recommend reforms. These include legal amendments, regulatory mandates, technical safeguards (such as diverse data sets and bias audits), and institutional measures (disability representation, accessible grievance redress). By placing PwD inclusion at the centre of AI governance, India can fulfill its rights-based obligations and prevent a new wave of digital exclusion.

Legal Framework

International Obligations (UNCRPD)

India ratified the UN Convention on the Rights of Persons with Disabilities (UNCRPD) in 2007, making its principles legally binding. Article 4(3) of UNCRPD requires that “[i]n the development and implementation of legislation and policies … concerning issues relating to persons with disabilities, States Parties shall closely consult with and actively involve persons with disabilities, including through their representative organizations”. Thus, disability advocates and technical experts must be part of any AI policy or standards development. 

UNCRPD Article 9 explicitly mandates accessibility to ICT: “States Parties shall take appropriate measures to ensure … persons with disabilities access … information and communications, including information and communications technologies and systems, and to other facilities and services … [with] identification and elimination of obstacles and barriers to accessibility”. Subparagraph (2)(g) specifically instructs States “to promote access for persons with disabilities to new information and communications technologies and systems, including the Internet”. In practical terms, this creates an obligation to embed accessibility into AI systems: for example, user interfaces must accommodate screen-readers or alternative input for blind users, and content must be available in sign-language or captioning for the deaf. Accessibility is thus not a “nice to have” but a treaty-level mandate. 

UNCRPD also establishes general principles against discrimination and for technology development. Article 4(1)(b) requires India “to take all appropriate measures, including legislation, to modify or abolish existing laws, regulations, customs and practices that constitute discrimination against persons with disabilities”. Article 4(1)(f) directs States to promote “universally designed goods, services, equipment and facilities… which should require the minimum possible adaptation” to meet disability needs. Equally, Article 4(1)(g) calls for R&D into new ICTs and assistive technologies suitable for PwDs. In the AI context, these provisions imply that algorithms and digital services must be designed universally (i.e. usable by the widest range of people without special adaptation), and that government should encourage tech that aids disabled users. 

In sum, UNCRPD imposes a rights-based obligation on India to ensure AI systems are accessible and non-discriminatory. As one legal analysis observes, “the Union and Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality”. These obligations extend to AI. Indeed, UNCRPD’s emphasis on “accessible ICTs” makes it clear that any AI-driven platform or service, especially public services and mainstream applications, must accommodate the needs of disabled users as a matter of international law.

Domestic Law (RPwD Act, 2016)

The Rights of Persons with Disabilities Act, 2016 (RPwD Act) is India’s primary disability rights statute. It codifies many of the UNCRPD’s mandates. Chapter VIII of the Act deals with accessibility, reflecting India’s duty to ensure inclusive physical and digital environments. Key provisions include:

  • Section 40 (Accessibility rules): The Central Government shall “formulate rules for persons with disabilities laying down the standards of accessibility for the physical environment, transportation, information and communications, including appropriate technologies and systems, and other facilities and services provided to the public”. This empowers (and indeed requires) the government to prescribe detailed norms—akin to building codes—for ICT and digital services.

  • Section 42 (Access to ICT): The Act mandates that “the appropriate Government shall take measures” to ensure: (i) all audio, print and electronic media content is available in accessible formats; (ii) persons with disabilities have access to electronic media via audio descriptions, sign language, and captioning; (iii) everyday electronic goods/equipment conform to universal design. In practical terms, government websites, mobile apps, online videos etc. must be accessible (e.g. Braille/large-print content, captions on videos) and consumer electronics (phones, ATMs, kiosks) should have features to accommodate impairments.

  • Sections 44–46 (Mandatory compliance): These sections impose strict sanctions for inaccessibility. Section 44 forbids any building permit or occupation certificate if accessibility rules (Section 40 rules) are violated. Section 45 requires all existing public buildings to be made accessible within five years of the rules being notified. Section 46 similarly mandates that public and private “service providers” (e.g. banks, clinics, online portals) must provide services in compliance with accessibility rules within two years. Thus, digital service providers should also be bound by these timelines.

  • Sections 89–90 (Penalties): Violation of any provision of the Act, or rules made under it, attracts a fine of up to ₹10,000 for a first offence and between ₹50,000 and ₹500,000 for subsequent offences. While the Act does not specifically single out ICT violations here, its broad penalty clause covers “any contravention… of any rule made thereunder”. Hence, non-compliance with prescribed accessibility standards (including digital) is punishable.

In summary, the RPwD Act creates a clear legal floor: AI products and services (as part of “digital infrastructure”) must adhere to accessibility norms once the government formulates them. This is not optional. As the Supreme Court noted, the Act’s intent is “to use compulsion” to realize accessibility. Failing to issue binding rules would leave all these enforcement provisions inoperable.

Jurisprudence: Rajive Raturi (2024)

The Supreme Court’s decision in Rajive Raturi v. Union of India (2024) is a watershed for digital accessibility. The case challenged the government’s delay in notifying accessibility rules under Section 40 RPwD. A bench led by CJI D.Y. Chandrachud delivered the judgment. Critically for AI governance, the Court held that India’s digital age cannot relegate accessibility to mere “progressive realization” or guidelines. It declared that reliance on non-binding norms is “contrary to the intent of the RPWD Act”. Specifically:

  • The Court struck down portions of Rule 15 in the 2017 RPwD Rules (which had contained “voluntary” guidelines for web accessibility). It found that “Rule 15(1)… provides only persuasive guidelines” and “is ultra vires the scheme and legislative intent of the RPWD Act”. In other words, accessibility rules must be mandatory, non-negotiable standards, not optional targets.

  • The Court directed the Union government to draft binding rules immediately. It ordered that within three months, new mandatory accessibility rules (per Section 40) be delineated, “in consultation with all stakeholders” and expressly mandated involvement of NALSAR’s Centre for Disability Studies. It even insisted on such consultation every two years for updates. This shows judicial recognition that digital accessibility standards must be shaped with input from disabled communities and experts.

  • Importantly, the Court affirmed that accessibility is a right, not a policy objective. It observed: “Creating a minimum floor of accessibility cannot be left to the altar of ‘progressive realization’”. Once rules are prescribed, authorities must enforce them. The judgment specifically instructed government bodies to enforce Sections 44–46 and 89 of the RPwD Act (withholding completion certificates, imposing fines, etc.) if accessibility norms are breached.

  • The Court also elaborated on the substance of accessibility. It emphasized “universal design” principles and comprehensive inclusion: rules must cover all disability categories (“physical, sensory, intellectual, and psychosocial disabilities”), and incorporate assistive technologies (screen readers, audio descriptions, etc.). This broad cross-disability mandate is crucial: it forecloses any notion that, say, color-contrast guidelines (for visual impairment) alone suffice, or that neurodiversity need not be considered.

In sum, Rajive Raturi removed the fence between aspiration and enforcement. For AI governance, its implications are clear: digital tools (including algorithms) must be accessible-by-design, and laws/guidelines lacking teeth are unconstitutional. The judgment’s operative directives create a legal imperative. As observed in the literature, “the Court found that reliance on non-binding guidelines and sectoral discretion violated statutory mandates, instructing creation of enforceable, uniform, standardized rules”. In practical terms, any AI initiative in India now operates under the spectre of Raturi: failure to meet accessibility standards risks legal infirmity.

NALSAR-CDS “Finding Sizes for All” Report

In the context of Raturi, it is instructive to consider the empirical report Finding Sizes for All: Report on the Status of the Right to Accessibility in India (2022) by NALSAR’s Centre for Disability Studies. Although predating the judgment, its findings underscore the deep accessibility gaps that persist. Key takeaways (synthesized here) include:

  • Widespread Non-Compliance: The report found that India’s digital landscape remains largely non-compliant with even basic accessibility norms. In a sample of websites (government, private sector, entertainment, etc.), an average of 116 Web Content Accessibility Guidelines (WCAG) errors per site was recorded, with sectors like entertainment and e-commerce worst of all. (This empirical evidence was also publicized in 2025, showing no sector had near-zero errors.)

  • Inadequate Enforcement: Despite the RPwD Act’s mandates, the report noted that rules were treated as “persuasive guidelines” rather than compulsory. It warned that such toothless regulation “compromises the realization of accessibility rights”. This resonates with the Court’s critique: without enforcement, equality guarantees ring hollow.

  • Cross-Sector Exclusion: Interviews and surveys documented barriers in education (inaccessible digital classrooms and lack of sign-language teachers), employment (job portals that ignore screen-reader needs, rigid location requirements excluding home-bound PWDs), and healthcare (telemedicine platforms without captioning). These real-world examples make clear that algorithmic or digital processes can have life-changing impacts — denying benefits, jobs, or learning to those the system overlooks.

  • Recommendations for Inclusive Design: Among its recommendations, Finding Sizes for All called for mandatory accessibility audits, mainstreaming “reasonable accommodation” in technology procurement, and inclusive data gathering to monitor compliance. These align closely with best practices in algorithmic fairness (e.g. monitoring datasets for diversity of disability-related profiles). The report emphasizes that accessibility is a cross-cutting right: it enables all other rights (education, health, justice).

Although not an AI-specific study, Finding Sizes for All provides crucial context. It documents that, on the ground, disabled Indians already suffer digital exclusion. As one interviewee noted, “I simply cannot access the university’s learning portal with my screen-reader, so I am forced to drop out”. In AI terms, this reflects both data gaps (AI systems were never trained on diverse disability use-cases) and design flaws (interfaces assume able-bodied users). The report’s empirical weight reinforces the need for robust, rights-anchored intervention.

Technical and Data Bias Issues

Algorithmic bias can emerge whenever AI systems are built on incomplete or skewed data, or without inclusive design. For PwDs, several technical issues are critical:

  • Underrepresentation in Data Sets: AI models learn from training data. If persons with disabilities are sparsely represented, or certain disability categories are absent, the model will under-serve or misclassify those groups. For instance, speech recognition trained mostly on voices without impairment may fail on a paralyzed person’s slurred speech. Facial-recognition systems trained on able-bodied faces often falter for users wearing assistive devices (like spectacles or hearing aids). These gaps are compounded by data collection biases: disabled individuals are less likely to be on social media or surveys that feed data-hungry models, and disability status may not be labeled due to privacy or stigma. Internationally, it is recognized that biased training data can “entail discriminatory effects… particularly with regard to … disabilities”.

  • Annotation and Labeling Gaps: Even when data includes PwDs, the annotations may ignore context. For example, an image dataset might label a person as “blind student” in one setting and “visually impaired” in another, but an AI model will struggle if annotations are inconsistent. Worse, labels can encode stereotypes (“disabled = unfit for job”), which models can propagate. There is a lack of inclusive annotation standards that account for diverse disabilities. To correct this, annotation guidelines should explicitly cover disability-related attributes and respect Deaf/Blind culture (e.g. labeling images with alt text that notes visual impairment).

  • Model Fairness Testing: AI fairness metrics (such as equal opportunity or disparate impact ratios) typically test for bias across groups (e.g. gender, race). Disability should be treated as a protected attribute in such tests. For example, a hiring algorithm’s outcomes should be disaggregated by disability status to check if disabled candidates are systematically scored lower. But in many jurisdictions, disability data is considered sensitive (see OECD, GDPR provisions), and collecting it for audits requires care. Nonetheless, as disability rights are constitutionally protected in India, there is scope to treat disability as a statutory exception for bias analysis.

  • Interaction and Interface Issues: Beyond data, AI user interfaces often lack accessibility. Consider a voice-based AI assistant: it may be unusable by a deaf person. An AI-driven website chatbot without text alternatives ignores blind users. Models that assume phone-camera access exclude wheelchair users who cannot hold a device. These are not biases in prediction, but design flaws. Accessibility-by-design requires, for instance, that every AI interface has multi-modal inputs/outputs (text, audio, sign, haptic) and is navigable by assistive tech.

  • Disability Impact Assessments (DIAs): Analogous to Data Protection Impact Assessments (DPIAs), DIAs would require organizations to evaluate how a new AI affects disabled users. A DIA might identify that an AI tool (say, automated interview screening) could disadvantage candidates who reveal disability on resumes. The organization would then be “required to design or adapt the system” to mitigate this (e.g. masking disability in preliminary screening or adjusting interview conditions). Currently India has no DIA mandate; it should be considered as a technical safeguard.

  • Tools and Standards: There exist open-source toolkits (e.g. IBM’s AI Fairness 360, Microsoft’s Fairlearn) that can be extended to include disability metrics. Technical standards like ISO/IEC 22989 (AI concepts) and ISO 9241 (ergonomics) can incorporate disability guidelines. Critically, any AI audit (internal or by regulators) should include checks on data representativeness (e.g. “Does training data include blind users at least in proportion to the population?”), fairness tests per disability category, and accessibility checks (e.g. automated WCAG scanning for front-end).

These technical measures are essential complements to legal ones. As the EU AI Act’s recitals note, unaddressed bias “may create new forms of discriminatory impacts… in particular for persons belonging to certain vulnerable groups, including… disabilities”. Without proactive auditing and design, AI systems can inadvertently scale discrimination. In practice, inclusive practices might require more resources (e.g. collecting data from disabled volunteers, hiring accessibility experts) but they yield better products: as one barrier-free design advocate observes, “accessible design leads to better products for all users”.
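
To make the fairness-testing point concrete, the sketch below uses the Fairlearn toolkit mentioned above to disaggregate two common metrics by disability status. It assumes fairlearn and scikit-learn are installed and follows the interface of recent fairlearn releases; the labels, predictions, and group codes are invented for illustration.

```python
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Invented example data: 1 = "shortlist", 0 = "reject".
y_true = [1, 0, 1, 1, 0, 1, 0, 1]                    # ground-truth suitability
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]                    # the model's screening decision
disability_status = ["pwd", "none", "pwd", "none",   # disability status used as
                     "pwd", "pwd", "none", "none"]   # the sensitive feature

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=disability_status,
)
print(mf.by_group)      # both metrics, split by disability status
print(mf.difference())  # largest between-group gap for each metric
```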

Policy and Regulatory Gaps

Despite some general commitments to inclusion, India’s existing AI policy lacks enforceable disability safeguards. Key gaps include:

  • Voluntariness vs. Mandate: The I-AIGG explicitly adopts a voluntary-compliance model. Principles are phrased as recommendations (e.g. companies “should” or “ought” to do this), with no legal sanctions for breach. By contrast, UNCRPD and RPwD provisions are mandatory. This mismatch is unsustainable: as the Raturi Court held, accessibility rules must be binding, not aspirational. As one analysis puts it, “aspirational principles supplant the non-negotiable legal floor guaranteed to persons with disabilities” under law. In sum, guidelines that rely on goodwill (“voluntary inclusive design”) will leave India out of compliance with its own laws.

  • Omission of Disability from Definitions: The I-AIGG never explicitly defines or enumerates “PwDs”. It uses vague references to “marginalized communities”, which obscures disability. Without definition, AI practitioners might assume disability inclusion is optional. In law, however, “persons with disabilities” is a defined category under RPwD (including locomotor, visual, hearing, cognitive, mental, etc.). Policy must mirror that specificity. For example, data protection laws often treat health/disability as sensitive data categories, implying special care for PwDs’ information – yet such categorization is absent in AI guidelines.

  • No Accessibility-by-Design Requirement: The I-AIGG does call for AI to be “understandable by design” (transparency), but it never requires systems to be accessible by design. In practice, an AI application could be fully transparent (explaining its decisions) and still be unusable for a blind or deaf user. By contrast, the EU AI Act explicitly incorporates “accessibility requirements” into high-risk AI standards. India’s guidelines should similarly mandate that UX design from the outset follow standards (e.g. WCAG for web; ITU or IEEE standards for assistive tech).

  • Weak Treatment of Algorithmic Bias: The I-AIGG mentions bias mitigation in broad strokes but does not specifically address disability bias. For instance, it refers to “fair, unbiased” outcomes including for “marginalised communities”. Yet it provides no mechanism to ensure that algorithms do not reproduce ableist assumptions. There is no requirement to audit training data for disability representation or to correct model errors that disproportionately affect PwDs. By contrast, the EU approach explicitly bars exploiting disability “vulnerabilities” (Article 5(1)(b)) and requires dataset bias checks. India’s framework should similarly identify “disability” as a protected trait in bias audits.

  • Inadequate Grievance Redress: The I-AIGG wisely calls for accessible and multi-format complaint mechanisms. However, in practice these are voluntary and company-driven. There is no legal right for a PWD harmed by algorithmic discrimination to demand an investigation or compensation. India needs an algorithmic redress institution or expansion of existing ones (like the CwD or National Commission for Persons with Disabilities) to receive AI-related disability complaints. The Raturi Court itself envisaged governmental enforcement mechanisms (e.g. withholding certificates, imposing fines). Similar enforcement powers must extend to digital services and AI platforms.
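
To illustrate the kind of representativeness audit the bias point above calls for, the sketch below compares the share of each disability category in a training dataset against a population baseline. The dataset, column name, baseline figures, and tolerance threshold are hypothetical placeholders, not official statistics.

```python
# Illustrative sketch only: compare disability representation in training data
# against an assumed population baseline. All figures are placeholders.
import pandas as pd

# Hypothetical population baselines (share of total population per category).
POPULATION_BASELINE = {"visual": 0.012, "hearing": 0.010, "locomotor": 0.015, "none": 0.963}
TOLERANCE = 0.5  # flag a category if it appears at less than 50% of its baseline share

training_df = pd.DataFrame({
    "disability_category": ["none"] * 950 + ["visual"] * 5 + ["hearing"] * 20 + ["locomotor"] * 25
})

observed = training_df["disability_category"].value_counts(normalize=True)

for category, baseline in POPULATION_BASELINE.items():
    share = observed.get(category, 0.0)
    if share < baseline * TOLERANCE:
        print(f"UNDER-REPRESENTED: {category}: {share:.3%} of training data vs {baseline:.3%} baseline")
    else:
        print(f"ok: {category}: {share:.3%} vs baseline {baseline:.3%}")
```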

In short, the I-AIGG’s rhetoric of fairness and inclusion rings hollow without concrete mandates. The open letter notes that its deficiencies are “ultra vires and constitutionally indefensible” if left uncorrected. Indeed, absent legal teeth, employers and developers may ignore disability entirely until forced by law. As one expert warns, an aspirational principle is insufficient when rights are at stake. India must convert these guidelines into binding rules (or statutory amendments) that mandate disability-inclusive design, rather than leaving it to voluntary corporate conscience.

Comparative Benchmarks

To guide reform, we look to international benchmarks. The EU’s new AI Act (Regulation (EU) 2024/1689) is instructive: it adopts a risk-based, rights-protective approach that India can partly emulate. Key lessons include:

  • High-Risk Classification: The EU Act explicitly classifies AI systems in critical social domains as high-risk. Annex III, for example, lists AI used in education (student admissions, performance assessment), employment (recruitment, promotion, monitoring), and essential services (social welfare eligibility, insurance risk scoring). These categories ensure that AI which “decides” in life-impacting areas undergoes strict scrutiny. India has no equivalent classification yet. We should consider formally designating, say, “AI in education, employment, health services, and social protection” as high-risk, triggering mandatory audits, external evaluation, and strict liability for harms.

  • Mandatory Preventive Measures: For EU high-risk AI, providers must implement bias evaluation and risk mitigation (Articles 10 and 15). The recitals highlight this: developers must ensure datasets are of high quality and “[examine] possible biases… likely to affect… fundamental rights”. They must then “prevent and mitigate” identified biases. These requirements are binding, not voluntary. India could mandate similar due diligence: any organization deploying high-impact AI must conduct (and file evidence of) disability-inclusive audits. The Data Protection Board (constituted under the DPDP Act, 2023) or a new AI oversight body could oversee compliance.

  • Explicit Accessibility Obligations: The EU Act reinforces accessibility. Recital (239) states providers are “legally obliged to… ensure persons with disabilities have access… on an equal basis” (quoting the UNCRPD) and that it “is essential that providers ensure full compliance with accessibility requirements, including Directive (EU) 2016/2102”. Directive 2016/2102 requires public sector websites and apps to meet WCAG 2.1 AA. Thus, under EU law any AI product (digital service) tied to public functions must meet accessibility standards. By contrast, India’s policies mention WCAG only in soft terms. New rules should explicitly tie AI certification (e.g. India’s proposed CE-like marking) to accessibility standards for all digital interfaces (a minimal front-end check is sketched after this list).

  • Prohibition of Discriminatory AI: Crucially, EU law outlaws certain exploitative AI practices. Article 5(1)(b) prohibits AI that “exploits vulnerabilities … due to disability … with the objective or effect of materially distorting the behaviour” of that person. This is a direct protection for PwDs, recognizing them as a vulnerable group. India has no corresponding statutory ban. Such explicit proscriptions could be mirrored in Indian law or regulations (for example, in competition/consumer protection norms) to forbid AI that manipulates or excludes users on disability grounds.

  • Enforcement and Penalties: The EU Act ties compliance to CE marking, robust market surveillance, and fines of up to €35 million or 7% of global annual turnover for the most serious breaches. India should consider similar enforcement teeth in its AI regime. For instance, the RPwD Act already carries penalties (up to ₹10,000 for a first contravention and up to ₹5 lakh for subsequent ones, under Section 89) that could be levied on entities deploying non-compliant AI systems. Sectoral regulators (e.g. RBI, SEBI, IRDAI) could issue binding standards requiring accessible AI and mandate audits.
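
As a concrete illustration of what “meeting accessibility standards” can look like at the tooling level, the sketch below flags two common WCAG failures (images without alternative text, form inputs without programmatic labels) in an HTML page. It is a deliberately minimal example under invented sample markup, not a substitute for a full WCAG 2.1 AA audit or manual testing with assistive technologies.

```python
# Illustrative sketch only: flag two common WCAG failure modes in an HTML snippet.
# A real audit would use a full checker plus manual testing; this is not one.
from bs4 import BeautifulSoup

sample_html = """
<html><body>
  <img src="scheme-banner.png">
  <img src="logo.png" alt="Department logo">
  <form>
    <input type="text" name="udid_number">
    <label for="applicant_name">Applicant name</label>
    <input type="text" id="applicant_name" name="applicant_name">
  </form>
</body></html>
"""

soup = BeautifulSoup(sample_html, "html.parser")
issues = []

# WCAG 1.1.1: images need a text alternative.
for img in soup.find_all("img"):
    if not img.get("alt"):
        issues.append(f"Image without alt text: {img.get('src')}")

# WCAG 1.3.1 / 3.3.2: form inputs should be programmatically labelled.
labelled_ids = {label.get("for") for label in soup.find_all("label")}
for field in soup.find_all("input"):
    if field.get("id") not in labelled_ids and not field.get("aria-label"):
        issues.append(f"Unlabelled input: name={field.get('name')}")

for issue in issues:
    print(issue)
```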

Benchmarking against the EU shows that inclusive governance means binding obligations, technical specificity, and accountability. For India, these translate into high-risk definitions anchored in the RPwD framework, compulsory impact assessments (such as DIAs), interoperability with existing accessibility law (WCAG/GIGW compliance as legal norms), and robust enforcement mechanisms. The EU model demonstrates that disability inclusion is not a side consideration but a core requirement for trustworthy AI. India ought to adopt these best practices rather than reiterating voluntary pledges.

Enforcement Architecture and Remedies

For reforms to matter, institutional mechanisms must be empowered:

  • Regulatory Bodies and Committees: Current IndiaAI committees (e.g. the Technology Policy Committee chaired by Prof. Ravindran) should include disability rights experts and PwD representatives. Nodal ministries (MeitY, Social Justice & Empowerment) should co-govern AI policies to ensure disability interests are represented. The Raturi Court itself mandated NALSAR-CDS involvement in rule drafting; similarly, a standing Accessibility Board could oversee AI compliance. Alternatively, existing statutory bodies (National Commission for Persons with Disabilities, SEPs and IBs) should be explicitly given AI oversight powers.

  • Grievance Redress and Remedies: Accessible complaint portals must be institutionalized. For example, the government’s Public Grievance Portal should have a dedicated AI/technology category that is barrier-free. Legal aid cells for PwDs can be trained on AI issues. At the national level, an AI Ombudsman could be established and empowered to handle discrimination complaints, issue binding directives, and award damages. In addition, courts must remain alert to enforcing RPwD Sections 44–46: now that Raturi has made the rules imminent, courts should impose fines and construction bans on non-compliant entities.

  • Monitoring and Reporting: India should implement disability-disaggregated data requirements for AI deployments. In line with its commitment under the SDGs to collect inclusive data, the government can require developers to report performance metrics (error rates, usage) by disability category. An independent audit agency (perhaps a wing of the AI Office or a specialized cell in the NCPEDP) could review these reports, much as financial audits are mandated. Transparent reporting will create public accountability (a minimal reporting sketch follows this list).

  • Capacity Building: Enforcement also means empowering officials and enterprises to comply. The RPwD Act (Sec. 47) mandates disability rights training for public servants; this should be extended to AI regulators, judges, and corporate compliance officers. Training curricula (such as those for IAS/DST batches) ought to include modules on digital accessibility. Funds should be allocated for government departments to bring their AI systems (e.g. NIC portals) into compliance, bridging the digital divide.

  • Leveraging Existing Laws: India already has disability anti-discrimination provisions (e.g. Sec. 3 of RPwD on equality and non-discrimination) and employment rules (equal opportunity policies, workplace accommodations). These can be interpreted to cover algorithmic decisions. For instance, if a reservation-quota seat is assigned by an AI system, denying it to a disabled candidate because of a bias could be challenged under RPwD (much like caste discrimination). Activists could invoke Article 21 (right to life and liberty) to argue that denying access to essential services via AI violates basic rights (the Supreme Court has recognized digital privacy and access as part of Article 21 in related cases).
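
As a sketch of what the disability-disaggregated reporting proposed above could look like in practice, the snippet below computes case counts and error rates by disability category from a hypothetical deployment log. The columns, categories, and figures are assumptions for illustration only.

```python
# Illustrative sketch only: a disability-disaggregated error-rate report from a
# hypothetical deployment log. Columns, categories, and values are assumptions.
import pandas as pd

# Hypothetical deployment log: one row per automated decision.
log = pd.DataFrame({
    "disability_category": ["none", "none", "visual", "visual", "hearing", "locomotor", "none", "hearing"],
    "model_decision":      [1, 0, 0, 0, 1, 0, 1, 0],
    "correct_decision":    [1, 0, 1, 1, 1, 1, 1, 0],
})

# An error is any decision that diverges from the correct outcome.
log["error"] = (log["model_decision"] != log["correct_decision"]).astype(int)

report = (
    log.groupby("disability_category")
       .agg(cases=("error", "size"), error_rate=("error", "mean"))
       .sort_values("error_rate", ascending=False)
)

# A table of this kind could be filed with an oversight body or published periodically.
print(report.to_string(float_format=lambda v: f"{v:.2%}"))
```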

Collectively, these measures ensure that disability inclusion in AI is not merely aspirational but enforced. As one commentator aptly notes, “accessible design must be embedded throughout the digital transformation journey”. The architecture must facilitate this—from rulemaking chambers down to helpdesk lines.

Implementation Roadmap

Achieving these reforms requires a phased, funded plan:

  1. Immediate Actions (0–6 months):

    • Regulatory Fixes: Issue an executive order or amendment clarifying that all AI policies must comply with the RPwD Act. MeitY (in coordination with the Department of Empowerment of Persons with Disabilities) should form a task force to draft mandatory AI accessibility guidelines, referencing WCAG 2.2, GIGW/HG21, and the Raturi directions. A parallel public consultation (with disability NGOs, the tech industry, and academia) should be mandated (perhaps as a requirement under Article 4(3) of the UNCRPD). NALSAR-CDS, disability commissions, and AI experts should be on the drafting panel.

    • Awareness & Training: Initiate orientation sessions for key officials (MeitY secretariat, regulators) on Raturi obligations and inclusive AI. Issue government directives instructing ministries (education, labour, health) to assess any AI in their domain for disability compliance.

  2. Short Term (6–18 months):

    • Rule Notification: Finalize and notify the AI-specific accessibility rules. For example, require all AI-driven public services (digital/online) to meet minimum accessibility criteria (alt text, captioning, keyboard navigation, etc.) from the date of notification. State regulators (such as municipal authorities) should be directed to grant building and digital-service permits only where compliance is certified.

    • Institutionalization: Establish a permanent AI and Disability cell within the MeitY/IndiaAI Mission, tasked with oversight. Expand the mandate of the Disability Commissioner (RPwD) to include monitoring AI compliance; create a digital portal to lodge AI+disability grievances.

    • Industry Engagement: Mandate private AI vendors (especially those serving the government or large enterprises) to conduct DIAs and publish summary reports. Encourage creation of accessible assistive-AI solutions (for example, Google and Microsoft have programs for accessibility; India could match them through an Accessible AI Innovation Fund).

  3. Medium Term (2–4 years):

    • Audit and Certification: Roll out an “Accessible AI” certification (akin to CE marking) for high-risk systems. Products/services failing accessibility checks should be de-listed from procurement catalogs. Regularly audit key sectors: e.g. annual accessibility audit of all public education platforms, banking apps, e-governance portals. The findings of each audit should be made public (like a quality barometer).

    • Legal Enforcement: By this stage, begin strict enforcement: levy fines under the RPwD Act and Rules for non-compliance. For example, an edtech firm that continues to run an inaccessible platform could face penalties of up to ₹5 lakh or disqualification from government contracts. Establish fast-track tribunals, or dedicated disability benches in existing consumer courts, to expedite such cases.

    • Capacity Building: Scale up professional training in accessible design (e.g. government-funded MBAs or CE (Continuing Education) courses on inclusive AI). Introduce scholarships for students with disabilities in STEM fields to ensure future tech workforce diversity.

  4. Long Term (5+ years):

    • Review and Upgrade: As technology evolves, periodically update standards (e.g. as VR/AR and IoT become mainstream). Mandate that the government review AI accessibility rules every three years, with stakeholder consultation (echoing Raturi’s triennial clause).

    • Sustained Enforcement: Ensure a sustainable budget for enforcement bodies (e.g. at least 5% of the AI Mission’s budget devoted to accessibility audits). Embed accessibility reviews into national innovation programs (e.g. Startup India, Digital Public Infrastructure) so that new projects factor in disability needs from inception.

    • Evaluation: Conduct empirical studies (in partnership with academia) on AI’s impact on PwDs (following models like Finding Sizes for All). Use these to tweak policies. Ultimately, India should report on digital accessibility indicators to international forums, demonstrating compliance with UNCRPD and SDGs.

Estimating costs: While exact figures depend on scope, many measures (like additional standards work or embedding experts) have low marginal cost relative to national AI spending. The largest expenses will be retrofitting infrastructure and training. However, surveys show accessible technology can broaden market reach; the BarrierBreak study notes that “accessible design leads to … opportunities to serve a large, underserved customer base”. Thus, the investments can yield economic as well as social returns.

Conclusion: Call to Action

India stands at a crossroads. In framing its AI policy, it must choose between inheriting digital exclusion and adopting a transformative, rights-based approach. The Rajive Raturi judgment and the RPwD Act give a clear legal mandate: accessibility and inclusion are not discretionary. Aligning AI governance with these mandates requires urgent action. The stakes are high. Without binding safeguards, disabled Indians will remain on the margins: denied fair college admission by biased algorithms, excluded from online job recruitment, unable to use smart health kiosks or government apps. That outcome would contravene the very ethos of Article 21 (the right to life, inclusive of dignity and liberty) and abdicate India’s obligations under the UNCRPD. As one expert aptly put it, “Accessibility is good business, not charity”. In a nation with 4–5 crore persons seeking work and 62 million with disabilities, inclusive AI is not just lawful; it is economically prudent and ethically imperative. This whitepaper has outlined a roadmap to reorient policy and practice. It is now for India’s leaders (legislators, regulators, technology developers, and civil society) to translate these recommendations into reality. The government must enact mandatory standards; regulators ought to enforce them with vigour; industry must internalize inclusive design principles; and the judiciary should uphold disabled persons’ rights in the digital sphere. The time to act is now. Only then can we claim that India’s AI revolution truly leaves no one behind.

References

  • N. Singit, “An Open Letter to the Ministry of Electronics and IT: A Critique of the India AI Governance Guidelines…” (13 Nov 2025) (open letter to MeitY)

  • Supreme Court of India, Rajive Raturi v. Union of India, judgment of 8 November 2024 (No. SC 875)

  • Rights of Persons with Disabilities Act, 2016 (No. 49 of 2016), §§40–46, 89

  • UN Convention on the Rights of Persons with Disabilities (UNCRPD), Arts. 4.3, 9.1, 9.2(g)

  • NALSAR Centre for Disability Studies, Finding Sizes for All: Report on the Status of the Right to Accessibility in India (2022) (citations from executive summary and findings)

  • Ministry of Electronics & IT, India AI Governance Guidelines (2025) (final PDF) (referenced via PIB press release and news analysis)

  • PIB Delhi, “MeitY Unveils India AI Governance Guidelines…” (5 Nov 2025)

  • Regulation (EU) 2024/1689 (AI Act): Recitals 239–240 (accessibility obligations); Art. 5 (prohibited exploitative AI, including exploitation of disability); Arts. 10 and 15 (bias and risk-management obligations)

  • BarrierBreak & NCPEDP, BB100 State of Digital Accessibility in India 2025 (study)

  • SheSR (SheThePeople/CSR Journal), “Rajive Raturi v. Union of India: Accessibility and the Law” (analysis)