In the expanding realm of artificial intelligence (AI), systems that appear neutral may in fact reproduce and amplify bias, with significant consequences for persons with disabilities. This article examines how algorithmic bias interacts with accessibility: for example, by misrecognising disabled bodies or communication styles, excluding assistive technology users, or embedding inaccessible design decisions in automated tools. Drawing on the UNCRPD’s rights-based framework and the EU AI Act’s regulatory model, it argues that disabled people in India and elsewhere must remain vigilant to algorithmic harms and insist on inclusive oversight. Within India’s evolving digital-governance and disability-rights context, accessible AI systems are not optional but a matter of legal and ethical obligation. The essay concludes by offering practical recommendations — including disability-inclusive data practices, human rights impact assessments, transparency and participation of disabled persons’ organisations — for policymakers, designers and civil-society actors. The objective is to ensure that AI becomes a facilitator of accessibility rather than a barrier.
Artificial intelligence systems are increasingly embedded in everyday services: from recruitment platforms and credit-scoring tools to facial recognition, speech-to-text and navigation aids. At first glance, these systems promise increased efficiency and even accessibility gains. Yet beneath the veneer of “smart automation” lies a persistent problem: algorithmic bias. Such bias refers to systematic and repeatable errors in an AI system that create unfair outcomes for individuals or groups — often those already marginalised. In the context of disability, algorithmic bias can shape accessibility in profound ways: by excluding persons with certain disabilities, misreading assistive communication modes, or embedding stereotyped assumptions into system design.
For disabled persons, accessibility is not a mere convenience; it is a right. Under the UNCRPD, States Parties “shall ensure that persons with disabilities can access information and communications technologies … on an equal basis with others”. ([OHCHR]) Consequently, when AI systems fail to respect accessibility, the failure is both practical and rights-based. Meanwhile, the EU AI Act introduces a regulatory architecture attuned to algorithmic risk and non-discrimination, explicitly covering disability and accessibility considerations. ([Artificial Intelligence Act]) This article explores how algorithmic bias shapes accessibility, draws upon rights and regulation frameworks, and reflects on what disabled people (and those engaged in disability rights) ought to know — with special reference to the Indian context.
Understanding Algorithmic Bias and Accessibility
Algorithmic bias occurs when an AI system, whether through its data, its model, its deployment context, or its user interface, produces outcomes that are systematically less favourable for certain groups. These groups — by virtue of protected characteristics such as disability — may face unfair exclusion or adverse treatment. In the European context, the European Union Agency for Fundamental Rights (FRA) has noted that “speech-algorithms include strong bias against people … disability …”. ([FRA]) Bias may arise at different stages: data collection (under-representation of disabled persons), model training (failure to include assistive-technology use cases), deployment (system inaccessible for screen-reader users), or continuous feedback (lack of monitoring for disabled-user outcomes). Importantly, bias is not always obvious: it may manifest as “fair on average” but unfair for particular groups.
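To make the “fair on average” point concrete, the short Python sketch below shows how a system can report high overall accuracy while performing far worse for assistive-technology users. The data and the 90%/50% split are invented and purely illustrative.

```python
# Illustrative sketch: overall accuracy can hide poor performance for a subgroup.
# Each record is (uses_assistive_tech, prediction_correct); all numbers are synthetic.
records = (
    [(False, True)] * 90 + [(False, False)] * 10   # other users: 90% correct
    + [(True, True)] * 5 + [(True, False)] * 5     # assistive-technology users: 50% correct
)

def accuracy(rows):
    return sum(correct for _, correct in rows) / len(rows)

overall = accuracy(records)
assistive = accuracy([r for r in records if r[0]])
others = accuracy([r for r in records if not r[0]])

print(f"Overall accuracy:      {overall:.0%}")   # ~86% — looks acceptable 'on average'
print(f"Assistive-tech users:  {assistive:.0%}") # 50% — far worse for this group
print(f"Other users:           {others:.0%}")    # 90%
```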
Accessibility means that persons with disabilities can access, use and benefit from goods, services, environments and information “on an equal basis with others”. Under the UNCRPD (Article 9), States are obliged to take appropriate measures to ensure accessibility of information and communications technologies. ([United Nations Documentation]) AI systems that serve as mediators of access — for example, voice interfaces, image-recognition apps, assistive navigation systems — must therefore be designed to respect accessibility. Yet when algorithmic bias creeps in, accessibility is undermined. Consider a recruitment AI that misinterprets alternative communication modes used by a candidate with cerebral palsy, or a smartphone navigation tool that fails to account for a wheelchair user’s needs because of biased training data. The result is exclusion or disadvantage, even though the system is marketed as inclusive.
Disabled persons may face compounded disadvantage: algorithmic bias interacting with inaccessible design means that even when an AI system is technically available, the outcome may not be equitable. For example, an AI-driven health-screening tool may be calibrated on data from non-disabled populations, thereby misdiagnosing persons with disabilities or failing to accommodate their patterns. The OECD notes that AI systems may discriminate against individuals with “facial differences, gestures … speech impairment” and other disability characteristics. ([OECD AI Policy Observatory]) Thus, accessibility is not simply about enabling “access”, but ensuring that access is meaningful and equitable in the face of algorithmic design.
Regulatory and Rights Frameworks
The UNCRPD is the foundational human-rights instrument relating to disability. As referenced, Article 9 requires accessibility of ICTs; Article 5 prohibits discrimination based on disability; and Article 4 calls on States Parties to adopt appropriate measures, including international cooperation. ([United Nations Documentation]) The UN Special Rapporteur on the rights of persons with disabilities has drawn attention to how AI systems can change the relationship between the State (or private actors) and persons with disabilities — especially where automated decision-making is used in recruitment, social protection or other services. ([UN Regional Information Centre]) States, therefore, have both a regulatory and oversight obligation to prevent algorithmic discrimination and to ensure that AI supports — not undermines — the rights of persons with disabilities.
The EU AI Act (which entered into force in August 2024) provides a risk-based regulatory approach for AI systems. Among its features: the prohibition of certain “unacceptable risk” AI practices (Article 5), obligations for high-risk AI systems (data governance, transparency, human oversight), and notable references to disability and accessibility. For example, Article 5(1)(b) prohibits AI systems that exploit the vulnerabilities of a natural person “due to their … disability”. ([Artificial Intelligence Act]) Further, Article 10(5) permits the processing of special categories of personal data (which may include disability status) where strictly necessary to detect and correct bias in high-risk systems. ([arXiv]) The EU thus offers a model of combining accessibility, non-discrimination and algorithmic oversight.
In India, the rights of persons with disabilities are codified in the Rights of Persons with Disabilities Act, 2016 (RPwD Act). While the Act does not explicitly focus on AI or algorithmic bias, it incorporates the UNCRPD framework (which India has ratified) and mandates non-discrimination, accessibility and equal opportunities. Thus, practitioners and policymakers in India ought to interpret emerging AI systems through the lens of the RPwD Act and UNCRPD obligations. Given India’s rapid digitisation (Government e-services, AI in welfare systems, biometric identification), the issue of algorithmic bias and accessibility is highly material for persons with disabilities in India.
How Algorithmic Bias Manifests in Accessibility Scenarios
AI tools for hiring (résumé screening, video interviews, psychometric testing) typically learn patterns from historical data. If these datasets reflect the historic exclusion of persons with disabilities, the algorithm learns that “disclosure of disability” or “use of assistive technology” correlates with lower success or is anomalous. As one report notes: “since historical data might show fewer hires of candidates who requested workplace accommodations … the system may interpret this as a negative indicator”. ([warden-ai.com]) In India, where data bias and limited disability representation in formal employment persist, recruitment AI may further entrench disadvantage unless corrected.
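The minimal Python sketch below illustrates one way such a disparity could be surfaced: comparing selection rates for applicants who disclosed a disability against other applicants and computing an adverse-impact ratio. All counts are invented, and the 0.8 (“four-fifths”) threshold is a heuristic drawn from US employment-testing practice, used here only as an illustration rather than as a legal standard in India or the EU.

```python
# Hypothetical disparate-impact check on screening outcomes; all figures are invented.
shortlisted = {"disclosed_disability": 6, "no_disclosure": 40}
applicants  = {"disclosed_disability": 50, "no_disclosure": 200}

rate_disabled = shortlisted["disclosed_disability"] / applicants["disclosed_disability"]  # 0.12
rate_others   = shortlisted["no_disclosure"] / applicants["no_disclosure"]                # 0.20

impact_ratio = rate_disabled / rate_others
print(f"Selection rate (disclosed disability): {rate_disabled:.0%}")
print(f"Selection rate (no disclosure):        {rate_others:.0%}")
print(f"Adverse-impact ratio: {impact_ratio:.2f}")  # 0.60 < 0.8 → flag for human review
```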
AI-powered assistive technologies offer major potential: speech-to-text, sign-language avatars, navigation aids, prosthetic-control systems. ([University College London]) Yet biases in training data or interface design can exclude users. For instance, gesture-recognition may be trained on normative movements, failing to recognise users with atypical mobility; speech-recognition may mis-transcribe persons with dysarthria or non-standard accents. Unless developers include diverse disability profiles in training and testing, the assistive tools themselves may become inaccessible or unreliable.
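As an illustration of what testing across diverse disability profiles can mean in practice, the sketch below computes word error rate (WER) separately for typical and dysarthric speech on an invented two-sentence test set; a real evaluation would of course require consented, representative audio and transcripts.

```python
# Sketch: comparing speech-recognition word error rate (WER) across speaker groups.
# Transcripts are invented; group labels and results are purely illustrative.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

test_set = {
    "typical_speech":    [("turn on the lights", "turn on the lights")],
    "dysarthric_speech": [("turn on the lights", "turn on light")],
}

for group, pairs in test_set.items():
    avg = sum(wer(r, h) for r, h in pairs) / len(pairs)
    print(f"{group}: WER = {avg:.0%}")  # a large gap between groups signals biased performance
```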
In welfare systems, AI may screen for eligibility, monitor benefits, or allocate resources. Persons with disabilities may be disadvantaged if systems assume normative behaviour or communication patterns. The UN Special Rapporteur warns that AI tools used by authorities may become gatekeepers in processes such as employment or social services. ([UN Regional Information Centre]) In India, where Aadhaar-linked digital services and automated verification proliferate, there is a risk that inaccessible interfaces or biased decision logic may deny or delay access for persons with disabilities.
The combination of AI and the built environment (smart navigation, accessibility scanners) holds promise. But algorithmic bias may intervene: for example, AI-derived route-planning may favour users who walk, not those who use wheelchairs; computer vision may mis-classify assistive devices; or voice-only interfaces may exclude users who communicate through sign language. Globally, a recent system called “Accessibility Scout” uses machine learning to identify accessibility concerns in built environments — but even such systems need disability-inclusive training data. ([arXiv]) In India’s rapidly urbanising spaces (metro stations, smart city initiatives), disabled users risk being excluded if AI-based navigation or environment-scanning tools are biased.
Persons with disabilities ought to know that algorithmic bias is not merely a technical issue but a rights issue. Under the UNCRPD and the RPwD Act, they are entitled to equality of access, participation and non-discrimination. If an AI system denies them a job interview, misinterprets their assistive communication or makes a decision that excludes them, the outcome may contravene those rights. In Europe, the EU AI Act recognises that vulnerability due to disability is a ground to prohibit certain AI practices (Article 5(1)(b)). ([Artificial Intelligence Act])
When AI systems are biased, accessibility suffers in concrete ways: exclusion, invisibility, mis-identification, denial of services, or reliance on assistive tools that do not work as intended. Consider a recruitment AI that fails to recognise alternative speech patterns, or a building-navigation AI that does not plan routes for wheelchair users. These are not hypothetical — the digital divide is already severe for persons with disabilities. ([TPGi — a Vispero company])
Disabled people and their representative organisations must insist on participation in the design, development and governance of AI systems. The UN Special Rapporteur emphasises that persons with disabilities are rarely involved in developing AI, thereby increasing the risk of exclusion. ([UN Regional Information Centre]) Participation ensures lived experience informs design, testing and deployment — thereby reducing bias and strengthening accessibility.
Many AI systems operate as opaque “black boxes” with little transparency. Persons with disabilities ought to know their rights: for example, the right to explanation (in some jurisdictions) and the right to challenge automated decisions. Whilst India’s specific jurisprudence in this regard is emerging, the EU regime provides a model: for high-risk systems, transparency, human-in-the-loop oversight, and documentation obligations apply. ([arXiv]) This awareness helps disabled persons, advocates and lawyers to ask relevant questions of system-owners and policymakers.
Ensure that training datasets include persons with disabilities, assistive-technology users, alternative communication modes and diverse disability profiles.
Conduct bias-audits specifically for disability as a protected characteristic: for example, disaggregated outcome analysis of how persons with disabilities fare vis-à-vis non-disabled persons. ([FRA])
Where sensitive data (including disability status) is needed to assess bias, ensure privacy, consent and safeguards (cf. Article 10(5) of EU AI Act). ([arXiv])
Design AI systems with accessibility from the outset: include persons with disabilities in usability testing, interface design and deployment scenarios.
For assistive-technology applications, ensure testing across mobility, sensory, cognitive and communication impairment types.
Ensure that the human-machine interface does not assume normative speech, movement or interaction styles.
Developers should publish documentation of system design, training-data summary, performance across disability groups and mitigation of bias (a minimal illustrative sketch of such documentation follows this list).
Deployers of high-risk AI systems should integrate human-in-the-loop oversight and allow for meaningful human review of adverse outcomes.
Disability rights organisations should demand audit reports, accessible documentation and pathways for complaint or redress.
In India, policymakers should align AI regulation with disability rights frameworks (RPwD Act, UNCRPD) and mandate accessibility audits for AI systems deployed in public services.
Regulatory bodies (such as data-protection authorities or disability rights commissions) must include algorithmic bias and accessibility in their oversight remit.
As in the EU model, a risk-based classification of AI systems (unacceptable, high, limited, minimal risk) may help India and other jurisdictions frame governance. ([OECD AI Policy Observatory])
Disabled persons’ organisations (DPOs) in India and elsewhere should develop technical literacy about AI, algorithmic bias and accessibility implications.
Training modules should be developed for developers, designers and policymakers on disability-inclusive AI.
Collaborative platforms between academia, industry, government and DPOs are needed to research disabled-user-specific AI bias in Indian contexts.
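As a concrete, purely hypothetical illustration of the documentation recommendation above, the Python sketch below shows a minimal “model card”-style record that a deployer might publish for a screening tool. The field names, figures and contact address are invented, not a mandated format.

```python
# Hypothetical minimal documentation ("model card" style) for an AI screening tool.
# All values are illustrative; a real record would reflect an actual bias audit.
model_card = {
    "system": "CV screening assistant (illustrative)",
    "intended_use": "Ranking applications for human review; not for automated rejection",
    "training_data_summary": {
        "sources": ["historical applications 2018-2023 (anonymised)"],
        "known_gaps": ["few applicants disclosing disability", "no assistive-technology usage data"],
    },
    "performance_by_group": {
        # Disaggregated metrics from a disability-specific bias audit.
        "disclosed_disability": {"selection_rate": 0.12, "false_negative_rate": 0.31},
        "no_disclosure": {"selection_rate": 0.20, "false_negative_rate": 0.18},
    },
    "mitigations": ["re-weighted training data", "human review of all rejections"],
    "oversight_contact": "accessibility-audit@example.org",  # hypothetical address
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```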
India’s digital ecosystem is expanding rapidly: e-governance portals, biometric identification (e.g., Aadhaar), AI-driven services for health, education and welfare, and smart-city initiatives. Given this expansion, algorithmic bias poses a heightened risk for persons with disabilities in India.
Firstly, data gaps in India regarding disability are well documented: persons with disabilities are under-represented in formal employment, excluded from many surveys, and often not visible in “mainstream” datasets. Thus, AI systems trained on such data may systematically overlook or misclassify persons with disabilities.
Secondly, accessibility in India remains a significant challenge: although the RPwD Act mandates accessibility of ICT and built environments, the practice is uneven. When AI systems become mediators of access (for example, e-service portals, automated benefit systems, recruitment platforms), any bias in design or data may compound the existing social exclusion of persons with disabilities.
Thirdly, India’s approach to inclusive AI policy remains nascent. Unlike the EU, India does not yet have a comprehensive AI-regulation scheme that explicitly addresses disability bias or treats accessibility-critical AI as high-risk. Therefore, advocates and policymakers in India ought to press for regulatory clarity, accessibility audits and inclusive design in all AI deployment — especially where the State is involved.
Finally, the Indian disability-rights movement must engage actively with AI governance: ensuring that persons with disabilities have a voice in design, procurement, deployment and oversight of AI systems in India. Without such engagement, AI may become a new vector of exclusion rather than a facilitator of independence and participation.
The promise of artificial intelligence to enhance accessibility for persons with disabilities is real: from speech-recognition and navigation aids to employment-matching and inclusive education. Yet, without careful attention to algorithmic bias, accessibility may remain aspirational rather than realised. Algorithmic bias shapes accessibility when AI systems misrecognise, exclude, misclassify or disadvantage persons with disabilities — and this effect is a human-rights concern under the UNCRPD, the RPwD Act and emerging regulatory frameworks such as the EU AI Act.
Disabled persons and their organisations ought to understand that algorithmic bias is not abstract but concrete in accessibility terms. They need to engage, insist on inclusive data, demand transparency, participate in system design and seek accountability. Policymakers and AI developers must embed accessibility by design, integrate disability-inclusive datasets, monitor outcomes by disability status, and adopt governance mechanisms that guard against unfair exclusion.
In India, where digital transformation is swift and disability inclusion remains a critical challenge, the stakes are high. AI systems will increasingly mediate how persons with disabilities access jobs, services, information and public spaces. Without proactive safeguards, algorithmic bias may reinforce existing barriers. But with rights-based regulation, inclusive design and meaningful participation of persons with disabilities, AI can become a powerful tool for accessibility rather than an additional barrier.
In short, accessibility and algorithmic fairness must move together. AI systems may be powerful, but it is human judgment, oversight and commitment to inclusion that will determine whether persons with disabilities benefit — or are further marginalised. Writers, policymakers, developers and advocates alike must recognise this intersection and act accordingly.