Abstract
Artificial Intelligence systems, increasingly deployed across healthcare, employment, and education, encode and amplify technoableism—the ideology that frames disability as a problem requiring technological elimination rather than a matter of civil rights. This article maps how ableist assumptions travel through the AI development pipeline, transforming systemic prejudice into automated exclusion. Drawing upon disability studies scholarship, empirical research on algorithmic bias, and the legal frameworks established under India's Rights of Persons with Disabilities Act 2016 and the United Nations Convention on the Rights of Persons with Disabilities, this investigation demonstrates that bias in AI is not merely technical error but ideological infrastructure. Each stage of the pipeline—from data collection to model evaluation—translates assumptions of normative ability into measurable harm: voice recognition systems fail users with speech disabilities, hiring algorithms discriminate against disabled candidates, and large language models reproduce cultural ableism. Addressing these failures requires not technical debugging alone but structural transformation: mandatory accessibility standards, disability-led participatory design, equity-based evaluation frameworks, and regulatory alignment with the Rajive Raturi Supreme Court judgment, which established accessibility as an ex-ante duty and fundamental right rather than discretionary accommodation.
Section I: The Ideological Architecture of Digital Exclusion
The integration of Artificial Intelligence into core societal systems—healthcare, hiring, education, and governance—demands rigorous examination of the ideologies governing its design. Bias in AI is not an incidental technical glitch but a societal failure rooted in entrenched prejudices. For persons with disabilities, these biases stem from an ideology termed technoableism, which translates historical and systemic ableism into algorithmic exclusion. Understanding this ideological foundation is essential to addressing structural inequities embedded across the AI development lifecycle.
1.1 Defining Ableism in the Digital Age: From Social Model to Algorithmic Harm
Ableism constitutes discrimination that favours non-disabled persons and operates systematically against disabled persons. This bias structures societal expectations regarding what constitutes "proper" functioning of bodies and minds, profoundly shaping technological imagination—the conceptual limits and objectives established during the design process. Consequently, infrastructure surrounding us, from physical environments to digital systems, reflects assumptions of normative ability, determining what is built and who is expected to benefit.
The critique of this system is articulated through frameworks such as crip technoscience, which consciously integrates Critical Disability Studies with Science and Technology Studies. This framework envisions a world wherein disabled persons are recognised as experts regarding their experiences, their bodies, and the material contexts of their lives. Such academic approaches are indispensable for moving beyond medicalised, deficit-based understandings of disability towards recognising systemic, infrastructure-based failures.
1.2 The Core Tenets of Technoableism: Technology as Elimination, Not Empowerment
Technoableism represents a specific, contemporary manifestation of ableism centred on technology. It operates upon the flawed premise that disability is inherently a problem requiring solution, and that emerging technology constitutes the optimal—if not sole—remedy. This perspective embraces technological power to the extent that it considers elimination of disability a moral good towards which society ought to strive.
This ideology aligns closely with technosolutionism, the pervasive tendency to believe that complex social or structural problems can be resolved neatly through technological tools. When applied to disability, this logic reframes disability not as a matter of civil rights or human diversity but as a technical defect awaiting correction. This mindset leads designers to approach disability from a deficit perspective, frequently developing and "throwing technologies at perceived 'problems'" without consulting the affected community. Examples include sophisticated, high-technology ankle prosthetics that prove excessively heavy for certain users, or complex AI-powered live captioning systems that d/Deaf and hard-of-hearing communities never explicitly requested.
A defining feature of technoableism is its frequent presentation "under the guise of empowerment". Technologies are marketed as tools of liberation or assistance, yet their underlying design reinforces normative biases. This rhetorical strategy renders technological solutions benevolent in appearance whilst simultaneously restricting the self-defined needs and agency of disabled individuals. When end-users fail to adopt these unsolicited solutions, developers habitually attribute the failure to users' lack of compliance or inability, rather than interrogating the flawed, deficit-based premise of the technology itself.
Consequently, if technology's ultimate purpose is defined—implicitly or explicitly—as solving or eliminating disability, then any disabled person whose condition resists neat technological resolution becomes an undesirable system anomaly. This ideological premise grants developers a form of moral licence to exclude non-normative data during development, rationalising the failure to accommodate as a functional requirement necessary for the system's "proper" operation.
Section II: Encoding Ableism: Technoableism Across the AI Bias Pipeline
The transformation of technoableist ideology into measurable, systemic bias occurs along the standard AI development lifecycle, commonly termed the Bias Pipeline. At each stage—from initial data selection to final model evaluation—assumptions of normative ability are translated into computational limitations, producing predictable patterns of exclusion.
2.1 The Architecture of Exclusion: Inheriting Historical Bias
The foundational issue in AI development lies in Assumptions of Normalcy. Technological advancements throughout history, from Industrial Revolution machinery to early computing interfaces, have consistently prioritised the needs and experiences of able-bodied users. This historical context ensures that AI development inherits Historical Bias. This design bias is pervasive, frequently unconscious, and centres the able-bodied user as the "default".
This centring produces the One-Size-Fits-All Fallacy, wherein developers create products lacking the flexibility and customisable options necessary to accommodate diverse human abilities and preferences. Designing standard keyboards without considering individuals with limited dexterity exemplifies this bias.
2.2 Stage 1: Data Collection and Selection Bias
Bias manifests most overtly at the data collection stage. If the data employed to train an AI algorithm is not diverse or representative of real-world populations, the resulting outputs will inevitably reflect the biases embedded in that data. In the disability context, this manifests as the profound exclusion of non-normative inputs.
AI models are trained on large pre-existing datasets that statistically emphasise the majority—the normative population. Data required for systems to recognise or translate inputs from disabled individuals is therefore frequently statistically "outlying". A primary illustration is the performance failure of voice recognition software. These systems routinely struggle to process speech disorders because training data lacks sufficient input from populations with conditions such as amyotrophic lateral sclerosis, cerebral palsy, or other speech impairments. This deliberate or accidental omission of diverse inputs constitutes textbook Selection Bias.
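To make this failure measurable, the minimal sketch below disaggregates word error rate by speaker group on a toy evaluation set. The group labels, field names, transcripts, and records are hypothetical assumptions for illustration; a real audit would use a consented, representative corpus.

```python
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)

def per_group_wer(records: list[dict]) -> dict[str, float]:
    """Average WER per speaker group; a single pooled figure hides the gap."""
    totals, counts = defaultdict(float), defaultdict(int)
    for r in records:
        totals[r["group"]] += word_error_rate(r["reference"], r["hypothesis"])
        counts[r["group"]] += 1
    return {group: totals[group] / counts[group] for group in totals}

# Hypothetical evaluation records: groups, transcripts, and outputs are illustrative only.
sample = [
    {"group": "typical speech", "reference": "turn on the lights",
     "hypothesis": "turn on the lights"},
    {"group": "dysarthric speech", "reference": "turn on the lights",
     "hypothesis": "turn of the nights"},
]
print(per_group_wer(sample))
# A pooled average would report a modest error rate while one group
# experiences a far higher failure rate.
```

The point of the sketch is not the metric itself but the disaggregation: selection bias only becomes visible when performance is reported per group rather than averaged over the normative majority.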
2.3 Stage 2: Data Labelling and Measurement Bias
As datasets are curated and labelled, human subjectivity—or cognitive bias—can permeate the system. This stage is where the ideological requirement for speed and efficiency, deeply embedded in technoableist culture, is encoded as a technical constraint.
A particularly harmful example of this systemic ableism is observed in digital employment platforms. Certain systems reject disabled digital workers, such as those engaged on platforms like Amazon Mechanical Turk, because their work speed is judged "below average". Speed is frequently employed as a metric to filter spammers or low-quality workers, but in this context it becomes a discriminatory measure. This failure demonstrates Measurement Bias, wherein performance metrics systematically undervalue contributions falling outside arbitrary, non-disabled performance standards.
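A toy illustration of this measurement bias follows. The worker records, accuracy figures, and "below-average speed" cut-off are entirely hypothetical, not drawn from any real platform, but they show how a speed-only filter rejects accurate disabled workers wholesale.

```python
# Hypothetical worker records and cut-off; no real platform data is used.
SPEED_CUTOFF_SECONDS = 90  # "below-average speed" filter

workers = [
    {"id": "w1", "disabled": False, "avg_task_seconds": 62,  "accuracy": 0.91},
    {"id": "w2", "disabled": False, "avg_task_seconds": 75,  "accuracy": 0.90},
    {"id": "w3", "disabled": True,  "avg_task_seconds": 118, "accuracy": 0.98},
    {"id": "w4", "disabled": True,  "avg_task_seconds": 101, "accuracy": 0.95},
]

def rejection_rate(group: list[dict]) -> float:
    """Share of a group filtered out purely on speed."""
    rejected = [w for w in group if w["avg_task_seconds"] > SPEED_CUTOFF_SECONDS]
    return len(rejected) / len(group)

disabled = [w for w in workers if w["disabled"]]
non_disabled = [w for w in workers if not w["disabled"]]

# The filter never inspects accuracy, so the slower but more accurate workers
# are excluded: speed stands in for "quality" and becomes the discriminatory measure.
print("rejection rate, disabled workers:    ", rejection_rate(disabled))
print("rejection rate, non-disabled workers:", rejection_rate(non_disabled))
```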
The resources required to build and maintain AI systems at scale contribute significantly to this exclusion. Integrating highly specialised, diverse data—such as thousands of voice recordings representing the full spectrum of speech disorders—is substantially more resource-intensive than training on statistically homogenous datasets. Consequently, Selection Bias is frequently driven by economic calculation, prioritising profitable, normative user bases and thereby financially justifying the marginalisation of smaller, diverse populations.
2.4 Stage 3 and 4: Model Training, Evaluation, and Stereotyping Bias
Once trained on imbalanced, non-representative data, AI models exhibit Confirmation Bias, reinforcing historical prejudices by over-relying on established, ableist patterns present in input data. Furthermore, biases can emerge even when models appear unbiased during training, particularly when deployed in complex real-world applications.
The final pipeline stage, model evaluation, is itself susceptible to Evaluation Bias. Benchmarks employed to test performance and "fairness" frequently contribute to bias because they fail to capture the nuances of disability. Current methodologies are incomplete, often focusing exclusively on explicit forms of bias or narrow, specific disability groups, thereby failing to assess the full spectrum of subtle algorithmic harm. This evaluation deficit leads to Out-Group Homogeneity Bias, causing AI systems to generalise individuals from underrepresented disability communities, treating them as more similar than they are and failing to recognise the intersectionality and diversity of disabled experiences.
This systemic failure to account for human variation highlights how ableism functions as an intersectional multiplier of harm. Commercial facial recognition systems, for instance, have error rates as low as 0.8 per cent for light-skinned males, yet these rates soar to 34.7 per cent for dark-skinned females. When disability is added to this equation, data deficits compound exclusion, leading to disproportionately higher failure rates for individuals with multiple marginalised identities, as alleged in the Workday lawsuit regarding discrimination based on disability, age, and race.
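One way to surface this compounding is to report error rates for every intersection of attributes rather than a single pooled figure, as in the sketch below. The records, attribute names, and labels are hypothetical; a real audit would rely on consented, self-identified attributes and ground-truth labels.

```python
from collections import defaultdict

def error_rates_by_subgroup(records: list[dict], attributes: list[str]) -> dict:
    """Error rate for every observed combination of the listed attributes."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        key = tuple(r[a] for a in attributes)
        totals[key] += 1
        errors[key] += int(r["prediction"] != r["label"])
    return {key: errors[key] / totals[key] for key in totals}

# Hypothetical audit records for illustration only.
records = [
    {"skin_tone": "lighter", "gender": "male",   "disability": "none",
     "prediction": 1, "label": 1},
    {"skin_tone": "darker",  "gender": "female", "disability": "none",
     "prediction": 0, "label": 1},
    {"skin_tone": "darker",  "gender": "female", "disability": "facial difference",
     "prediction": 0, "label": 1},
]

for subgroup, rate in error_rates_by_subgroup(
        records, ["skin_tone", "gender", "disability"]).items():
    print(subgroup, f"error rate: {rate:.2f}")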
The following table summarises how technoableist ideology translates into specific algorithmic errors across the development process:
| AI Pipeline Stage | Technoableist Assumption/Ideology | Resulting AI Bias Type | Example of Exclusionary Impact |
|---|---|---|---|
| Data Collection | The "ideal user" has standardised, normative physical and cognitive inputs. | Selection Bias/Historical Bias | Voice datasets exclude speech disorders; computer vision data lacks atypical bodies. |
| Data Labelling/Metrics | Efficiency, speed, and standard output quality are universally valued. | Measurement Bias/Human Decision Bias | Hiring systems reject candidates whose speed is below average; annotators inject stereotypical labels. |
| Model Training/Output | Optimal performance is achieved by minimising deviation from the norm. | Confirmation Bias/Stereotyping Bias | Large language models reproduce culturally biased and judgemental assumptions about disability. |
Section III: Manifestation of Bias: Documenting Algorithmic Ableism
The biases encoded in the AI pipeline produce tangible harms for persons with disabilities in everyday digital interactions. These manifestations illustrate how technoableism moves beyond abstract theory to create concrete systemic barriers.
3.1 The Voice Recognition Failure: Algorithmic Erasure of Non-Normative Speech
Perhaps the most salient failure of ableist design is the performance of Automated Speech Recognition (ASR) technologies. ASR systems routinely struggle to recognise the voices of persons with conditions such as amyotrophic lateral sclerosis or cerebral palsy. For users who rely upon voice commands for digital interaction, mobility control, or communication, this failure amounts to complete exclusion from the technological sphere.
Whilst machine learning algorithms have demonstrated high accuracy in detecting the presence of voice disorders in research settings, none have achieved sufficient reliability for robust clinical deployment. This discrepancy arises because research frequently lacks standardised acoustic features and processing algorithms and, critically, because the datasets employed do not generalise to the target populations.
This technological failure is a direct consequence of Selection Bias and reinforces profound systemic harm: the technology, ostensibly designed to assist, refuses to acknowledge the user. This algorithmic denial of agency transforms users into objects of data analysis—voice pathologies to be studied—rather than subjects of digital interaction, reinforcing the technoableist view of disabled bodies as inherently flawed and outside the system's operational boundaries.
3.2 Computer Vision and the Perpetuation of Stereotypes
Computer vision systems and generative AI models routinely fail disabled users by reinforcing existing stereotypes and failing to recognise atypical visual inputs. Research indicates that cognitive differences, such as those associated with autism, may involve reduced recognition of perceptually homogenous objects, including faces. AI models trained on normative facial recognition datasets reflect and sometimes exacerbate these difficulties for individuals with atypical facial features or expressions.
Furthermore, generative AI systems—text-to-image or large language models—perpetuate harmful social tropes. Outputs from these systems frequently depict disabled persons with stereotypical accessories (for instance, blind persons shown exclusively wearing dark glasses) or inaccurately portray accessible technologies in unrealistic manners. These systematic biases restrict how persons with disabilities are visually and textually represented in the digital sphere, preventing nuanced understanding of disabled life and reinforcing societal pressure for disability to conform to limited, stereotypical visual signifiers.
3.3 Natural Language Processing and Cognitive Bias in Large Language Models
Natural Language Processing algorithms, which power smart assistants and autocorrect systems, harbour significant implicit bias against persons with disabilities. Researchers have found these biases pervasive across highly utilised, public pretrained language models.
When asked to explain concepts related to disability, large language models frequently provide output that is clinical, judgemental, and founded upon underlying assumptions, rather than offering educational or supportive explanations. This judgemental tone further restricts digital agency, treating users as pathological entities rather than knowledgeable participants.
Moreover, these biases are highly sensitive to cultural context. Studies on Indian language models demonstrated that these models consistently underrated harm caused by ableist statements. By reproducing local cultural biases—such as tolerance for comments linking weight loss to resolution of pain and weakness—the systems misinterpret and overlook genuinely ableist comments. This lack of cross-cultural understanding and contextual nuance demonstrates a fundamental failure of generalisation in AI and a willingness to integrate and scale pre-existing cultural prejudices.
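A minimal sketch of how such a cultural gap might be quantified appears below. It compares hypothetical model harm ratings with hypothetical community annotations per cultural context; the 1-5 scale, contexts, and scores are assumptions for illustration and do not reproduce the methodology of the cited audits.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical ratings on a 1-5 harm scale; a negative gap means the model
# rated a statement as less harmful than disabled annotators from that
# cultural context did.
annotations = [
    {"context": "Indic",   "model_score": 2, "community_score": 4},
    {"context": "Indic",   "model_score": 1, "community_score": 4},
    {"context": "Western", "model_score": 5, "community_score": 3},
    {"context": "Western", "model_score": 4, "community_score": 3},
]

gaps: dict[str, list[int]] = defaultdict(list)
for a in annotations:
    gaps[a["context"]].append(a["model_score"] - a["community_score"])

for context, values in gaps.items():
    direction = "underrates" if mean(values) < 0 else "overrates"
    print(f"{context}: mean gap {mean(values):+.2f} (model {direction} harm)")
```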
The collective failure of biased machine learning algorithms to operate reliably in clinical or educational settings carries profound risk. If these flawed models are deployed in high-stakes environments—such as healthcare diagnostics or educational tutoring systems—systemic biases from training data will directly compromise equity, potentially leading to inaccurate medical evaluations or inadequate educational support for disabled patients and students whose data points were overlooked or excluded.
3.4 High-Stakes Discrimination: Employment and Digital Mobility
The deployment of biased AI directly facilitates socioeconomic marginalisation. AI applicant screening systems have been subject to lawsuits alleging discrimination based on disability alongside race and age, demonstrating how automated systems function as gatekeepers to employment opportunity.
Beyond formal hiring, digital labour platforms actively exclude disabled users. As previously noted, the rejection of disabled clickworkers because their performance falls outside normative speed metrics reveals a crucial systemic problem. When platforms impose rigid speed thresholds, AI enforces competitive, ableist standards of productivity, creating direct economic marginalisation and barring disabled persons from participating fully in the digital economy.
Section IV: Pathways to Equitable AI: Centring Disability Expertise
To move beyond the limitations of technoableism, AI development must undergo fundamental ideological and methodological transformation, prioritising disability expertise, participatory governance, and equity-based standards.
4.1 The Paradigm Shift: From Deficit-Based to Asset-Based Design
The core of technoableism is the deficit-based approach, which frames disability as a flaw requiring correction. Mitigating this requires a complete shift towards asset-based design, wherein technology is developed not to eliminate disability but to enhance capability and inclusion.
This approach mandates recognising that persons with disabilities possess unique, frequently ignored expertise regarding technological interactions and system failures. By prioritising these strengths and lived experiences, developers can create technologies that are genuinely useful and non-technoableist by design. The design process must acknowledge that technology's failure to accommodate a user constitutes failure of the design itself, not failure of the user's body or mind.
4.2 Participatory Design and Governance: The Mandate of "Nothing About Us Without Us"
The fundamental guiding principle for ethical and accessible technology development must be "Nothing About Us Without Us". This commitment requires that disabled community members be included as active partners and decision-makers at every stage of the development process—from initial conceptualisation to final testing and deployment. Development must be premised on interdependence, rejecting the technoableist ideal of total individual technological independence in favour of systems that value mutual support and varied needs.
Inclusion efforts must extend beyond user experience research aimed at maximising competitive advantage. They require maintaining transparency and building genuine trust with the community. Accessibility must be built in as a default design principle, rather than treated as a remedial, post-hoc checklist requirement for regulatory compliance.
4.3 Standardising Inclusion: Integrating Universal Design and Web Content Accessibility Guidelines Principles
To codify these ethical commitments, AI systems must adhere to rigorous, internationally recognised accessibility standards. The Web Content Accessibility Guidelines 2.2 provide an essential technical baseline for AI development. WCAG structures accessibility around four core principles (a contrast-ratio sketch follows the list), ensuring that AI content and interfaces are:
1. Perceivable: Information must be presentable in ways all users can perceive, requiring features such as alternative text, captions, and proper colour contrast.
2. Operable: Interface components must be navigable and usable, benefitting users who rely upon keyboard navigation, voice control, or switch devices.
3. Understandable: Information and operation must be comprehensible, mitigating cognitive load through simple, clear language and predictable behaviour.
4. Robust: Content must be interpretable by various user agents and assistive technologies as technology advances, ensuring long-term usability.
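As an illustration of the first principle, the sketch below implements the WCAG 2.x relative-luminance and contrast-ratio formulas; the example colours are arbitrary, and the 4.5:1 threshold corresponds to level AA for normal-size text.

```python
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB colour, per the WCAG 2.x definition."""
    def linearise(channel: int) -> float:
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """(L_lighter + 0.05) / (L_darker + 0.05), as defined by WCAG."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)),
                             reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Dark grey on white comfortably clears the 4.5:1 AA threshold for
# normal-size text; light grey on white does not.
for foreground in [(51, 51, 51), (200, 200, 200)]:
    ratio = contrast_ratio(foreground, (255, 255, 255))
    verdict = "passes AA" if ratio >= 4.5 else "fails AA"
    print(foreground, f"{ratio:.2f}:1", verdict)
```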
Complementing WCAG are the seven principles of Universal Design, which offer a broader, holistic framework. Principles such as Equitable Use (designs helpful for diverse abilities) and Tolerance for Error (minimising hazards and adverse consequences) ensure that AI systems accommodate wide ranges of individual preferences and abilities.
Whilst technical standards such as WCAG are vital, progression towards equity requires adoption of equity-based accessibility standards. These standards move beyond technical compliance to actively recognise intersectionality and expertise. This is critical because failure rates are higher for multiply marginalised users. An ethical design strategy must mandate measuring not merely whether technology is accessible, but how equitably it performs across diverse user groups—for instance, measuring accuracy of speech recognition systems for non-normative voices speaking marginalised dialects.
This pursuit of equitable performance requires fundamental re-evaluation of performance metrics. Traditional metrics, such as generalised accuracy or average speed, are inherently biased towards normative performance. New frameworks, such as AccessEval, are necessary to systematically assess disability bias in large language models and other AI systems. These evaluation systems must prioritise measuring absence of social harm and equitable functioning across diverse user groups, rather than optimising marginal gains in generalised population efficiency.
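As a sketch of what such an equity-centred metric might look like (in the spirit of, but not implementing, benchmarks such as AccessEval), the example below computes the mean score gap between matched neutral and disability-framed versions of the same query. The queries, scores, and scoring method are hypothetical assumptions.

```python
from statistics import mean

# Hypothetical paired scores in [0, 1] from any downstream quality or harm
# metric; not the AccessEval protocol itself.
paired_scores = [
    {"query_id": "q1", "neutral": 0.92, "disability_framed": 0.71},
    {"query_id": "q2", "neutral": 0.88, "disability_framed": 0.85},
    {"query_id": "q3", "neutral": 0.95, "disability_framed": 0.60},
]

def framing_gap(records: list[dict]) -> float:
    """Mean drop in score when the same query mentions disability.
    Zero indicates parity; large positive values flag inequitable behaviour
    that a single pooled accuracy figure would never surface."""
    return mean(r["neutral"] - r["disability_framed"] for r in records)

print(f"mean framing gap: {framing_gap(paired_scores):.3f}")
```

The design choice here is that the optimisation target is a gap to be driven towards zero, rather than an average to be maximised over a predominantly normative test population.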
The following table summarises how established design frameworks apply to ethical AI development:
| Framework | Principle | Relevance to AI Ethics and Bias Mitigation |
|---|---|---|
| Universal Design | Equitable Use | Ensuring AI benefits diverse abilities and does not exclude or stigmatise any user group. |
| Universal Design | Flexibility | Accommodating user preferences by offering customisable AI interaction methods (for example, input/output modalities). |
| WCAG 2.2 | Perceivable | Guaranteeing AI outputs (for example, data visualisations, text, audio) can be consumed by all users, including through screen readers and captions. |
| WCAG 2.2 | Operable | Ensuring control mechanisms (for example, prompts, interfaces) can be reliably navigated using keyboard, voice, or switch inputs. |
| WCAG 2.2 | Understandable | Designing AI behaviour and outputs to be comprehensible, simple, and clear, mitigating cognitive bias and confusion. |
| WCAG 2.2 | Robust | Building systems compatible with existing and future assistive technologies, ensuring long-term accessibility and preventing technological obsolescence as a barrier. |
Section V: Conclusion and Recommendations for an Accessible Future
5.1 The Ethical Imperative: Recognising Technoableism as Structural Policy Failure
The analysis demonstrates unequivocally that bias in AI is the scaled, automated extension of technoableism. This pervasive ideology institutionalises the historical exclusion of disabled persons by embedding normative assumptions into the computational mechanisms of the AI pipeline. The resultant harms—from voice recognition failures to algorithmic hiring discrimination and the propagation of stereotypes—are systematic, not incidental. Addressing this issue demands more than technical debugging; it requires a confrontational re-evaluation of the foundational ideologies governing design.
In the Indian context, this requirement takes on constitutional urgency. The Supreme Court's landmark judgment in Rajive Raturi v. Union of India established accessibility as an ex-ante duty and fundamental right, holding that Rule 15 of the Rights of Persons with Disabilities Rules 2017 was ultra vires the parent Act because it provided only aspirational guidelines rather than enforceable standards. The Court directed the Union Government to frame mandatory accessibility rules within three months, stating unequivocally that "accessibility is not merely a convenience, but a fundamental requirement for enabling individuals, particularly those with disabilities, to exercise their rights fully and equally".
This judgment must serve as the foundation for India's AI governance framework. If mandatory standards are now required even for physical infrastructure, it is untenable that AI systems—which increasingly mediate access to education, employment, healthcare, welfare, and civic participation—remain governed by existing, non-specific laws. As the Raturi Court observed, accessibility requires a two-pronged approach: retrofitting existing institutions whilst transforming new infrastructure and future initiatives. AI governance must adopt precisely this logic.
Ultimately, true inclusion requires commitment to systemic change, replacing technoableist fixation on technological independence with principle of human interdependence as core foundation of design.
5.2 Policy, Practice, and Research Recommendations
Based on these systemic failures and the necessity of a paradigm shift towards asset-based, participatory design, the following recommendations are essential for achieving equitable AI development:
1. Policy Mandates for Data Equity and Validation:
Regulatory bodies must mandate comprehensive data collection protocols, specifically requiring the inclusion of non-normative inputs and validation data from the full spectrum of disability communities. This includes requiring highly specialised, diverse validation sets for systems such as ASR to ensure reliability in high-stakes clinical and professional environments. In light of the Raturi judgment, these mandates must be framed not as aspirational guidelines but as enforceable minimum standards.
2. Regulatory Oversight and Mandatory Impact Assessments:
Governments and regulatory bodies must institute mandatory, independent accessibility and bias audits for all high-stakes AI systems (for example, those employed in hiring, housing, healthcare, and education). These audits must be conducted by disabled experts and ensure adherence to WCAG and Universal Design principles throughout the entire development lifecycle, thereby enforcing the "Nothing About Us Without Us" principle. The European Union's Artificial Intelligence Act 2024 provides a model: Article 5(1)(b) prohibits AI systems that exploit disability-related vulnerabilities, whilst Article 16(l) mandates that all high-risk AI systems must comply with accessibility standards by design.
3. Adoption of Equitable Evaluation Metrics:
Developers and auditors must move beyond traditional accuracy and efficiency metrics, which favour normative performance. New frameworks such as AccessEval must be integrated to systematically measure social harm, stereotype reproduction, and equitable functioning of AI across diverse and intersectional user groups. The objective of optimisation must shift from maximising speed to minimising exclusion.
4. Incentivising Asset-Based Participatory Design:
Public and private funding mechanisms ought to be structured to prioritise and financially reward technology development that adheres to genuinely participatory methods. By recognising disabled persons as experts whose unique knowledge accelerates innovation and identifies design failures early, development efforts can move away from unsolicited, deficit-based solutions and build truly inclusive technologies from the ground up.
5. Alignment with Constitutional Mandates:
India's AI governance framework must explicitly align with the Rights of Persons with Disabilities Act 2016, the United Nations Convention on the Rights of Persons with Disabilities, and the Rajive Raturi judgment. NITI Aayog's AI strategy documents must incorporate mandatory accessibility provisions rather than treating disability inclusion as a sectoral afterthought. As the Raturi Court emphasised, the State's accessibility duty is ex-ante and proactive, not dependent upon individual requests. AI policy must embed this principle from inception.
6. Cross-Cultural Competence in AI Systems:
Research demonstrates that AI models fail to recognise ableism across cultural contexts, with Western models overestimating harm and Indic models underestimating it. Indian AI governance must mandate cultural competence testing for systems deployed in India, ensuring that models understand how ableism manifests within Indian social structures, including intersections with caste, gender, and class. Training datasets must include representation from Indian disabled communities, and evaluation frameworks must account for culturally specific manifestations of bias.
The conversation about AI in India cannot proceed as though disability is a niche concern or an optional consideration. With 2.74 crore Indians with disabilities—comprising diverse impairment categories across urban and rural contexts, across caste and class divides—the deployment of biased AI systems will entrench existing inequalities at unprecedented scale. The Raturi judgment has established the floor; AI policy must now build the ceiling. Accessibility here is not an afterthought; it is integral architecture. When disability leads, AI learns to listen.
References
- Bias in AI. Chapman University. Accessed November 18, 2025. https://www.chapman.edu/ai/bias-in-ai.aspx
- Ask Disabled People What They Want. It's Not Always Technology. Shew, A. Science Friday, October 2, 2023. https://www.sciencefriday.com/articles/against-technoableism-excerpt/
- Shew, A. (2020). Ableism, Technoableism, and Future AI. IEEE Technology and Society Magazine, 39(1), 40-85. IEEE Xplore: http://ieeexplore.ieee.org/document/9035527; NSF Public Access Repository: https://par.nsf.gov/biblio/10165545
- Bias in AI: Examples and 6 Ways to Fix it. AIMultiple Research. Updated November 4, 2025. https://research.aimultiple.com/ai-bias/
- Whittaker, M., Alper, M., Bennett, C.L., Hendren, S., Kaziunas, L., Mills, M., Morris, M.R., Rankin, J.L., Rogers, E., Salas, M., and West, S.M. (2019). Disability, Bias, and AI. AI Now Institute. https://ainowinstitute.org/disabilitybiasai-2019.pdf
- Panda, S., Agarwal, A., and Patel, H.L. (2025). AccessEval: Benchmarking Disability Bias in Large Language Models. Proceedings of EMNLP 2025. ACL Anthology: https://aclanthology.org/2025.emnlp-main.1937/; arXiv preprint arXiv:2509.22703: https://arxiv.org/pdf/2509.22703
- Rajive Raturi v. Union of India and Others, Supreme Court of India, Writ Petition (Civil) No. 4/2005, Judgment dated November 8, 2024. Reported in Disability Rights India. https://disabilityrightsindia.com/supreme-court-in-rajive-raturi-case/
- Supreme Court in Rajive Raturi case holds the recommendatory nature of Sectoral Accessibility Guidelines under Rule 15 as ultra vires the RPWD Act, 2016. Vision IAS, November 10, 2024. https://visionias.in/supreme-court-upholds-accessibility-for-pwds-as-a-human-right/
- AI models often fail to identify ableism across cultures. Cornell Data Science Center, October 14, 2025. https://datasciencecenter.cornell.edu/news/ai-models-often-fail-to-identify-ableism-across-cultures
- WCAG Overview. UK Government Accessibility and Inclusive Design Manual, July 8, 2025. https://accessibility.education.gov.uk/wcag-overview
- Phutane, M., Seelam, A., and Vashistha, A. (2025). A Human-Centered Audit of Ableism in Western and Indic Language Models. AAAI Conference on Artificial Intelligence. arXiv preprint. https://arxiv.org/abs/2501.07890
- Introduction to Understanding WCAG 2.2. W3C Web Accessibility Initiative. https://www.w3.org/WAI/WCAG22/Understanding/intro
- Open Letter to NITI Aayog: Urgent Need for Disability-Inclusive AI Regulation. Random Reflexions (Nilesh Singit's blog), November 5, 2025. https://blog.nileshsingit.org/open-letter-to-niti-ayog/