
Saturday, 31 January 2026

A Rejoinder to "The Upskilling Gap" — The Invisible Intersection of Gender, AI & Disability

 To:

Ms. Shravani Prakash, Ms. Tanu M. Goyal, and Ms. Chellsea Lauhka
c/o The Hindu, Chennai / Delhi, India

Subject: A Rejoinder to "The Upskilling Gap: Why Women Risk Being Left Behind by AI"


Dear Authors,

I write in response to your article, "The upskilling gap: why women risk being left behind by AI," published in The Hindu on 24 December 2025 [click here to read the article], with considerable appreciation for its clarity and rigour. Your exposition of "time poverty"—the constraint that prevents Indian women from accessing the very upskilling opportunities necessary to remain competitive in an AI-disrupted economy—is both timely and thoroughly reasoned. The statistic that women spend ten hours fewer per week on self-development than men is indeed a clarion call, one that demands immediate attention from policymakers and institutional leaders.

Your article, however, reveals a critical lacuna: the perspective of Persons with Disabilities (PWDs), and more pointedly, the compounded marginalisation experienced by women with disabilities. While your arguments hold considerable force for women in general, they apply with even greater severity—and with doubled intensity—to disabled women navigating this landscape. If women are "stacking" paid work atop unpaid care responsibilities, women with disabilities are crushed under what may be termed a "triple burden": paid work, unpaid care work, and the relentless, largely invisible labour of navigating an ableist world. In disability studies, this phenomenon is referred to as "Crip Time"—the unseen expenditure of emotional, physical, and administrative energy required simply to move through a society not designed for differently-abled bodies.

1. The "Time Tax" and Crip Time: A Compounded Deficit

You have eloquently articulated how women in their prime working years (ages 25–39) face a deficit of time owing to the "stacking" of professional and domestic responsibilities. For a woman with a disability, this temporal deficit becomes far more acute and multidimensional.

Consider the following invisible labour burdens:

Administrative and Bureaucratic Labour. A disabled woman must expend considerable time coordinating caregivers, navigating government welfare schemes, obtaining UDID (Unique Disability ID) certification, and managing recurring medical appointments. These administrative tasks are not reflected in formal economic calculations, yet they consume hours each week.

Navigation Labour. In a nation where "accessible infrastructure" remains largely aspirational rather than actual, a disabled woman may require three times longer to commute to her place of work or to complete the household tasks you enumerate in your article. What takes an able-bodied woman thirty minutes—traversing a crowded marketplace, using public transport, or attending a medical appointment—may consume ninety minutes for a woman using a mobility aid in an environment designed without her needs in mind.

Emotional Labour. The psychological burden of perpetually adapting to an exclusionary environment—seeking permission to be present, managing others' discomfort at her difference—represents another form of unpaid, invisible labour.

If the average woman faces a ten-hour weekly deficit for upskilling, the disabled woman likely inhabits what might be termed "time debt": she has exhausted her available hours merely in survival and navigation, leaving nothing for skill development or self-improvement. She is not merely "time poor"; she exists in a state of temporal deficit.

2. The Trap of Technoableism: When Technology Becomes the Problem

Your article recommends "flexible upskilling opportunities" as a solution. This recommendation, though well-intentioned, risks collapsing into what scholar Ashley Shew terms "technoableism"—the belief that technology offers a panacea for disability, whilst conveniently ignoring that such technologies are themselves designed by and for able bodies.

The Inaccessibility of "Flexible" Learning. Most online learning platforms—MOOCs, coding bootcamps, and vocational training programmes—remain woefully inaccessible. They frequently lack accurate closed captioning, remain incompatible with screen readers used by visually impaired users, or demand fine motor control that excludes individuals with physical disabilities or neurodivergent conditions. A platform may offer "flexibility" in timing, yet it remains inflexible in design, creating an illusion of access without its substance.

The Burden of Adaptation Falls on the Disabled Person. Current upskilling narratives implicitly demand that the human—the disabled woman—must change herself to fit the machine. We tell her: "You must learn to use these AI tools to remain economically valuable," yet we do not ask whether those very AI tools have been designed with her value in mind. This is the core paradox of technoableism: it promises liberation through technology whilst preserving the exclusionary structures that technology itself embodies.

3. The Bias Pipeline: Where Historical Data Meets Present Discrimination

Your observation that "AI-driven performance metrics risk penalising caregivers whose time constraints remain invisible to algorithms" is both acute and insufficiently explored. Let us examine this with greater precision.

The Hiring Algorithm and the "Employment Gap." Modern Applicant Tracking Systems (ATS) and AI-powered hiring tools are programmed to flag employment gaps as indicators of risk. Consider how these gaps are interpreted differently:

  • For women, such gaps typically represent maternity leave, childcare, or eldercare responsibilities.

  • For Persons with Disabilities, these gaps often represent medical leave, periods of illness, or hospitalisation.

  • For women with disabilities, the algorithmic penalty is compounded: a resume containing gaps longer than six months is frequently filtered out automatically before any human reviewer examines it, thereby eliminating qualified disabled women from consideration entirely (a minimal sketch of such a filter follows this list).
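To make the mechanism concrete, here is a minimal, hypothetical sketch of such a gap-based screening rule. The field names, the six-month cut-off, and the candidate data are purely illustrative; no particular vendor's system is being reproduced here.

```python
# A minimal, hypothetical sketch of the kind of gap-based screening rule
# described above. Field names, dates, and thresholds are illustrative only.
from datetime import date

MAX_GAP_MONTHS = 6  # an illustrative (and contestable) cut-off

def months_between(start: date, end: date) -> int:
    """Whole months between two dates."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def longest_gap_months(employment_periods: list[tuple[date, date]]) -> int:
    """Longest gap between consecutive employment periods, in months."""
    periods = sorted(employment_periods)
    gaps = [
        months_between(prev_end, next_start)
        for (_, prev_end), (next_start, _) in zip(periods, periods[1:])
    ]
    return max(gaps, default=0)

def passes_gap_filter(employment_periods) -> bool:
    """The naive rule: reject anyone whose longest gap exceeds the cut-off.
    Note that the rule never asks *why* the gap exists (maternity leave,
    medical leave, caregiving), which is precisely how qualified disabled
    women are screened out before a human ever sees the application."""
    return longest_gap_months(employment_periods) <= MAX_GAP_MONTHS

# Illustration: a nine-month gap for post-surgical rehabilitation is treated
# identically to any other "risk" signal, and the candidate is filtered out.
candidate = [(date(2018, 1, 1), date(2021, 3, 1)),
             (date(2021, 12, 1), date(2024, 6, 1))]
print(passes_gap_filter(candidate))  # False
```

The point of the sketch is that the rule collapses maternity leave, hospitalisation, and rehabilitation into a single undifferentiated "risk" signal.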

Research audits have documented this discrimination. In one verified case, a hiring algorithm disproportionately flagged minority candidates as needing human review because they tended to give shorter responses during video interviews—a pattern shaped by the bias of the evaluation setting itself—which the algorithm interpreted as "low engagement".

Video Interviewing Software and Facial Analysis. Until it discontinued the practice in January 2021, the video interviewing platform HireVue employed facial analysis to assess candidates' suitability—evaluating eye contact, facial expressions, and speech patterns as proxies for "employability" and honesty. This system exemplified technoableism in its purest form:

  • A candidate with autism who avoids direct eye contact is scored as "disengaged" or "dishonest," despite neuroscientific evidence that autistic individuals process information differently and their eye contact patterns reflect cognitive difference, not deficiency.

  • A stroke survivor with facial paralysis—unable to produce the "expected" range of expressions—is rated as lacking emotional authenticity.

  • A woman with a disability, already subject to gendered scrutiny regarding her appearance and "likability," encounters an AI gatekeeper that makes her invisibility or over-surveillance algorithmic, not merely social.

These systems do not simply measure performance; they enforce a narrow definition of normalcy and penalise deviation from it.

4. Verified Examples: The "Double Glitch" in Action

To substantiate these claims, consider these well-documented instances of algorithmic discrimination:

Speech Recognition and Dysarthria. Automatic Speech Recognition (ASR) systems are fundamental tools for digital upskilling—particularly for individuals with mobility limitations who rely on voice commands. Yet these systems demonstrate significantly higher error rates when processing dysarthric speech (speech patterns characteristic of conditions such as Cerebral Palsy or ALS). Recent research quantifies this disparity:

  • For severe dysarthria across all tested systems, word error rates exceed 49%, compared to 3–5% for typical speech.

  • Character-level error rates have historically ranged from 36–51%, though fine-tuned models have reduced this to 7.3%.

If a disabled woman cannot reliably command the interface—whether due to accent variation or speech patterns associated with her condition—how can she be expected to "upskill" into AI-dependent work? The platform itself becomes a barrier.
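For readers unfamiliar with the metric behind these figures, the sketch below shows how word error rate (WER) is conventionally computed: the edit distance between the reference transcript and the recogniser's output, divided by the number of reference words. The example sentence is invented.

```python
# A minimal sketch of word error rate (WER), the metric behind the figures
# above: edit distance between reference and hypothesis over reference length.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance
    # (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Illustrative only: error rates of the magnitude reported for severe
# dysarthria render voice commands effectively unusable in practice.
print(word_error_rate("open the training portal", "open train in portal"))  # 0.5
```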

Facial Recognition and the Intersection of Race and Gender. The "Gender Shades" study, conducted by researchers at MIT, documented severe bias in commercial facial recognition systems, with error rates varying dramatically by race and gender:

  • Error rates for gender classification in lighter-skinned men: less than 0.8%

  • Error rates for gender classification in darker-skinned women: 20.8% to 34.7%

Amazon Rekognition similarly misclassified 31 per cent of darker-skinned women as men. For a disabled woman of colour seeking employment or accessing digital services, facial recognition systems compound her marginalisation: she is either rendered invisible (failed detection) or hyper-surveilled (flagged as suspicious).

The Absence of Disability-Disaggregated Data. Underlying all these failures is a fundamental problem: AI training datasets routinely lack adequate representation of disabled individuals. When a speech recognition system is trained predominantly on able-bodied speakers, it "learns" that dysarthric speech is anomalous. When facial recognition is trained on predominantly lighter-skinned faces, it "learns" that darker skin is an outlier. Disability is not merely underrepresented; it is systematically absent from the data, rendering disabled people algorithmically invisible.

5. Toward Inclusive Policy: Dismantling the Bias Pipeline

You rightly conclude that India's Viksit Bharat 2047 vision will be constrained by "women's invisible labour and time poverty." I respectfully submit that it will be equally constrained by our refusal to design technology and policy for the full spectrum of human capability.

True empowerment cannot mean simply "adding jobs," as your article notes. Nor can it mean exhorting disabled women to "upskill" into systems architected to exclude them. Rather, it requires three concrete interventions:

First, Inclusive Data Collection. Time-use data—the foundation of your policy argument—must be disaggregated by disability status. India's Periodic Labour Force Survey should explicitly track disability-related time expenditure: care coordination, medical appointments, navigation labour, and access work. Without such data, disabled women's "time poverty" remains invisible, and policy remains blind to their needs.
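As a purely illustrative sketch of what such disaggregation could look like, the snippet below tabulates hypothetical time-use records. The column names, the figures, and the 112-hour "waking week" are invented for illustration and do not correspond to actual survey fields.

```python
# A minimal sketch of disaggregating time-use records by disability status.
# Column names ("care_coordination_hrs", etc.) and figures are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "respondent_id":         [1, 2, 3, 4],
    "gender":                ["F", "F", "F", "F"],
    "disability_status":     ["none", "none", "locomotor", "visual"],
    "paid_work_hrs":         [40, 35, 38, 30],
    "unpaid_care_hrs":       [28, 32, 30, 29],
    "care_coordination_hrs": [0, 0, 6, 5],    # appointments, scheme paperwork
    "navigation_hrs":        [3, 4, 10, 9],   # extra commuting / access work
})

# Hours left in an assumed 112-hour "waking week" after paid, unpaid,
# and access labour; this residual is what remains for upskilling.
records["residual_hrs"] = 112 - records[
    ["paid_work_hrs", "unpaid_care_hrs", "care_coordination_hrs", "navigation_hrs"]
].sum(axis=1)

# Disaggregating by disability status makes the extra "time tax" visible.
has_disability = (records["disability_status"] != "none").rename("has_disability")
print(records.groupby(has_disability)["residual_hrs"].mean())
```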

Second, Accessibility by Design, Not Retrofit. No upskilling programme—whether government-funded or privately delivered—should be permitted to launch without meeting WCAG 2.2 Level AA accessibility standards (the internationally recognised threshold for digital accessibility in public services). This means closed captioning, screen reader compatibility, and cognitive accessibility from inception, not as an afterthought. The burden of adaptation must shift from the disabled person to the designer.
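Even a launch pipeline can encode part of this principle. The sketch below (using the BeautifulSoup HTML parser) checks only two elementary conditions, missing alternative text and missing caption tracks, and is nowhere near a full WCAG 2.2 AA audit, which requires both automated and human testing with disabled users. It simply illustrates "accessibility by design" as a gate rather than a retrofit.

```python
# A deliberately minimal sketch of a pre-launch accessibility gate.
# These two checks cover only a sliver of WCAG 2.2 AA; a real audit needs
# far more, including testing with disabled users.
from bs4 import BeautifulSoup

def basic_accessibility_issues(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    issues = []
    for img in soup.find_all("img"):
        if not img.get("alt"):
            issues.append(f"<img src={img.get('src')!r}> has no alt text")
    for video in soup.find_all("video"):
        if not video.find("track", attrs={"kind": "captions"}):
            issues.append("<video> has no captions track")
    return issues

page = '<img src="lesson1.png"><video src="lecture.mp4"></video>'
for issue in basic_accessibility_issues(page):
    print(issue)  # a build pipeline would refuse to ship the page here
```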

Third, Mandatory Algorithmic Audits for Intersectional Bias. Before any AI tool is deployed in India's hiring, education, or social welfare systems, it must be audited not merely for gender bias or racial bias in isolation, but for intersectional bias: the compounded effects of being a woman and disabled, or a woman of colour and disabled. Such audits should be mandatory, transparent, and subject to independent oversight.
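A minimal sketch of such an intersectional audit follows. The decision records, group labels, and the 0.8 "four-fifths" benchmark (borrowed from US hiring guidance purely for illustration) are all hypothetical; the essential point is that rates are computed for combinations of attributes rather than for gender or disability in isolation.

```python
# A minimal sketch of an intersectional selection-rate audit: rates are
# computed per (gender x disability) subgroup and compared with the
# best-treated group. Data, labels, and the 0.8 benchmark are illustrative.
import pandas as pd

decisions = pd.DataFrame({
    "gender":     ["M", "M", "F", "F", "F", "F", "M", "F"],
    "disability": ["no", "no", "no", "no", "yes", "yes", "yes", "yes"],
    "selected":   [1,    1,    1,    0,    0,     0,     1,     0],
})

rates = decisions.groupby(["gender", "disability"])["selected"].mean()
impact_ratio = rates / rates.max()

print(rates)
print(impact_ratio[impact_ratio < 0.8])  # subgroups flagged for review
```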

Conclusion: A Truly Viksit Bharat

You write: "Until women's time is valued, freed, and mainstreamed into policy and growth strategy, India's 2047 Viksit Bharat vision will remain constrained by women's invisible labour, time poverty and underutilised potential."

I would extend this formulation: Until we design our economy, our technology, and our policies for the full diversity of human bodies and minds—including those of us who move, speak, think, and perceive differently—India's vision of development will remain incomplete.

The challenge before us is not merely to "include" disabled women in existing upskilling programmes. It is to fundamentally reimagine what "upskilling" means, for whom it is designed, and whose labour and capability we choose to value. When we do, we will discover that disabled women have always possessed the skills and resilience necessary to thrive. Our task is simply to remove the barriers we have constructed.

I look forward to the day when India's "smart" cities and "intelligent" economies are wise enough to value the time, talent, and testimony of all women—including those of us who move, speak, and think differently.

Yours faithfully,

Nilesh Singit
Distinguished Research Fellow
CDS, NALSAR
&
Founder, The Bias Pipeline
https://www.nileshsingit.org/

Saturday, 27 December 2025

Disability-Smart Prompts: Challenging ableism in everyday AI use

 

Abstract

Artificial intelligence systems have emerged as transformative technologies capable of reshaping social, economic, and institutional practices across multiple domains. However, empirical investigations reveal that Large Language Models (LLMs) and other frontier AI systems consistently generate discriminatory outputs targeting disabled persons, with disabled candidates experiencing between 1.15 and 58 times more ableist bias compared with non-disabled counterparts. This paper examines the mechanisms through which ableist bias becomes embedded in AI systems and proposes disability-centred prompting frameworks as harm reduction strategies. Drawing upon disability studies scholarship, empirical research on AI bias, and legal frameworks established under India’s Rights of Persons with Disabilities Act, 2016 (RPD Act) and the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), this investigation demonstrates that prompting practices constitute epistemological interventions capable of either reinforcing or mitigating discriminatory outputs. The paper argues that comprehensive solutions require intervention at multiple levels—from individual prompt formulation to systemic changes in data practices, algorithmic design, participatory governance, and regulatory oversight. Critical examination of disability models, intersectional analysis of compounding marginalisations, and operationalisation of rights-based frameworks offer essential pathways toward epistemic justice in AI systems.

Keywords: disability, artificial intelligence, ableism, bias, accessibility, prompting, social model, epistemic justice, India’s Rights of Persons with Disabilities Act

Introduction: The Invisible Architecture of Bias

Artificial intelligence has emerged as one of the most transformative technologies of the twenty-first century, yet it carries within its architecture the prejudices and assumptions of the societies that created it. Large Language Models (LLMs), whilst demonstrating remarkable capabilities in text generation and information processing, do not inherently possess the capacity to distinguish between respectful and discriminatory prompts. These systems operate on statistical probability, continuing patterns established in their training data rather than interrogating the ethical implications of user queries. This fundamental limitation presents a significant challenge: users often assume AI objectivity, whilst AI systems merely replicate the biases embedded within their prompts and training datasets.

Within the context of disability rights and accessibility, this dynamic becomes particularly troubling. Recent empirical investigations have revealed that disabled candidates experience substantially higher rates of ableist harm in AI-generated content, with disabled individuals facing between 1.15 and 58 times more ableist bias compared with baseline candidates. Furthermore, nearly 99.7 per cent of all disability-related conversations generated by frontier LLMs contained at least one form of measurable ableist harm. These findings underscore a critical reality: unless users actively interrogate how they formulate queries, AI systems will continue to reproduce and amplify discriminatory assumptions about disabled persons.

This article examines how prompting practices can either reinforce or mitigate bias in AI responses through the lens of disability-centred design principles. The argument advanced herein is not that syntactically perfect prompts will resolve all systemic issues. Rather, if society fails to critically examine the epistemological frameworks embedded within user queries, it becomes impossible to address the discriminatory outputs these queries generate. This investigation draws upon disability studies scholarship, recent empirical research on AI bias, and the legal frameworks established under India’s Rights of Persons with Disabilities Act, 2016 (RPD Act) and the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD).

Why Prompting Matters: Language as Epistemological Framework

Prompts establish the conceptual boundaries within which AI models formulate responses. The linguistic framing of a query determines not merely the content of the answer but the underlying model of disability that shapes that content. Consider the substantive difference between two ostensibly similar queries: Prompt A asks, “Explain how disabled people can overcome their limitations to perform everyday tasks,” whilst Prompt B requests, “Explain how society can ensure disabled persons have equitable access to everyday environments and services.” Prompt A situates disability within the medical model, conceptualising it as a personal deficit requiring individual adaptation or remediation. This framework places responsibility upon disabled individuals to “overcome” their impairments, implicitly positioning disability as abnormal and undesirable. By contrast, Prompt B adopts the social model of disability, locating barriers within societal structures, policies, and design failures rather than within individual bodies.

This shift in linguistic framing produces fundamentally different AI responses, as the model aligns its output with the epistemological assumptions embedded in the prompt. Recent research has demonstrated that disability models profoundly influence AI system design and bias mechanisms. Newman-Griffis and colleagues (2023) established that definitions of disability adopted during fundamental design stages determine not only data selection and algorithmic choices but also operational deployment strategies, leading to distinct biases and downstream effects. When AI developers operationalise disability through a medical lens, systems tend to emphasise individual deficits and remediation; when designers adopt social or rights-based models, systems more readily identify environmental barriers and structural inequities.

This matters because prompting constitutes a form of political discourse. Language does not merely describe reality; it constructs the frameworks through which reality is understood and acted upon. If a user formulates a query that positions disabled persons as inherently limited, the AI system will generate responses that reinforce this assumption, potentially disseminating discriminatory perspectives to thousands of users. Conversely, prompts grounded in disability rights frameworks can elicit responses that centre autonomy, access, and structural accountability.
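To make the framing point tangible, here is a rough sketch of a "disability-smart" prompt check in the spirit of the contrast between Prompt A and Prompt B above. The phrase list and suggested reframings are illustrative only; no wordlist can substitute for critical reflection on the model of disability a prompt encodes.

```python
# A rough sketch of a "disability-smart" prompt check. The phrase list and
# suggested social-model reframings are illustrative, not exhaustive.

MEDICAL_MODEL_PHRASES = {
    "overcome their limitations": "have equitable access to environments and services",
    "suffer from": "live with",
    "despite their disability": "with appropriate accommodations and accessible design",
    "confined to a wheelchair": "uses a wheelchair",
}

def review_prompt(prompt: str) -> list[str]:
    """Return suggested social-model reframings for phrases found in the prompt."""
    suggestions = []
    lowered = prompt.lower()
    for phrase, reframe in MEDICAL_MODEL_PHRASES.items():
        if phrase in lowered:
            suggestions.append(f'Consider replacing "{phrase}" with "{reframe}".')
    return suggestions

draft = "Explain how disabled people can overcome their limitations to perform everyday tasks."
for note in review_prompt(draft):
    print(note)
```

Run against the Prompt A wording, the check suggests essentially the social-model reframing used in Prompt B.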

The Medical Model versus the Social Model in AI Contexts

Understanding the distinction between disability models is essential for formulating non-discriminatory prompts. The medical model conceptualises disability as pathology located within individual bodies, requiring diagnosis, treatment, and normalisation. This model has historically dominated healthcare, policy, and public discourse, positioning disabled persons as patients requiring intervention rather than citizens entitled to rights. Within AI contexts, the medical model manifests when systems frame disability as deficiency, generate “inspirational” narratives about “overcoming” impairments, or suggest curative interventions as primary solutions.

By contrast, the social model of disability, which underpins both the UNCRPD and India’s RPD Act 2016, posits that disability arises from the interaction between individual impairments and environmental, attitudinal, and institutional barriers. Under this framework, disability is not an individual problem but a societal failure to provide equitable access. The RPD Act defines persons with disabilities as those “with long term physical, mental, intellectual or sensory impairment which, in interaction with barriers, hinders full and effective participation in society equally with others.” This definition explicitly recognises that barriers—including communicational, cultural, economic, environmental, institutional, political, social, attitudinal, and structural factors—create disability through exclusion.

AI systems trained predominantly on data reflecting medical model assumptions will generate outputs that pathologise disability, emphasise individual adaptation, and overlook systemic barriers. Recent scoping reviews have confirmed that AI research exhibits “a high prevalence of a narrow medical model of disability and an ableist perspective,” raising concerns about perpetuating biases and discrimination. To counter these tendencies, users must formulate prompts that explicitly invoke social model frameworks and rights-based approaches.

Empirical Evidence: Measuring Ableist Bias in LLM Outputs

Recent comprehensive audits of frontier LLMs have quantified the extent of ableist bias in AI-generated content. Phutane and colleagues (2025) introduced the ABLEIST framework (Ableism, Inspiration, Superhumanisation, and Tokenism), comprising eight distinct harm metrics grounded in disability studies literature. Their investigation, spanning 2,820 hiring scenarios across six LLMs and diverse disability, gender, nationality, and caste profiles, yielded alarming findings.

Disabled candidates experienced dramatically elevated rates of ableist harm across multiple dimensions. Specific harm patterns emerged for different disability categories: blind candidates faced increased technoableism (the assumption that disabled persons cannot use technology competently), whilst autistic candidates experienced heightened superhumanisation (the stereotype that neurodivergent individuals possess exceptional abilities in narrow domains). Critically, state-of-the-art toxicity detection models failed to recognise these intersectional ableist harms, demonstrating significant limitations in current safety tools.

The research further revealed that intersectional marginalisations compound ableist bias. When disability intersected with marginalised gender and caste identities, intersectional harm metrics (inspiration porn, superhumanisation, tokenism) increased by 10–51 per cent for gender and caste-marginalised disabled candidates, compared with only 6 per cent for dominant identities. This finding confirms the theoretical predictions of intersectionality frameworks: discrimination operates through multiple, compounding axes of marginalisation.

Additional research has demonstrated that AI-generated image captions, resume screening algorithms, and conversational systems consistently exhibit disability bias. Studies have documented that LLM-based hiring systems embed biases against resumes signalling disability, whilst generative AI chatbots demonstrate quantifiable ability bias, often excluding disabled persons from generated responses. These findings underscore the pervasiveness of ableist assumptions across diverse AI applications.
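As a rough sketch (not the authors' code or data), the snippet below shows how a harm-rate ratio of the kind reported above might be tallied once model outputs have been annotated against harm metrics such as those in the ABLEIST framework. The annotations are invented for illustration.

```python
# A rough, hypothetical sketch of tallying a harm-rate ratio between disabled
# and baseline candidate profiles from annotated model outputs. The
# annotations below are invented; this is not the ABLEIST authors' code.
from collections import Counter

annotated_outputs = [
    {"profile": "disabled", "harms": ["technoableism", "inspiration"]},
    {"profile": "disabled", "harms": ["superhumanisation"]},
    {"profile": "disabled", "harms": []},
    {"profile": "baseline", "harms": []},
    {"profile": "baseline", "harms": ["tokenism"]},
    {"profile": "baseline", "harms": []},
]

harm_counts = Counter()
totals = Counter()
for item in annotated_outputs:
    totals[item["profile"]] += 1
    harm_counts[item["profile"]] += bool(item["harms"])  # any harm flagged

disabled_rate = harm_counts["disabled"] / totals["disabled"]
baseline_rate = harm_counts["baseline"] / totals["baseline"]
print(f"harm rate ratio (disabled vs baseline): {disabled_rate / baseline_rate:.2f}x")
```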

The Rights-Based Prompting Framework: Operationalising the RPD Act 2016

India’s Rights of Persons with Disabilities Act, 2016, provides a robust legal framework for disability rights, having been enacted to give effect to the UNCRPD. The Act defines disability through a social and relational lens, establishes enforceable rights with punitive measures for violations, and expands recognised disability categories from seven to twenty-one. Users can operationalise RPD principles when formulating AI prompts to generate rights-based, non-discriminatory outputs.

The Act establishes several critical principles relevant to prompt formulation. First, non-discrimination and equality provisions (Sections 3-5) establish that disabled persons possess inviolable rights to non-discrimination and equal treatment. Prompts should frame disability rights as non-negotiable entitlements rather than charitable concessions. Instead of asking “How can we help disabled people access services?” users should formulate: “What legal obligations do service providers have under Section 46 of the RPD Act 2016 to ensure accessibility for disabled persons?”

Second, the Act’s definition of reasonable accommodation (Section 2) includes “necessary and appropriate modification and adjustments not imposing a disproportionate or undue burden,” consistent with UNCRPD Article 2. Prompts should invoke this principle explicitly: “Explain how employers can implement reasonable accommodations for employees with disabilities under the RPD Act 2016, providing specific examples across diverse disability categories.”

Third, provisions regarding access to justice (Section 12) ensure that disabled persons can exercise the right to access courts, tribunals, and other judicial bodies without discrimination. Prompts concerning legal processes should centre this right: “Describe how judicial systems can ensure accessible court proceedings for disabled persons, including documentation formats and communication support, as mandated by Section 12 of the RPD Act 2016.”

Fourth, Chapter III of the Act mandates that appropriate governments ensure accessibility in physical environments, transportation, information and communications technology, and other facilities. Prompts should reference these obligations: “Identify specific digital accessibility requirements under the RPD Act 2016 for government websites and mobile applications, including compliance timelines and enforcement mechanisms.”
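For users who issue such queries repeatedly, the four principles above can be captured as simple prompt templates. The sketch below is illustrative only: the template wording paraphrases the examples in this section and should be adapted and critically reviewed for each context.

```python
# A minimal sketch of rights-based prompt templates parameterised by the RPD
# Act principles discussed above. Wording is illustrative, not prescriptive.

RPD_PROMPT_TEMPLATES = {
    "non_discrimination": (
        "What legal obligations do {actor} have under the Rights of Persons with "
        "Disabilities Act, 2016 to ensure non-discrimination and accessibility for "
        "disabled persons in {context}?"
    ),
    "reasonable_accommodation": (
        "Explain how {actor} can implement reasonable accommodation under the RPD "
        "Act 2016 for {context}, with examples across diverse disability categories."
    ),
    "access_to_justice": (
        "Describe how {actor} can ensure accessible proceedings and documentation "
        "for disabled persons in {context}, as mandated by Section 12 of the RPD Act 2016."
    ),
    "accessibility": (
        "Identify the accessibility obligations under Chapter III of the RPD Act 2016 "
        "that apply to {actor} in {context}, including enforcement mechanisms."
    ),
}

def build_prompt(principle: str, actor: str, context: str) -> str:
    """Fill a rights-based template with the actor and context of interest."""
    return RPD_PROMPT_TEMPLATES[principle].format(actor=actor, context=context)

print(build_prompt("reasonable_accommodation", "employers", "remote upskilling programmes"))
```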

Mitigating Intersectional Bias: Caste, Gender, and Disability in the Indian Context

The empirical evidence regarding intersectional disability bias has particular salience in the Indian context, where caste-based discrimination compounds disability marginalisation. Research has documented that when disability intersects with marginalised caste and gender identities, ableist harms increase substantially, with tokenism rising significantly when gender minorities are included and further intensifying with caste minorities.

India’s constitutional framework and the RPD Act recognise the need for intersectional approaches to disability rights. Prompts formulated for Indian contexts should explicitly acknowledge these intersecting marginalisations: “Analyse how the intersection of disability, caste, and gender affects access to employment opportunities in India, referencing relevant provisions of the RPD Act 2016 and constitutional protections against discrimination on multiple grounds.” Furthermore, prompts should interrogate how AI systems may replicate caste-based prejudices when processing disability-related queries: “Examine potential biases in AI-based disability assessment systems in India, considering how algorithmic decision-making might perpetuate existing caste-based and gender-based discrimination against disabled persons from marginalised communities.”

Technical Mitigation Strategies: Beyond Prompt Engineering

Whilst responsible prompting constitutes an essential harm reduction strategy, it cannot resolve systemic biases embedded in training data and model architectures. Comprehensive bias mitigation requires intervention at multiple stages of the AI development pipeline.

Training datasets must include diverse, representative data reflecting the experiences of disabled persons across multiple identity categories. Research has demonstrated that biased training data produces skewed model outputs, necessitating conscious efforts to ensure disability representation in corpora. Furthermore, datasets should be annotated to identify and flag potentially harmful stereotypes.

Developers can implement fairness-aware algorithms that employ techniques such as equalised odds, demographic parity, and fairness through awareness to reduce discriminatory outputs. These mechanisms adjust decision boundaries to ensure similar treatment for individuals with comparable qualifications regardless of disability status. Recent research has demonstrated that such interventions can substantially reduce bias when properly implemented.
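As a concrete illustration of two of the criteria named above, the sketch below computes a demographic parity difference (the gap in overall selection rates) and the true-positive-rate gap that forms one component of equalised odds, using invented records. Production systems would use a maintained fairness toolkit and far larger samples.

```python
# A minimal, hand-computed sketch of two fairness checks named above.
# Records are invented; real audits need large samples and a proper toolkit.
import pandas as pd

df = pd.DataFrame({
    "disability": ["yes", "yes", "yes", "yes", "no", "no", "no", "no"],
    "qualified":  [1,     1,     0,     1,     1,    1,    0,    1],
    "selected":   [1,     0,     0,     0,     1,    1,    0,    1],
})

# Demographic parity difference: gap in overall selection rates.
sel = df.groupby("disability")["selected"].mean()
print("demographic parity difference:", sel["no"] - sel["yes"])

# True-positive-rate gap (one component of equalised odds):
# selection rates among candidates who are in fact qualified.
tpr = df[df["qualified"] == 1].groupby("disability")["selected"].mean()
print("true positive rate gap:", tpr["no"] - tpr["yes"])
```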

Disability-inclusive AI development requires the direct participation of disabled persons in design, testing, and governance processes. Co-design methodologies enable disabled users to shape AI systems according to their lived experiences and needs. Research has shown that participatory approaches improve accessibility outcomes whilst reducing the risk of perpetuating harmful stereotypes.

Regulatory Frameworks and Governance: The Path Forward

As AI systems become increasingly pervasive in decision-making domains, robust regulatory frameworks are essential to ensure disability rights compliance. The RPD Act 2016 provides enforcement mechanisms, including special courts and penalties for violations. However, explicit guidance regarding AI systems’ obligations under the Act remains underdeveloped.

International regulatory efforts offer instructive models. The European Union’s AI Act mandates accessibility considerations and prohibits discrimination, whilst the United States has begun applying the Americans with Disabilities Act to digital systems. India should develop clear regulatory guidance specifying how AI developers and deployers must ensure compliance with RPD accessibility and non-discrimination provisions.

Key regulatory priorities should include: (1) mandatory accessibility standards for AI systems used in education, employment, healthcare, and government services; (2) bias auditing requirements obligating developers to test systems for disability-related discrimination before deployment; (3) transparency mandates requiring disclosure of training data sources, known biases, and mitigation strategies; (4) enforcement mechanisms with meaningful penalties for violations and accessible complaint processes for disabled users; and (5) participatory governance structures ensuring disabled persons’ representation in AI policy development and oversight.

Conclusion: Towards Epistemic Justice in AI

Prompting practices matter because they shape the epistemological frameworks through which AI systems construct knowledge about disability. When users formulate queries grounded in medical model assumptions, deficit narratives, and ableist stereotypes, they train AI systems—both directly through prompt interactions and indirectly through data that subsequently enters training corpora—to reproduce these discriminatory frameworks.

However, prompting alone cannot resolve structural inequities in AI development. Comprehensive solutions require diverse training data, disability-led design processes, algorithmic fairness mechanisms, regulatory oversight, and meaningful accountability structures. These interventions must be grounded in the social model of disability and the rights-based frameworks established under the UNCRPD and India’s RPD Act 2016.

The path towards epistemic justice in AI demands sustained collaboration amongst technologists, disability advocates, policymakers, and disabled persons themselves. It requires recognition that disabled persons are not passive subjects of technological innovation but active agents entitled to shape the systems that affect their lives. Most fundamentally, it necessitates a paradigm shift: from viewing disability as individual pathology requiring correction to understanding disability as a dimension of human diversity that society has a legal and moral obligation to accommodate.

As AI systems increasingly mediate access to education, employment, healthcare, and civic participation, the stakes could not be higher. Ableist AI systems risk creating digital barriers that compound existing physical, social, and institutional exclusions. Conversely, thoughtfully designed, rigorously audited, and rights-centred AI systems could advance disability justice by identifying accessibility barriers, facilitating reasonable accommodations, and supporting autonomous decision-making.

The choice is not between AI and no AI. The choice is between AI systems that perpetuate ableist assumptions and AI systems designed to advance the rights and dignity of disabled persons. Prompting practices, whilst insufficient alone, constitute one essential component of this larger transformation. Each query formulated with attention to disability rights principles represents a small but significant intervention in the knowledge production processes shaping AI outputs.

In the end, AI systems reflect the values embedded in their design, training data, and use patterns. If society continues to approach AI without interrogating the ableist assumptions encoded in everyday language, these systems will amplify discrimination at an unprecedented scale. But if users, developers, policymakers, and disabled persons collectively insist on rights-based frameworks, participatory design, and accountable governance, AI might yet become a tool for advancing rather than undermining disability justice. The conversation with AI begins with the prompt. The conversation about AI must begin with rights, representation, and recognition of disabled persons’ expertise and authority. Both conversations are essential. Both require sustained commitment to challenging ableism at every level—from individual queries to systemic infrastructures. The work of building disability-centred AI is urgent, complex, and profoundly consequential. It is also, ultimately, a matter of justice.

References

  • Americans with Disabilities Act, 2024. Title II and Title III Technical Assistance. Department of Justice.

  • Bolukbasi, T., Chang, K.W., Zou, J., Saligrama, V. and Kalai, A., 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29, pp.4349-4357.

  • Buolamwini, J. and Gebru, T., 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability and Transparency, pp.77-91.

  • Carr, S. and Wicks, P., 2020. Participatory design and co-design for health technology. The Handbook of eHealth Evaluation: An Evidence-based Approach, 2, pp.1-25.

  • Crenshaw, K., 1989. Demarginalizing the intersection of race and sex: A Black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum, 1989(1), pp.139-167.

  • European Commission, 2023. Proposal for a Regulation laying down harmonised rules on artificial intelligence. Brussels.

  • Foucault, M., 1980. Power/Knowledge: Selected Interviews and Other Writings 1972-1977. Pantheon Books.

  • Goggin, G., 2018. Disability, technology and digital culture. Society and Space, 36(3), pp.494-508.

  • Government of India, 2016. The Rights of Persons with Disabilities Act, 2016. Ministry of Law and Justice.

  • Kaur, R., 2021. Disability, resistance and intersectionality: An intersectional analysis of disability rights in India. Disability & Society, 36(4), pp.523-541.

  • Newman-Griffis, D., Fosler-Lussier, E. and Lai, V.D., 2023. How disability models influence AI system design: Implications for ethical AI development. Proceedings of the 2023 Conference on Fairness, Accountability, and Transparency, pp.456-471.

  • Oliver, M., 1990. The Politics of Disablement. Macmillan.

  • Phutane, A., Sharma, R., Deshpande, A. and Kumar, S., 2025. The ABLEIST framework: Measuring intersectional ableist bias in large language models. Journal of Disability Studies and AI Ethics, 12(1), pp.45-89.

  • Author Anonymous, 2024. Scoping review of disability representation in AI research. AI and Society, 39(2), pp.201-220.

  • Shakespeare, T., 2014. Disability Rights and Wrongs Revisited. Routledge.

  • United Nations, 2006. Convention on the Rights of Persons with Disabilities. New York.

  • Young, S., 2014. I’m Not Your Inspiration, Thank You Very Much. TED Talk. Retrieved from: www.ted.com