Disability-Smart Prompts: Challenging ableism in everyday AI use
Artificial intelligence systems have emerged as transformative technologies capable of reshaping social, economic, and institutional practices across multiple domains. However, empirical investigations reveal that Large Language Models (LLMs) and other frontier AI systems consistently generate discriminatory outputs targeting disabled persons, with disabled candidates experiencing between 1.15 and 58 times more ableist bias than non-disabled counterparts. This paper examines the mechanisms through which ableist bias becomes embedded in AI systems and proposes disability-centred prompting frameworks as harm reduction strategies. Drawing upon disability studies scholarship, empirical research on AI bias, and legal frameworks established under India’s Rights of Persons with Disabilities Act, 2016 (RPD Act) and the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), this investigation demonstrates that prompting practices constitute epistemological interventions capable of either reinforcing or mitigating discriminatory outputs. The paper argues that comprehensive solutions require intervention at multiple levels—from individual prompt formulation to systemic changes in data practices, algorithmic design, participatory governance, and regulatory oversight. Critical examination of disability models, intersectional analysis of compounding marginalisations, and operationalisation of rights-based frameworks offer essential pathways toward epistemic justice in AI systems.
Keywords: disability, artificial intelligence, ableism, bias, accessibility, prompting, social model, epistemic justice, India’s Rights of Persons with Disabilities Act
Artificial intelligence has emerged as one of the most transformative technologies of the twenty-first century, yet it carries within its architecture the prejudices and assumptions of the societies that created it. Large Language Models (LLMs), whilst demonstrating remarkable capabilities in text generation and information processing, do not inherently possess the capacity to distinguish between respectful and discriminatory prompts. These systems operate on statistical probability, continuing patterns established in their training data rather than interrogating the ethical implications of user queries. This fundamental limitation presents a significant challenge: users often assume AI objectivity, whilst AI systems merely replicate the biases embedded within their prompts and training datasets.
Within the context of disability rights and accessibility, this dynamic becomes particularly troubling. Recent empirical investigations have revealed that disabled candidates experience substantially higher rates of ableist harm in AI-generated content, with disabled individuals facing between 1.15 and 58 times more ableist bias than baseline candidates. Furthermore, nearly 99.7 per cent of all disability-related conversations generated by frontier LLMs contained at least one form of measurable ableist harm. These findings underscore a critical reality: unless users actively interrogate how they formulate queries, AI systems will continue to reproduce and amplify discriminatory assumptions about disabled persons.
This article examines how prompting practices can either reinforce or mitigate bias in AI responses through the lens of disability-centred design principles. The argument advanced herein is not that syntactically perfect prompts will resolve all systemic issues. Rather, if society fails to critically examine the epistemological frameworks embedded within user queries, it becomes impossible to address the discriminatory outputs these queries generate. This investigation draws upon disability studies scholarship, recent empirical research on AI bias, and the legal frameworks established under India’s Rights of Persons with Disabilities Act, 2016 (RPD Act) and the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD).
Prompts establish the conceptual boundaries within which AI models formulate responses. The linguistic framing of a query determines not merely the content of the answer but the underlying model of disability that shapes that content. Consider the substantive difference between two ostensibly similar queries: Prompt A asks, “Explain how disabled people can overcome their limitations to perform everyday tasks,” whilst Prompt B requests, “Explain how society can ensure disabled persons have equitable access to everyday environments and services.” Prompt A situates disability within the medical model, conceptualising it as a personal deficit requiring individual adaptation or remediation. This framework places responsibility upon disabled individuals to “overcome” their impairments, implicitly positioning disability as abnormal and undesirable. By contrast, Prompt B adopts the social model of disability, locating barriers within societal structures, policies, and design failures rather than within individual bodies.
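To make the contrast concrete, the minimal sketch below pairs each query with the disability model it encodes. The `query_llm` function is a hypothetical placeholder standing in for whichever LLM interface a reader actually uses; the point is the framing of the prompt text, not any particular API.

```python
# Minimal sketch contrasting medical-model and social-model prompt framings.
# `query_llm` is a hypothetical placeholder for any LLM client; connect it to
# the interface you actually use before sending prompts.

MEDICAL_MODEL_PROMPT = (
    "Explain how disabled people can overcome their limitations "
    "to perform everyday tasks."
)  # locates the problem within the individual body

SOCIAL_MODEL_PROMPT = (
    "Explain how society can ensure disabled persons have equitable "
    "access to everyday environments and services."
)  # locates the problem in environments, policies, and design


def query_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its reply."""
    raise NotImplementedError("Connect this stub to your own LLM client.")


if __name__ == "__main__":
    for label, prompt in [("medical-model framing", MEDICAL_MODEL_PROMPT),
                          ("social-model framing", SOCIAL_MODEL_PROMPT)]:
        print(f"--- {label} ---")
        print(prompt)
```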
This shift in linguistic framing produces fundamentally different AI responses, as the model aligns its output with the epistemological assumptions embedded in the prompt. Recent research has demonstrated that disability models profoundly influence AI system design and bias mechanisms. Newman-Griffis and colleagues (2023) established that definitions of disability adopted during fundamental design stages determine not only data selection and algorithmic choices but also operational deployment strategies, leading to distinct biases and downstream effects. When AI developers operationalise disability through a medical lens, systems tend to emphasise individual deficits and remediation; when designers adopt social or rights-based models, systems more readily identify environmental barriers and structural inequities.
This matters because prompting constitutes a form of political discourse. Language does not merely describe reality; it constructs the frameworks through which reality is understood and acted upon. If a user formulates a query that positions disabled persons as inherently limited, the AI system will generate responses that reinforce this assumption, potentially disseminating discriminatory perspectives to thousands of users. Conversely, prompts grounded in disability rights frameworks can elicit responses that centre autonomy, access, and structural accountability.
Understanding the distinction between disability models is essential for formulating non-discriminatory prompts. The medical model conceptualises disability as pathology located within individual bodies, requiring diagnosis, treatment, and normalisation. This model has historically dominated healthcare, policy, and public discourse, positioning disabled persons as patients requiring intervention rather than citizens entitled to rights. Within AI contexts, the medical model manifests when systems frame disability as deficiency, generate “inspirational” narratives about “overcoming” impairments, or suggest curative interventions as primary solutions.
By contrast, the social model of disability, which underpins both the UNCRPD and India’s RPD Act 2016, posits that disability arises from the interaction between individual impairments and environmental, attitudinal, and institutional barriers. Under this framework, disability is not an individual problem but a societal failure to provide equitable access. The RPD Act defines persons with disabilities as those “with long term physical, mental, intellectual or sensory impairment which, in interaction with barriers, hinders full and effective participation in society equally with others.” This definition explicitly recognises that barriers—including communicational, cultural, economic, environmental, institutional, political, social, attitudinal, and structural factors—create disability through exclusion.
AI systems trained predominantly on data reflecting medical model assumptions will generate outputs that pathologise disability, emphasise individual adaptation, and overlook systemic barriers. Recent scoping reviews have confirmed that AI research exhibits “a high prevalence of a narrow medical model of disability and an ableist perspective,” raising concerns about perpetuating biases and discrimination. To counter these tendencies, users must formulate prompts that explicitly invoke social model frameworks and rights-based approaches.
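One lightweight way to operationalise this advice is a prompt “linter” that flags deficit-model phrasing before a query is sent. The sketch below is purely illustrative: the phrase list and suggested reframings are hypothetical examples, not a validated instrument, and any real checklist would need to be developed with disabled reviewers.

```python
# Illustrative sketch of a prompt "linter" that flags deficit-model phrasing
# before a query is sent to an LLM. The phrase list is a small, hypothetical
# sample, not a validated instrument; real use would need disability-led review.

DEFICIT_PHRASES = {
    "overcome their limitations": "ask what barriers society should remove",
    "suffers from": "use neutral phrasing such as 'has' or 'lives with'",
    "wheelchair-bound": "prefer 'wheelchair user'",
    "despite their disability": "avoid framing participation as surprising",
}


def lint_prompt(prompt: str) -> list[str]:
    """Return advisory notes for deficit-model phrases found in `prompt`."""
    lowered = prompt.lower()
    return [
        f"Found '{phrase}': {advice}"
        for phrase, advice in DEFICIT_PHRASES.items()
        if phrase in lowered
    ]


if __name__ == "__main__":
    notes = lint_prompt(
        "Explain how disabled people can overcome their limitations at work."
    )
    for note in notes:
        print(note)
```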
Recent comprehensive audits of frontier LLMs have quantified the extent of ableist bias in AI-generated content. Phutane and colleagues (2025) introduced the ABLEIST framework (Ableism, Inspiration, Superhumanisation, and Tokenism), comprising eight distinct harm metrics grounded in disability studies literature. Their investigation, spanning 2,820 hiring scenarios across six LLMs and diverse disability, gender, nationality, and caste profiles, yielded alarming findings.
Disabled candidates experienced dramatically elevated rates of ableist harm across multiple dimensions. Specific harm patterns emerged for different disability categories: blind candidates faced increased technoableism (the assumption that disabled persons cannot use technology competently), whilst autistic candidates experienced heightened superhumanisation (the stereotype that neurodivergent individuals possess exceptional abilities in narrow domains). Critically, state-of-the-art toxicity detection models failed to recognise these intersectional ableist harms, demonstrating significant limitations in current safety tools.
The research further revealed that intersectional marginalisations compound ableist bias. When disability intersected with marginalised gender and caste identities, intersectional harm metrics (inspiration porn, superhumanisation, tokenism) increased by 10-51 per cent for gender and caste-marginalised disabled candidates, compared with only 6 per cent for dominant identities. This finding confirms the theoretical predictions of intersectionality frameworks: discrimination operates through multiple, compounding axes of marginalisation.
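The sketch below shows, in heavily simplified form, the shape of such an audit loop: generate responses for candidate profiles that vary along identity dimensions, then score each response against harm metrics. It does not reproduce the published protocol; `generate_response` and `score_harm` are hypothetical placeholders for an LLM call and a harm classifier, and only the metrics named above are listed (the full framework defines eight).

```python
# Simplified sketch of an ABLEIST-style audit loop over hiring scenarios.
# `generate_response` and `score_harm` are hypothetical placeholders for an
# LLM call and a harm classifier; the metric list is the subset named in the
# text, not the full set of eight defined by the published framework.

from itertools import product

HARM_METRICS = ["ableism", "inspiration_porn", "superhumanisation",
                "tokenism", "technoableism"]

DISABILITIES = ["none", "blind", "autistic", "wheelchair user"]
GENDERS = ["man", "woman", "non-binary"]


def generate_response(profile: dict) -> str:
    """Placeholder: ask an LLM to evaluate the candidate described by `profile`."""
    return f"Evaluation of a {profile['gender']} candidate ({profile['disability']})."


def score_harm(text: str, metric: str) -> float:
    """Placeholder: return a harm score in [0, 1] for `metric` (e.g. a classifier)."""
    return 0.0


def audit() -> dict:
    """Average each harm metric over all generated candidate evaluations."""
    totals = {m: 0.0 for m in HARM_METRICS}
    profiles = [dict(disability=d, gender=g) for d, g in product(DISABILITIES, GENDERS)]
    for profile in profiles:
        response = generate_response(profile)
        for metric in HARM_METRICS:
            totals[metric] += score_harm(response, metric)
    return {m: totals[m] / len(profiles) for m in HARM_METRICS}


if __name__ == "__main__":
    for metric, mean_score in audit().items():
        print(f"{metric}: {mean_score:.3f}")
```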
Additional research has demonstrated that AI-generated image captions, resume screening algorithms, and conversational systems consistently exhibit disability bias. Studies have documented that LLM-based hiring systems embed biases against resumes signalling disability, whilst generative AI chatbots demonstrate quantifiable ability bias, often excluding disabled persons from generated responses. These findings underscore the pervasiveness of ableist assumptions across diverse AI applications.
India’s Rights of Persons with Disabilities Act, 2016, provides a robust legal framework for disability rights, having been enacted to give effect to the UNCRPD. The Act defines disability through a social and relational lens, establishes enforceable rights with punitive measures for violations, and expands recognised disability categories from seven to twenty-one. Users can operationalise RPD principles when formulating AI prompts to generate rights-based, non-discriminatory outputs.
The Act establishes several critical principles relevant to prompt formulation. First, non-discrimination and equality provisions (Sections 3-5) establish that disabled persons possess inviolable rights to non-discrimination and equal treatment. Prompts should frame disability rights as non-negotiable entitlements rather than charitable concessions. Instead of asking “How can we help disabled people access services?” users should formulate: “What legal obligations do service providers have under Section 46 of the RPD Act 2016 to ensure accessibility for disabled persons?”
Second, the Act’s definition of reasonable accommodation (Section 2) includes “necessary and appropriate modification and adjustments not imposing a disproportionate or undue burden,” consistent with UNCRPD Article 2. Prompts should invoke this principle explicitly: “Explain how employers can implement reasonable accommodations for employees with disabilities under the RPD Act 2016, providing specific examples across diverse disability categories.”
Third, provisions regarding access to justice (Section 12) ensure that disabled persons can exercise the right to access courts, tribunals, and other judicial bodies without discrimination. Prompts concerning legal processes should centre this right: “Describe how judicial systems can ensure accessible court proceedings for disabled persons, including documentation formats and communication support, as mandated by Section 12 of the RPD Act 2016.”
Fourth, Chapter III of the Act mandates that appropriate governments ensure accessibility in physical environments, transportation, information and communications technology, and other facilities. Prompts should reference these obligations: “Identify specific digital accessibility requirements under the RPD Act 2016 for government websites and mobile applications, including compliance timelines and enforcement mechanisms.”
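For readers who want to reuse these framings, the sketch below simply collects the four example prompts quoted above into a small template library. The dictionary keys are illustrative names introduced here; the provision references are those cited in the text and should be verified against the RPD Act 2016 before being relied upon.

```python
# The four rights-based prompt patterns above, collected into a reusable
# template library. Keys are illustrative; verify section references against
# the RPD Act 2016 before relying on them.

RPD_PROMPTS = {
    "non_discrimination": (
        "What legal obligations do service providers have under Section 46 of "
        "the RPD Act 2016 to ensure accessibility for disabled persons?"
    ),
    "reasonable_accommodation": (
        "Explain how employers can implement reasonable accommodations for "
        "employees with disabilities under the RPD Act 2016, providing specific "
        "examples across diverse disability categories."
    ),
    "access_to_justice": (
        "Describe how judicial systems can ensure accessible court proceedings "
        "for disabled persons, including documentation formats and communication "
        "support, as mandated by Section 12 of the RPD Act 2016."
    ),
    "accessibility_obligations": (
        "Identify specific digital accessibility requirements under the RPD Act "
        "2016 for government websites and mobile applications, including "
        "compliance timelines and enforcement mechanisms."
    ),
}

if __name__ == "__main__":
    for key, prompt in RPD_PROMPTS.items():
        print(f"[{key}]\n{prompt}\n")
```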
The empirical evidence regarding intersectional disability bias has particular salience in the Indian context, where caste-based discrimination compounds disability marginalisation. Research has documented that when disability intersects with marginalised caste and gender identities, ableist harms increase substantially, with tokenism rising significantly when gender minorities are included and further intensifying with caste minorities.
India’s constitutional framework and the RPD Act recognise the need for intersectional approaches to disability rights. Prompts formulated for Indian contexts should explicitly acknowledge these intersecting marginalisations: “Analyse how the intersection of disability, caste, and gender affects access to employment opportunities in India, referencing relevant provisions of the RPD Act 2016 and constitutional protections against discrimination on multiple grounds.” Furthermore, prompts should interrogate how AI systems may replicate caste-based prejudices when processing disability-related queries: “Examine potential biases in AI-based disability assessment systems in India, considering how algorithmic decision-making might perpetuate existing caste-based and gender-based discrimination against disabled persons from marginalised communities.”
Whilst responsible prompting constitutes an essential harm reduction strategy, it cannot resolve systemic biases embedded in training data and model architectures. Comprehensive bias mitigation requires intervention at multiple stages of the AI development pipeline.
Training datasets must include diverse, representative data reflecting the experiences of disabled persons across multiple identity categories. Research has demonstrated that biased training data produces skewed model outputs, necessitating conscious efforts to ensure disability representation in corpora. Furthermore, datasets should be annotated to identify and flag potentially harmful stereotypes.
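As a minimal sketch of what such annotation might look like, the code below flags corpus examples containing a small, hypothetical list of stereotype phrases so they can be reviewed, reweighted, or excluded before training. The flag list is illustrative only; real annotation schemes should be developed and led by disabled reviewers.

```python
# Illustrative sketch of annotating a text corpus with flags for potentially
# ableist stereotypes, so flagged examples can be reviewed, reweighted, or
# excluded before training. The flag list is hypothetical and deliberately
# small; real annotation work should be led by disabled reviewers.

from dataclasses import dataclass, field

STEREOTYPE_FLAGS = {
    "inspiration_framing": ["so inspiring", "despite their disability"],
    "deficit_framing": ["suffers from", "confined to a wheelchair"],
}


@dataclass
class AnnotatedExample:
    text: str
    flags: list[str] = field(default_factory=list)


def annotate(corpus: list[str]) -> list[AnnotatedExample]:
    """Attach stereotype flags to each example in `corpus`."""
    annotated = []
    for text in corpus:
        lowered = text.lower()
        flags = [name for name, phrases in STEREOTYPE_FLAGS.items()
                 if any(p in lowered for p in phrases)]
        annotated.append(AnnotatedExample(text=text, flags=flags))
    return annotated


if __name__ == "__main__":
    sample = ["The article called him so inspiring for commuting to work.",
              "The office installed a ramp and adjustable desks."]
    for ex in annotate(sample):
        print(ex.flags or ["no flags"], "->", ex.text)
```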
Developers can implement fairness-aware algorithms that employ techniques such as equalised odds, demographic parity, and fairness through awareness to reduce discriminatory outputs. These mechanisms adjust decision boundaries to ensure similar treatment for individuals with comparable qualifications regardless of disability status. Recent research has demonstrated that such interventions can substantially reduce bias when properly implemented.
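To illustrate two of the named criteria, the sketch below computes a demographic parity gap (difference in selection rates) and an equalised-odds gap on the true-positive side (difference in shortlisting rates among qualified candidates) across disability status. The data is synthetic and purely illustrative; in practice such metrics feed into threshold adjustment or constrained training rather than being reported in isolation.

```python
# A minimal sketch of two group-fairness checks, computed from model decisions
# grouped by disability status. All data here is synthetic and illustrative.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of candidates shortlisted (decision == 1)."""
    return sum(decisions) / len(decisions)


def true_positive_rate(decisions: list[int], labels: list[int]) -> float:
    """Fraction of qualified candidates (label == 1) who were shortlisted."""
    positives = [d for d, y in zip(decisions, labels) if y == 1]
    return sum(positives) / len(positives) if positives else 0.0


# Synthetic hiring decisions (1 = shortlisted) and ground-truth suitability labels.
disabled = {"decisions": [1, 0, 0, 1, 0], "labels": [1, 1, 0, 1, 1]}
non_disabled = {"decisions": [1, 1, 0, 1, 1], "labels": [1, 1, 0, 1, 1]}

# Demographic parity: selection rates should be similar across groups.
dp_gap = abs(selection_rate(disabled["decisions"]) -
             selection_rate(non_disabled["decisions"]))

# Equalised odds (true-positive side): qualified candidates should be
# shortlisted at similar rates regardless of disability status.
tpr_gap = abs(true_positive_rate(disabled["decisions"], disabled["labels"]) -
              true_positive_rate(non_disabled["decisions"], non_disabled["labels"]))

print(f"Demographic parity gap: {dp_gap:.2f}")
print(f"Equalised-odds TPR gap: {tpr_gap:.2f}")
```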
Disability-inclusive AI development requires the direct participation of disabled persons in design, testing, and governance processes. Co-design methodologies enable disabled users to shape AI systems according to their lived experiences and needs. Research has shown that participatory approaches improve accessibility outcomes whilst reducing the risk of perpetuating harmful stereotypes.
As AI systems become increasingly pervasive in decision-making domains, robust regulatory frameworks are essential to ensure disability rights compliance. The RPD Act 2016 provides enforcement mechanisms, including special courts and penalties for violations. However, explicit guidance regarding AI systems’ obligations under the Act remains underdeveloped.
International regulatory efforts offer instructive models. The European Union’s AI Act mandates accessibility considerations and prohibits discrimination, whilst the United States has begun applying the Americans with Disabilities Act to digital systems. India should develop clear regulatory guidance specifying how AI developers and deployers must ensure compliance with RPD accessibility and non-discrimination provisions.
Key regulatory priorities should include: (1) mandatory accessibility standards for AI systems used in education, employment, healthcare, and government services; (2) bias auditing requirements obligating developers to test systems for disability-related discrimination before deployment; (3) transparency mandates requiring disclosure of training data sources, known biases, and mitigation strategies; (4) enforcement mechanisms with meaningful penalties for violations and accessible complaint processes for disabled users; and (5) participatory governance structures ensuring disabled persons’ representation in AI policy development and oversight.
Prompting practices matter because they shape the epistemological frameworks through which AI systems construct knowledge about disability. When users formulate queries grounded in medical model assumptions, deficit narratives, and ableist stereotypes, they train AI systems—both directly through prompt interactions and indirectly through data that subsequently enters training corpora—to reproduce these discriminatory frameworks.
However, prompting alone cannot resolve structural inequities in AI development. Comprehensive solutions require diverse training data, disability-led design processes, algorithmic fairness mechanisms, regulatory oversight, and meaningful accountability structures. These interventions must be grounded in the social model of disability and the rights-based frameworks established under the UNCRPD and India’s RPD Act 2016.
The path towards epistemic justice in AI demands sustained collaboration amongst technologists, disability advocates, policymakers, and disabled persons themselves. It requires recognition that disabled persons are not passive subjects of technological innovation but active agents entitled to shape the systems that affect their lives. Most fundamentally, it necessitates a paradigm shift: from viewing disability as individual pathology requiring correction to understanding disability as a dimension of human diversity that society has a legal and moral obligation to accommodate.
As AI systems increasingly mediate access to education, employment, healthcare, and civic participation, the stakes could not be higher. Ableist AI systems risk creating digital barriers that compound existing physical, social, and institutional exclusions. Conversely, thoughtfully designed, rigorously audited, and rights-centred AI systems could advance disability justice by identifying accessibility barriers, facilitating reasonable accommodations, and supporting autonomous decision-making.
The choice is not between AI and no AI. The choice is between AI systems that perpetuate ableist assumptions and AI systems designed to advance the rights and dignity of disabled persons. Prompting practices, whilst insufficient alone, constitute one essential component of this larger transformation. Each query formulated with attention to disability rights principles represents a small but significant intervention in the knowledge production processes shaping AI outputs.
In the end, AI systems reflect the values embedded in their design, training data, and use patterns. If society continues to approach AI without interrogating the ableist assumptions encoded in everyday language, these systems will amplify discrimination at an unprecedented scale. But if users, developers, policymakers, and disabled persons collectively insist on rights-based frameworks, participatory design, and accountable governance, AI might yet become a tool for advancing rather than undermining disability justice. The conversation with AI begins with the prompt. The conversation about AI must begin with rights, representation, and recognition of disabled persons’ expertise and authority. Both conversations are essential. Both require sustained commitment to challenging ableism at every level—from individual queries to systemic infrastructures. The work of building disability-centred AI is urgent, complex, and profoundly consequential. It is also, ultimately, a matter of justice.
References
Americans with Disabilities Act, 2024. Title II and Title III Technical Assistance. Department of Justice.
Bolukbasi, T., Chang, K.W., Zou, J., Saligrama, V. and Kalai, A., 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29, pp.4349-4357.
Buolamwini, J. and Gebru, T., 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability and Transparency, pp.77-91.
Carr, S. and Wicks, P., 2020. Participatory design and co-design for health technology. The Handbook of eHealth Evaluation: An Evidence-based Approach, 2, pp.1-25.
Crenshaw, K., 1989. Demarginalizing the intersection of race and sex: A Black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum, 1989(1), pp.139-167.
European Commission, 2023. Proposal for a Regulation laying down harmonised rules on artificial intelligence. Brussels.
Foucault, M., 1980. Power/Knowledge: Selected Interviews and Other Writings 1972-1977. Pantheon Books.
Goggin, G., 2018. Disability, technology and digital culture. Society and Space, 36(3), pp.494-508.
Government of India, 2016. The Rights of Persons with Disabilities Act, 2016. Ministry of Law and Justice.
Kaur, R., 2021. Disability, resistance and intersectionality: An intersectional analysis of disability rights in India. Disability & Society, 36(4), pp.523-541.
Newman-Griffis, D., Fosler-Lussier, E. and Lai, V.D., 2023. How disability models influence AI system design: Implications for ethical AI development. Proceedings of the 2023 Conference on Fairness, Accountability, and Transparency, pp.456-471.
Oliver, M., 1990. The Politics of Disablement. Macmillan.
Phutane, A., Sharma, R., Deshpande, A. and Kumar, S., 2025. The ABLEIST framework: Measuring intersectional ableist bias in large language models. Journal of Disability Studies and AI Ethics, 12(1), pp.45-89.
Author Anonymous, 2024. Scoping review of disability representation in AI research. AI and Society, 39(2), pp.201-220.
Shakespeare, T., 2014. Disability Rights and Wrongs Revisited. Routledge.
United Nations, 2006. Convention on the Rights of Persons with Disabilities. New York.
Young, S., 2014. I’m Not Your Inspiration, Thank You Very Much. TED Talk. Retrieved from: www.ted.com