This article examines the intersections of Artificial Intelligence, accessibility, and disability-led design, asking who is included when machines define what counts as valid human expression. It argues that prototypes often prioritise machinic legibility over genuine access, reinforcing bias and tokenistic participation. By centring disabled knowledge and experience in AI development, the piece calls for systems that adjust to human diversity rather than requiring conformity to data-driven norms.
This article explores how user prompts shape AI responses, especially in matters of disability, accessibility, and inclusion. It argues that bias often enters through the way questions are framed, revealing social assumptions about disabled persons. Through examples of good and bad prompting, it highlights disability etiquette, rights-based language, and accountability. It encourages users to treat prompting as a powerful tool to challenge ableism rather than reinforce it.
India’s AI policy cannot afford to treat disability inclusion as an afterthought. In light of the Rajive Raturi judgment and NALSAR’s Finding Sizes for All report, we must move from voluntary guidelines to mandatory standards. Drawing lessons from the EU AI Act, this article argues for disability-inclusive, rights-based AI governance that centres accessibility, universal design, and accountability.