Artificial Intelligence promises inclusion but frequently delivers exclusion. This article examines how technoableism—the ideology that frames disability as a technological problem rather than a matter of civil rights—becomes embedded in AI systems through the data pipeline. From voice recognition failures that deny agency to persons with speech disabilities, to hiring algorithms that systematically discriminate, to large language models that reproduce cultural stereotypes, algorithmic bias is not a technical error but ideological infrastructure. Drawing upon disability studies scholarship, empirical research, and India’s constitutional frameworks—particularly the Rajive Raturi Supreme Court judgment establishing accessibility as a fundamental right—this investigation maps how ableist assumptions translate into automated exclusion. The path forward demands structural transformation: disability-led participatory design, mandatory accessibility standards, equity-based evaluation frameworks, and genuine alignment between India’s AI strategy and the UNCRPD, the RPwD Act 2016, and judicial precedent. When disability leads, AI learns to listen.
This article examines the intersections of Artificial Intelligence, accessibility, and disability-led design, asking who is included when machines define what counts as valid human expression. It argues that AI prototypes often prioritise machinic legibility over genuine access, reinforcing bias and tokenistic participation. By centring disabled knowledge and experience in AI development, the piece calls for systems that adjust to human diversity rather than require conformity to data-driven norms.
This article explores how user prompts shape AI responses, especially in matters of disability, accessibility, and inclusion. It argues that bias often enters through the way questions are framed, revealing social assumptions about disabled persons. Through examples of good and bad prompting, it highlights disability etiquette, rights-based language, and accountability. It encourages users to treat prompting as a powerful tool to challenge ableism rather than reinforce it.
India’s AI policy cannot afford to treat disability inclusion as an afterthought. In light of the Rajive Raturi judgment and NALSAR’s “Finding Sizes for All” report, we must move from voluntary guidelines to mandatory standards. Drawing lessons from the EU AI Act, this article argues for disability-inclusive, rights-based AI governance that centres accessibility, universal design and accountability.
This whitepaper offers a rigorous legal and technical critique of MeitY’s India AI Governance Guidelines through the lens of disability rights. Building on the open letter and the Supreme Court’s Rajive Raturi ruling, it exposes how the guidelines fail to address AI bias, accessibility, and enforceable inclusion. Drawing on the RPwD Act, UNCRPD, and the NALSAR “Finding Sizes for All” report, it outlines actionable reforms to embed accessibility and anti-discrimination into India’s AI policy framework.
Algorithmic bias quietly shapes accessibility, often to the detriment of disabled people. AI systems that appear neutral may misrecognise disabled bodies and communication styles, exclude assistive-technology users, or automate decisions that replicate historic discrimination. Under the UNCRPD and India’s RPwD Act, accessibility is a right; the EU AI Act shows how regulation can target high-risk systems and require safeguards. Disabled persons and their organisations should insist on inclusive data, accessible design, transparency and human oversight. Policymakers must mandate audits and redress; developers must embed accessibility by design. With rights-based governance and meaningful participation, AI can enhance rather than erode accessibility. Disabled voices must lead policy and practice.
While discussions of AI’s impact on women’s economic participation focus on time poverty and upskilling gaps, they often overlook a critical intersection: women with disabilities face a “triple burden” of paid work, unpaid care, and the invisible labour of navigating ableist systems. This rejoinder argues that AI systems—from hiring algorithms that penalise employment gaps to speech recognition that fails dysarthric voices—don’t merely reflect existing inequalities but actively compound them. True inclusion requires disability-disaggregated data collection, accessibility designed from inception rather than retrofitted, and mandatory audits for intersectional algorithmic bias. Until India’s development vision accounts for citizens who move, speak, and think differently, technological progress risks deepening rather than bridging those divides.