Available for invited lectures, workshops and policy dialogues on accessibility, design and governance.
Artificial Intelligence is often heralded as a force for inclusion, yet in practice it frequently renders disability invisible through the subtle operations of technoableism. The Bias Pipeline interrogates how data, design, and policy infrastructures, governed by technoableist logic, determine who becomes legible to machines and who is systematically excluded. This platform extends the dialogue begun in "Prototype — Accessible to Whom? Legible to What?", assembling essays, analyses, and resources on AI, accessibility, and the critical role of disability-led design.
The site is anchored in three objectives:
To reveal how bias, rooted in technoableist assumptions, permeates every stage from data to decision.
To champion accessible technology and participatory, disability-led design.
To urge policymakers, including NITI Aayog, to align India's AI initiatives with the UNCRPD, the RPwD Act 2016, and the Supreme Court's Rajive Raturi judgment.
Here, accessibility is no secondary concern; it constitutes the foundation of our architectural ethos.
Only when disabled expertise leads shall AI truly learn to listen.
Technoableism and the Bias Pipeline: How Ableist Ideology Becomes Algorithmic Exclusion
Disability-Smart Prompts: Challenging ableism in everyday AI use
Human in the Loop: Artificial Intelligence, Disability, and Hidden Ableism
How Algorithmic Bias Shapes Accessibility: What Disabled People Ought to Know