Showing posts with label AI ethics.

Thursday, 26 February 2026

AI for All or Exclusion by Default? Open Letter to PM Narendra Modi on Disability Bias in Artificial Intelligence, Accessibility Challenges, and Lessons from India AI Impact Summit 2026 – Addressing TechnoAbleism in India's AI Policy and Governance

Date: 26/02/2026

To,

Shri Narendra Modi Ji
Hon’ble Prime Minister of India
South Block, New Delhi

Subject: On Artificial Intelligence, Disability Bias, and the Meaning of “AI for All”

Hon’ble Prime Minister,

Namaskar.

I write to you not as a technologist, nor as a lawyer by formal training. I write as a citizen who lives with disability, and as someone who has had to understand both law and technology simply in order to participate in ordinary life. Much of what I know about systems has not been learnt in classrooms. It has been learnt at doorways without ramps, on websites without structure, and in digital forms that could not be completed without assistance.

Exclusion rarely announces itself. It is usually designed quietly.

At the India AI Impact Summit 2026, when your address was translated in real time through an AI-powered sign language avatar, I watched carefully. It was an impressive demonstration, certainly. But it was also something more subtle. For a brief moment, Access was visible. It was not an afterthought. It stood alongside innovation, not behind it. That visibility matters. It signals direction.

I would also like to draw your attention to an article entitled “TechnoAbleism in India’s AI Moment: Why Accessibility Is Not Enough”, which Moneylife published on 17 February 2026, coinciding with the India AI Impact Summit’s session on disability. The piece shows that this issue is already the subject of public discussion and media scrutiny, which underlines the urgency of treating accessibility and disability bias as central elements of India’s AI programme.

Yet direction must be followed by design.

We speak today of “AI for All.” It is a powerful phrase. But if it is to carry meaning, it must confront a difficult truth: artificial intelligence systems, as they are presently trained and deployed across the world, tend to absorb and reproduce the biases already present in society. Disability is not excluded intentionally. It is excluded structurally.

Artificial intelligence learns from data. That data is drawn from the world as it has been recorded. The recorded world, especially the digital one, reflects certain assumptions about how a person moves, speaks, types, sees, processes information, and builds a career. The so-called average user becomes the reference point. Systems are optimised around that reference point. Others are accommodated only when someone remembers to ask.

In such systems, disability becomes an exception.

This becomes visible in small but telling ways. When generative AI tools are asked to create websites or applications, they often produce code that assumes mouse navigation, adequate vision, and conventional interaction patterns. Keyboard accessibility may not be complete. Structural markup for screen readers may be missing. Alternative text may not be generated unless explicitly requested. Colour contrast frequently fails established accessibility norms.

Unless explicitly requested, accessibility does not appear by default.

That word, default, is where the real issue lies.

Under the Rights of Persons with Disabilities Act, 2016, and under India’s obligations pursuant to the United Nations Convention on the Rights of Persons with Disabilities, accessibility is not optional. It is not decorative. It is a matter of Equality and Dignity. The Hon’ble Supreme Court has affirmed that accessibility is foundational to the exercise of fundamental rights. Without access, rights remain theoretical.

When artificial intelligence begins to generate systems at scale, inaccessible design also begins to scale. What was once a single inaccessible website becomes hundreds. What was once a human oversight becomes an automated pattern. Exclusion is no longer episodic. It is multiplied.

A citizen need not be denied formally. She may simply be unable to use what has been built.

India has articulated an ambitious artificial intelligence architecture, extending from infrastructure and compute to foundational models and applications. The vision is large. The confidence is visible. But I worry about timing. If disability is considered only at the application stage, after the underlying models have already been trained on datasets that insufficiently represent disability experience, then correction later will be partial and costly.

Bias does not remain soft once embedded. It settles into systems.

We have seen, in other technological domains, a familiar cycle. Innovation is celebrated. Adoption expands rapidly. Harm becomes visible only after scale has been achieved. Regulation then attempts to repair what might have been prevented. Artificial intelligence operates at a velocity and magnitude that make delayed correction far more difficult.

The Book of Proverbs says, “Where there is no vision, the people perish.” I do not read that verse as theological warning. I read it as policy advice. Vision must mean foresight: asking who is not being seen.

Around the world, governments have begun to grapple with these questions. The European Union has enacted an Artificial Intelligence Act that links AI governance explicitly to fundamental rights and non-discrimination. High-risk systems are subject to structured assessment and documentation. Bias audits and impact assessments are becoming part of regulatory vocabulary in several jurisdictions. The conversation is no longer limited to efficiency. It includes fairness.

India, as a State Party to the UN Convention on the Rights of Persons with Disabilities, is already bound by obligations to ensure equal access to information and communication technologies. These commitments do not diminish because technology evolves. If anything, their relevance increases.

This is not an argument for importing foreign law. It is an argument for aligning our technological progress with our own constitutional morality.

There is another dimension that requires attention, and it cannot be resolved by rhetoric alone. We need structured, publicly supported research on disability bias in artificial intelligence systems. Not assumption. Not symbolic inclusion. Research.

Datasets must be examined for representational gaps. Model outputs must be tested systematically across disability-related contexts. Evaluation metrics must measure performance across diverse sensory and cognitive realities. Without such empirical work, we shall continue to debate in abstraction.

Artificial intelligence is not only engineering. It touches law, sociology, governance, ethics, and lived experience. Universities such as NALSAR and other institutions working at the intersection of law and public policy ought to collaborate with technical institutes developing AI systems. Organisations grounded in disability rights must be involved as knowledge partners, not merely consulted at the end.

Public funding is being directed towards compute capacity, innovation ecosystems, and model development. A focused allocation for research on AI and disability bias would not be disproportionate. 

Yet its impact would be long-term and structural.

Should the Government of India undertake such a structured research initiative on artificial intelligence and disability bias, I would respectfully seek to be involved in that effort. For several years, I have been examining this question in depth and have maintained a dedicated platform, thebiaspipeline.nileshsingit.org, where I have written extensively on disability bias in digital systems and AI. While many organisations in India are rightly focused on accessibility compliance, very few are examining algorithmic bias itself as a systemic concern. I believe my sustained work in this area positions me to contribute meaningfully to any national research initiative. Significant public resources are presently being invested in artificial intelligence. If disability bias is not studied with equal seriousness, an important dimension of inclusion risks being overlooked. The promise of “Sabka Saath, Sabka Vikas” cannot be realised if persons with disabilities are not structurally included in the design and evaluation of emerging technologies.

Over the past year, I wrote to the Ministry of Electronics and Information Technology and to NITI Aayog when national AI policy discussions were underway. My intention was simple: to place before them the structural concerns surrounding disability bias in AI systems. I have not received substantive responses. I mention this not as complaint, but as indication that this dimension has not yet been treated with the seriousness it deserves.

The phrase “human in the loop” is often used in AI governance. It is a reassuring phrase. Machines, we are told, shall not decide alone. But one must ask quietly: whose humanity is present in that loop?

As Shakespeare wrote, “What is the city but the people?” If oversight committees and review boards do not include disability expertise, certain harms will remain invisible. Representation in governance is not ceremonial. It is epistemic.

India stands at a formative moment. Our AI ecosystem is still being shaped. The choices being made now will determine whether exclusion is prevented or automated. If accessibility standards are embedded by default in publicly funded AI systems; if Disability Impact Assessments become routine for high-stakes deployments; if datasets are audited honestly; if disability expertise is included in national AI councils and technical bodies; then India may demonstrate that technological leadership and social Justice are not adversaries.

They may strengthen one another.

If accessibility remains secondary, we shall eventually attempt repair. Repair is always more expensive than foresight.

Hon’ble Prime Minister, artificial intelligence may indeed represent a civilisational opportunity. It is also a moral test. Let Access be built into foundations, not attached later. Let Inclusion be structural, not symbolic. Let Equality be measurable in code, not only declared in speech.

I place these reflections before you with respect and with hope.


Jai Hind. 


Yours sincerely,

Nilesh Singit

Tuesday, 17 February 2026

TechnoAbleism in India’s AI Moment: Why Accessibility Is Not Enough

[Illustration: people with disabilities interacting with digital systems, surrounded by AI symbols, datasets, and decision interfaces.]
When artificial intelligence is built on narrow assumptions of the “normal” user, accessibility features alone cannot prevent exclusion embedded within the algorithm itself.

India’s present moment in artificial intelligence is often described in terms of innovation, opportunity, and national technological leadership. The India AI Impact Summit brings global attention to how artificial intelligence is shaping governance, development, and social transformation. 

Within these discussions, disability is increasingly visible through conversations on accessibility, assistive technologies, and digital inclusion. This attention is important. For many years, disability was largely absent from technology policy debates. Yet, a deeper issue remains insufficiently examined: accessibility alone does not ensure inclusion when artificial intelligence systems themselves are shaped by structural bias.

Accessibility and bias are frequently treated as interchangeable ideas. They are not the same. Accessibility determines whether a person with disability can use a system. Bias determines whether the system was designed with that person in mind at all. When systems are built around assumptions about a so-called normal user, accessible interfaces merely allow disabled persons to enter environments that continue to exclude them through their internal logic. The interface may be open; the opportunity may still be closed.

This structural problem becomes visible in the rapidly expanding practice often called ‘vibe coding’, where developers use generative AI tools to create websites and software through simple prompts. When an AI coding assistant is asked to generate a webpage, the default output usually prioritises visual layouts, mouse-dependent navigation, and animation-heavy design. Accessibility features such as semantic structure, keyboard navigation, or screen-reader compatibility rarely appear unless they are explicitly demanded. The system has learned that the ‘default’ user is non-disabled because that assumption dominates the data from which it learned. As these outputs are reproduced across applications and services, exclusion becomes quietly automated.
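The gap is easy to surface mechanically. The sketch below is a minimal illustration, not a production audit: the class name and the handful of checks are my own, and a real audit would rely on established tools such as axe-core and the full WCAG success criteria. It simply shows how a few lines of Python can flag the omissions that typically characterise unprompted AI-generated markup: images without alternative text, no document language, click handlers on elements a keyboard cannot reach, and unlabelled form fields.

```python
from html.parser import HTMLParser

class AccessibilityAudit(HTMLParser):
    """Flags a few common accessibility gaps in generated markup.

    Illustrative only: these checks are a small, hypothetical subset of
    what tools such as axe-core or the WCAG criteria actually cover.
    """
    def __init__(self):
        super().__init__()
        self.issues = []
        self.labelled_ids = set()
        self.input_ids = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "html" and "lang" not in a:
            self.issues.append("<html> missing lang attribute")
        if tag == "img" and not a.get("alt"):
            self.issues.append("<img> without alt text")
        if tag == "div" and "onclick" in a:
            self.issues.append("clickable <div>: not reachable by keyboard")
        if tag == "label" and "for" in a:
            self.labelled_ids.add(a["for"])
        if tag == "input" and a.get("type") not in ("hidden", "submit", "button"):
            self.input_ids.append(a.get("id"))

    def report(self):
        # Inputs never matched to a <label for="..."> are unlabelled.
        for input_id in self.input_ids:
            if input_id is None or input_id not in self.labelled_ids:
                self.issues.append("<input> without an associated <label>")
        return self.issues

# Markup of the kind a coding assistant often emits unless accessibility
# is explicitly requested in the prompt.
generated = (
    '<html><body>'
    '<img src="hero.png">'
    '<div onclick="start()">Get started</div>'
    '<input type="text">'
    '</body></html>'
)

audit = AccessibilityAudit()
audit.feed(generated)
for issue in audit.report():
    print(issue)
```

Every one of the four issues printed here corresponds to a barrier that never announces itself to a non-disabled developer reviewing the page visually, which is precisely how such defaults propagate unnoticed.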

Bias also appears in the decision-making systems that increasingly shape employment, education, financial access and public services. Hiring systems that analyse speech, expression, or behavioural patterns may interpret disability-related communication styles as indicators of low confidence or low performance. Speech recognition tools often struggle with atypical speech patterns. Vision systems may fail to recognise assistive devices correctly. These outcomes are not isolated technical errors. They arise because disability is often missing from training datasets, testing environments and design teams. When disability is absent from the design stage, the system internalises non-disabled behaviour as the baseline expectation.

Another less visible dimension of bias emerges from the way artificial intelligence systems classify behaviour. Many systems are trained to recognise patterns associated with what developers consider efficient, confident or normal interaction. When human diversity falls outside those patterns, the system may interpret difference as error. Research in AI ethics repeatedly shows that classification models tend to perform poorly when training datasets do not adequately represent disabled users, leading to systematic misinterpretation of speech, movement or communication styles. 

These classification failures are rarely dramatic; they appear as small inaccuracies that accumulate over time. A speech interface that repeatedly fails to understand a user, an automated assessment tool that consistently undervalues atypical communication, or a recognition system that misidentifies assistive devices can gradually shape unequal access to opportunities. As these outcomes arise from technical assumptions rather than explicit discrimination, they often remain invisible in public debates, even as their effects are widely experienced.

These patterns together reflect what disability scholars describe as techno-ableism: the tendency of technological systems to appear empowering while quietly reinforcing assumptions that favour non-disabled ways of functioning. Technologies may expand participation on the surface, yet the intelligence embedded within them continues to treat disability as deviation rather than diversity. A person with disability may be able to access the interface, log into the system or navigate the platform, yet still face exclusion through hiring algorithms, recognition systems, or automated decision tools that were never designed around diverse bodies and minds. The experience is not exclusion from technology, but exclusion within technology itself.

Public discussions frequently present disability mainly through assistive innovation: tools that help blind users read text, applications that assist persons with mobility impairments or systems designed for specific accessibility functions. These innovations are valuable and necessary. However, when disability appears only in assistive contexts, it is positioned as a specialised technological niche rather than a structural dimension of all artificial intelligence systems. The mainstream design pipeline continues to assume the non-disabled user as the default, while disability inclusion becomes an add-on layer introduced later.

India currently stands at a formative stage in shaping its artificial intelligence ecosystem. As public digital infrastructure, governance platforms and automated service systems expand, the assumptions embedded in present design choices will influence social participation for decades. If accessibility becomes the only measure of inclusion, structural bias risks becoming embedded within the foundations of emerging technological systems. Inclusion then becomes symbolic rather than substantive: systems appear inclusive because they are accessible, yet continue to produce unequal outcomes.

From the standpoint of persons with disabilities, this distinction is deeply personal. Accessibility determines whether we can interact with the system. Bias determines whether the system recognises us as equal participants once we enter. Accessible platforms built upon biased intelligence do not remove barriers; they simply move the barrier from the interface to the algorithm.

As a disability rights practitioner working at the intersection of law, accessibility, and technology, I view the present expansion of AI discussions with cautious attention. Disability is finally visible in national technology conversations, yet the focus remains concentrated on accessibility demonstrations rather than the deeper question of structural bias. Artificial intelligence will increasingly shape employment, governance, education and everyday social participation. Whether these systems expand equality or quietly reproduce exclusion will depend not only on whether they are accessible, but also on whose experiences shape the data, assumptions, and decision rules within them.

Accessibility opens the door; fairness determines what happens after entry. Without confronting bias directly, technological progress risks creating a future that is digitally reachable yet socially unequal for many persons with disabilities. Many of the issues discussed here, including the structural relationship between accessibility and algorithmic bias, are explored in greater detail at The Bias Pipeline (https://thebiaspipeline.nileshsingit.org), where readers may engage with further analysis.

References

  • India AI Impact Summit official information portal, Government of India.
  • Coverage of summit accessibility and inclusion themes, Business Standard and related reporting.
  • United Nations and global policy discussions on AI and disability inclusion.
  • Nilesh Singit, The Bias Pipeline, https://thebiaspipeline.nileshsingit.org/

(Nilesh Singit is a disability rights practitioner and accessibility strategist working at the intersection of law, governance, and AI inclusion. A Distinguished Research Fellow at the Centre for Disability Studies, NALSAR University of Law, he writes on accessibility, techno-ableism, and algorithmic bias at www.nileshsingit.org)



First published on Moneylife.in, 17 February 2026.


Friday, 26 December 2025

Prototype — Accessible to Whom? Legible to What?

 

Abstract

Artificial Intelligence (AI) has transformed the terrain of possibility for assistive technology and inclusive design, but continues to perpetuate complex forms of exclusion rooted in legibility, bias, and tokenism. This paper critiques current paradigms of AI prototyping that centre “legibility to machines” over accessibility for disabled persons, arguing for a radical disability-led approach. Drawing on international law, empirical studies, and design scholarship, the analysis demonstrates why prototyping is neither neutral nor technical, but a deeply social and political process. Building from case studies in recruiting, education, and healthcare technology failures, this work exposes structural biases in training, design, and implementation—challenging designers and policymakers to move from “designing for” and “designing with” to “designing from” disability and difference.

Introduction

Prototyping is celebrated in engineering and design as a space for creativity, optimism, and risk-taking—a laboratory for the future. Yet, for countless disabled persons, the prototype is also where inclusion begins… or ends. For them, optimism is often tempered by the unspoken reality that exclusion most often arrives early and quietly, disguised as technical “constraints,” market “priorities,” or supposedly “objective” code. When prototyping occurs, it rarely asks: accessible to whom, legible to what?

This question—so simple, so foundational—is what this paper interrogates. The rise of Artificial Intelligence has intensified the stakes because AI prototypes increasingly determine who is rendered visible and included in society’s privileges. Legibility, not merely accessibility, is becoming the deciding filter; if one’s body, voice, or expression cannot be rendered into a dataset “comprehensible” to AI, one may not exist in the eyes of the system. Thus, we confront a new and urgent precipice: machinic inclusion, machinic exclusion.

This work expands the ideas presented in recent disability rights speeches and debates, critically interrogating how inclusive design must transform both theory and practice in the age of AI. It re-interprets accessibility as a form of knowledge and participation—never a technical afterthought.

Accessibility as Relational, Not Technical

Contemporary disability studies and the lived experiences of activists reject the notion that accessibility is a mere checklist or add-on. Aimi Hamraie suggests that “accessibility is not a technical feature but a relationship—a continuous negotiation between bodies, spaces, and technologies.” Just as building a ramp after a staircase is an act of remediation rather than inclusion, most AI prototyping retrofits accessibility, on the grounds that it is too late, too difficult, or too expensive to embed inclusiveness from the outset.

Crucially, these arguments reflect broader epistemologies: those who possess the power to design, define the terms of recognition. Accessibility is not simply about “opening the door after the fact,” but questioning why the door was placed in an inaccessible position to begin with.

This critique leads us to re-examine prototyping practices through a disability lens, asking not only “who benefits” but also “who is recognised.” Evidence throughout the AI industry reveals a persistent confusion between accessibility for disabled persons and legibility for machines, a theme critically examined in subsequent sections.

Legibility and the Algorithmic Gaze

Legibility, distinct from accessibility, refers to the capacity of a system to recognise, process, and make sense of a body, voice, or action. Within the context of AI, non-legible phenomena—those outside dominant training data—simply vanish. People with non-standard gait, speech, or facial expressions are “read” by the algorithm as errors or outliers.

What are the implications of placing legibility before accessibility?

Speech-recognition models routinely misinterpret dysarthric voices, excluding those with neurological disabilities. Facial recognition algorithms have misclassified disabled expressions as “threats” or “system errors,” because their datasets contain few, if any, disabled exemplars. In the workplace, résumé-screening AI flags gaps or “unusual experience,” disproportionately rejecting those with disability-induced employment breaks. In education, proctoring platforms flag blind students for “cheating”, unable to process their lack of eye gaze at the screen as a legitimate variance.

These failures do not arise from random error. They are products of a pipeline shaped by unconscious value choices at every stage: which data are selected, how models are trained, who participates in design, and who is imagined as the “user.”

In effect, machinic inclusiveness transforms the ancient bureaucracy of bias from paper to silicon. The new filter is not the form but the invisible code.

The Bias Pipeline: What Goes In, Comes Out Biased

Bias in AI does not merely appear at the end of the process; it is present at every decision point. One stark experiment submitted pairs of otherwise identical résumés to recruitment-screening platforms: one indicated a “Disability Leadership Award” or advocacy involvement, the other did not. The algorithm ranked the “non-disability” version higher, asserting that highlighting disability meant “reduced leadership emphasis,” “focus diverted from core job responsibilities,” or “potential risk.”
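The structure of such a paired audit is simple enough to sketch. In the illustration below, score_resume is a deliberately crude, hypothetical stand-in for whatever screening model is under test; in a real audit it would query the vendor’s actual system, and its built-in penalty here merely caricatures the documented failure mode so the method has something to detect. The essence is the matched pair: two résumés identical in every respect except the disability marker.

```python
import statistics

def score_resume(text: str) -> float:
    """Toy stand-in for a screening model (hypothetical).

    The explicit penalty below caricatures the documented bias so the
    audit has something to find; a real study queries the real model.
    """
    score = 50.0 + 2.0 * text.count("Leadership")
    if "Disability" in text:
        score -= 5.0
    return score

TEMPLATE = ("Senior analyst, 8 years' experience. Led a 12-person team. "
            "Award: {award}. Skills: SQL, Python, stakeholder management.")

# Matched pairs: identical résumés except for the disability marker.
pairs = [
    (TEMPLATE.format(award="Company Leadership Award"),
     TEMPLATE.format(award="Disability Leadership Award")),
    (TEMPLATE.format(award="Young Leadership Award"),
     TEMPLATE.format(award="Disability Rights Leadership Award")),
]

gaps = [score_resume(control) - score_resume(marked)
        for control, marked in pairs]

# A consistently positive mean gap means the disability-marked résumé
# is ranked lower despite identical qualifications.
print("mean score gap:", statistics.mean(gaps))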

This is not insignificant. Empirical studies have reproduced such results across tech, finance, and education, showing systemic discrimination by design. Qualified disabled applicants are penalised for skills, achievements, and community roles that are undervalued or alien to training data.

Much as ethnographic research illuminated the “audit culture” in public welfare (where bureaucracy performed compliance rather than delivered services), so too does “audit theatre” manifest in AI. Firms invite disabled people to validate accessibility only after the design is final. In true co-design, disabled persons must participate from inception, defining criteria and metrics on equal footing. This gap—between performance and participation—is the site where bias flourishes.

The Trap of Tokenism

Tokenism is an insidious and common problem in social design. In disability inclusion, it refers to the symbolic engagement of disabled persons for validation, branding, or optics—rather than for genuine collaboration.

Audit theatre, in AI, occurs when disabled people are surveyed, “consulted,” or reviewed, but not invited into the process of design or prototyping. The UK’s National Disability Strategy was struck down by the High Court after the survey underpinning it was found not to have meaningfully involved disabled people. Even the European Union’s AI Act, lauded globally for progressive accessibility clauses, risks tokenism by mandating involvement but failing to embed robust enforcement mechanisms.

Most AI developers receive little or no formal training in accessibility. When disability emerges in their worldview, it is cast in terms of medical correction—not lived expertise. Real participation remains rare.

Tokenism has cascading effects: it perpetuates design choices rooted in non-disabled experience, licenses shallow metrics, and closes the feedback loop on real inclusion.

Case Studies: Real-World Failures in Algorithmic Accessibility

AI Hiring Platforms and the “Disability Penalty”

Automated CV-screening tools systematically rank curricula vitae containing disability-associated terms lower, even when qualifications are otherwise stronger. Amazon famously scrapped an AI recruitment tool after discovering it penalised women, but comparable audits for disability bias are scarce. Companies using video interview platforms have reported that candidates whose facial expressions are affected by stroke, autism, or other disabilities score lower because the models misinterpret them.

Online Proctoring and Educational Technology in India

During the COVID-19 pandemic, the acceleration of edtech platforms in India promised transformation. Yet blind and low-vision students were flagged as “cheating” for not making “required” eye contact with their devices. Platforms such as Zoom and Google Meet upgraded their accessibility features, but the proctoring models layered on top of such tools failed to address these core gaps.

Reports from university students showed that requests for alternative assessments or digital accommodations were often denied on the grounds of technical infeasibility.

Healthcare Algorithms and Diagnostic Bias

Diagnostic risk scores and triaging algorithms trained on narrow datasets exclude non-normative disability profiles. Health outcomes for persons with rare, chronic, or atypical disabilities are mischaracterised, and recommended interventions are mismatched.

Each failure traces back to inaccessible prototyping.

Disability-Led AI Prototyping

If the problem lies in who defines legibility, the solution lies in who leads the prototype. Disability-led design reframes accessibility—not as a requirement for “special” needs but as expertise that enriches technology. It asks not “How can you be fixed?” but “What knowledge does your experience bring to designing the machine?”

Major initiatives are emerging. Google’s Project Euphonia enlists disabled participants to re-train speech models for atypical voices, but raises ethical debates on data ownership, exploitation, and who benefits. More authentic still are community-led mapping projects where disabled coders and users co-create AI mapping tools for urban navigation, workspace accessibility, and independent living. These collaborations move slowly but produce lasting change.

When accessibility is led by disabled persons, reciprocity flourishes: machine and user learn from each other, not simply predict and consume.

Sara Hendren argues, “design is not a solution, it is an invitation.” Where disability leads, the invitation becomes mutual—technology contorts to better fit lives, not the reverse.

Policy, Law, and Regulatory Gaps

The European Union’s AI Act is rightly lauded for Article 16 (mandating accessibility for high-risk AI systems) and Article 5 (forbidding exploitation of disability-related vulnerabilities), as well as for its public-consultation provisions. Yet the law lacks actionable requirements for collecting disability-representative data—and overlooks the intersection of accessibility, data ownership, and research ethics.

India’s National Strategy for Artificial Intelligence, along with “AI for Inclusive Societal Development,” claims “AI for All” but omits specific protections, data models, or actionable recommendations for disabled persons—this despite the Supreme Court’s Rajive Raturi judgment upholding accessibility as a fundamental right. Implementation of the Rights of Persons with Disabilities Act, 2016, remains loose, and enforcement is sporadic.

The United States’ ADA and Section 508 have clearer language, but encounter their own enforcement challenges and retrofitting headaches.

Ultimately, policy remains disconnected from practice. Prototyping and design must close the gap—making legal theory and real inclusiveness reciprocal.

Intersectionality: Legibility Across Difference

Disability is never experienced in isolation: it intersects with gender, caste, race, age, and class. Women with disabilities face compounded discrimination in hiring, healthcare, and data representation. Caste-based exclusion is rarely accounted for in AI training practices, producing models that serve only dominant groups.

For example, the exclusion of vernacular languages in text-to-speech software leaves vast rural disabled communities voiceless in both policy and practical tech offerings. Ongoing work by Indian activists and community innovators seeks to produce systems and data resources that represent the full spectrum of disabled lives, but faces resistance from resource constraints, commercial priorities, and a lack of institutional support.

Rethinking the Fundamentals: Prototyping as Epistemic Justice

Epistemic justice—ensuring that all knowledge, experience, and ways of living are valued in the design of social and technical systems—is both a theoretical and a practical necessity in AI. Bias springs not only from bad data or oversight but also from failing to recognise disabled lives as valid sources of expertise.

Key steps for epistemic justice in prototyping include:

  • Centre disabled expertise from project inception, defining metrics, incentives, and feedback loops.

  • Use disability as a source of innovation, not just compliance: leverage universal design to produce systems more robust for all users.

  • Address intersectionality in datasets, training and testing for compounded bias across race, gender, language, and class (a minimal sketch of such disaggregated testing follows this list).

  • Create rights-based governance in tech companies, embedding accessibility into KPIs and public review.
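What “testing for compounded bias” means operationally can be shown in a few lines. The sketch below uses invented toy records; the field names and values are assumptions for illustration, not a real dataset. The point is the shape of the evaluation: performance is reported for each intersecting subgroup, rather than as a single aggregate in which poor performance for a small group disappears into a good average.

```python
from collections import defaultdict

# Toy evaluation records (hypothetical): each notes whether the model
# handled the case correctly, plus the subgroup labels of the user.
records = [
    {"correct": True,  "speech": "typical",    "language": "English"},
    {"correct": True,  "speech": "typical",    "language": "English"},
    {"correct": True,  "speech": "typical",    "language": "Marathi"},
    {"correct": False, "speech": "dysarthric", "language": "English"},
    {"correct": False, "speech": "dysarthric", "language": "Marathi"},
    {"correct": False, "speech": "dysarthric", "language": "Marathi"},
]

# Accuracy disaggregated across intersecting subgroups.
totals = defaultdict(int)
hits = defaultdict(int)
for r in records:
    key = (r["speech"], r["language"])
    totals[key] += 1
    hits[key] += int(r["correct"])

overall = sum(int(r["correct"]) for r in records) / len(records)
print(f"overall accuracy: {overall:.2f}")  # the average hides the disparity
for key in sorted(totals):
    print(key, f"accuracy = {hits[key] / totals[key]:.2f}")
```

In this deliberately stark toy example the overall accuracy looks tolerable while every dysarthric-speech subgroup fails completely, which is exactly the pattern aggregate benchmarks conceal.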

Recommendations: Designing From Disability

The future of inclusive AI depends on three principal shifts:

  1. From designing for to designing with: genuine co-design, not audit theatre, where disabled participants shape technology at every stage.

  2. From accessibility as compliance to accessibility as knowledge: training developers, engineers and policymakers to value lived disability experience.

  3. From compliance to creativity: treating disability as “design difference”—a starting point for innovation, not merely a deficit.

International law and national policy must recognise the lived expertise of disability communities. Without this, accessibility remains a perpetual afterthought to legibility.


Conclusion

Accessible to whom, legible to what? This question reverberates through every level of prototype, product, and policy.

If accessibility is left to the end, if legibility for machines becomes the touchstone, humanity is reduced, difference ignored. When disability leads the design journey, technology is not just machine-readable; it becomes human-compatible.

The future is not just about teaching machines to read disabled lives—but about allowing disabled lives to rewrite what machines can understand.


References

  • Aimi Hamraie, Building Access: Universal Design and the Politics of Disability (University of Minnesota Press, 2017).

  • Solon Barocas, Moritz Hardt, and Arvind Narayanan, Fairness and Machine Learning (fairmlbook.org, 2019).

  • Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 1–15.

  • Siobhan Leavy, Eugenia Siapera, Bethany Fernandez, and Kai Zhang, “They Only Care to Show Us the Wheelchair: Disability Representation in Text-to-Image AI Models,” Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2024).

  • Sara Hendren, What Can a Body Do? How We Meet the Built World (Riverhead Books, 2020).

  • NITI Aayog, National Strategy for Artificial Intelligence, Government of India, 2018.

  • Rajive Raturi v. Union of India, (2018) 2 SCC 413 (Supreme Court of India).

  • European Parliament and Council, Artificial Intelligence Act (Regulation (EU) 2024/1689), 2024.

  • Google AI Blog, “Project Euphonia: Helping People with Speech Impairments,” May 2019.

  • “Making AI Work for Everyone,” Google Developers, 2022.

  • Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, October 10, 2018.

  • R (Binder and Others) v Secretary of State for Work and Pensions [2022] EWHC 105 (Admin) (National Disability Strategy ruling).

  • Nita Ahuja, “Online Proctoring as Algorithmic Injustice: Blind Students in Indian EdTech,” Journal of Disability Studies 12, no. 2 (2022): 151–177.

  • United Nations, Convention on the Rights of Persons with Disabilities, G.A. Res. 61/106 (2006).

  • [Additional references on intersectionality, design theory, empirical studies, Indian law, US/EU regulation, and case material]
