
Sunday, 29 March 2026

The Roots of Technoableism: Why Forced AI Coding Is Making the Web Less Accessible

A black and white editorial cartoon titled "TECHNO-ABLEISM OFFICE" illustrates a conflict over accessible design. A menacing robot labeled "ZABARDASTI AI" spews bubbles like "INACCESSIBLE" and "NO ARIA LABELS," while a shouting manager commands a stressed young programmer, "USE IT! MANDATORY ZABARDASTI! WE DON'T NEED YOUR 'ACCESSIBILITY' SLOWDOWN!" The programmer points to a computer, with a thought bubble quoting an article from The Hindu about forcing AI code making it brittle. An older man in a wheelchair reading THE TIMES comments that "WIPE CODING" for speed only "wipes the inclusion part."
The "Zabardasti AI" Mandate: A Cartoon on Corporate Techno-Ableism and Inaccessibility

John Xavier's article in The Hindu, published on 28 March 2026, raises a concern that deserves serious attention. He shows how companies that force artificial intelligence tools upon their employees tend to achieve the opposite of what they set out to do. Workers at Shopify, Duolingo, and Coinbase have been told, in effect, that refusing AI is the same as refusing a future at the company. The predictable result, as Xavier documents, is quiet sabotage: skipped training sessions, deliberately poor inputs to game dashboards, and a slow return to older methods. He argues that this failure is rooted in the destruction of psychological safety, the condition under which people feel genuinely free to take risks, ask questions, and speak up without fear of punishment.

This analysis is sound. When a mandate arrives as a threat, the natural human response is not enthusiasm but self-protection. Workers comply on the surface and resist underneath. Xavier is correct to say that treating a cultural challenge as though it were a process re-engineering problem is a category error at the leadership level.

The difficulty is that the article stops there. It treats forced AI adoption as a problem between the company and its employees. The harm that flows downstream, past the employee, to the people who will use the software that these employees produce, does not appear in the frame at all. That harm has a name. It is called technoableism, and in a country governed by the Rights of Persons with Disabilities Act 2016, and bound by the United Nations Convention on the Rights of Persons with Disabilities, it is not merely an ethical concern. It is a legal one.

What Technoableism Is

The concept was developed by Ashley Shew, a scholar at Virginia Tech, in her 2023 book Against Technoableism: Rethinking Who Needs Improvement. Shew argues that the technology sector operates almost entirely within the medical model of disability, which treats disabled bodies and minds as defects requiring correction. The non-disabled body is taken as the norm. Disabled people appear in this framework, if they appear at all, as special cases at the margins of design, not as primary stakeholders with rights.

This assumption finds its way into artificial intelligence systems in a simple and well-documented manner. AI coding tools learn from the code that already exists on the internet. That code was written, in the overwhelming majority of cases, without any attention to disability. It was written for users who see screens clearly, who navigate with a mouse, who can process information quickly, and who have no sensory or cognitive differences. When an AI coding assistant is trained on this material, it absorbs these assumptions. Its suggestions then carry them forward into new software, and the exclusion that was already present in old code is reproduced, at greater speed, in the new.

What the Research Demonstrates

This is not a theoretical risk. Researchers Prakriti Mowar and colleagues, whose work was published at the CHI Conference on Human Factors in Computing Systems in 2024, studied how developers actually behave when they use AI coding assistants to build web interfaces. Three problems appeared consistently. Developers routinely forgot to request accessible features from the tool. When the tool offered suggestions, developers accepted them even when the output was incomplete, for instance accepting a placeholder alt attribute for an image and never replacing it with a meaningful description. And developers found it genuinely difficult to verify, on their own, whether the code the tool had produced met any recognised accessibility standard.

Separate research examining code generated by tools such as ChatGPT and GitHub Copilot found that the output regularly broke basic rules. Headings appeared in the wrong order. Interactive elements lacked proper labels. Keyboard navigation failed entirely in several cases. One developer who tried supplying explicit Web Content Accessibility Guidelines to Copilot as part of the prompt still reported that the tool continued suggesting invalid heading structures. These are not cosmetic problems. A heading structure that is out of order is not an inconvenience for a sighted user navigating with a mouse. For a blind user relying on a screen reader, it can make a page entirely unusable.
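
To make those failure modes concrete, the sketch below contrasts the kind of markup the research describes with an accessible equivalent. Both snippets are hypothetical and were written for this post; they are not captured output from any particular tool.

    // Illustrative only: the defect pattern the cited research describes,
    // not output from any specific AI assistant. Both constants are hypothetical.

    // Typical problems: a skipped heading level, an icon button with no
    // accessible name, a clickable <div> that keyboard users cannot reach,
    // and placeholder alternative text that was never replaced.
    export const generatedMarkup = `
      <h1>Careers</h1>
      <h4>Open roles</h4>
      <button><svg aria-hidden="true"></svg></button>
      <div onclick="apply()">Apply now</div>
      <img src="team.jpg" alt="image">
    `;

    // The same interface with a heading hierarchy a screen reader can follow,
    // a named control, a native focusable button, and a meaningful description.
    export const accessibleMarkup = `
      <h1>Careers</h1>
      <h2>Open roles</h2>
      <button aria-label="Search open roles"><svg aria-hidden="true"></svg></button>
      <button type="button" onclick="apply()">Apply now</button>
      <img src="team.jpg" alt="Members of the hiring team at a careers fair">
    `;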

Why the Mandate Compounds the Problem

On its own, an imperfect tool can be managed. A developer who has time and authority can review AI suggestions, reject the problematic ones, and add accessibility features before the code goes into production. This is not an efficient workflow, but it is a workable one. The corporate mandate removes the conditions under which that review is possible.

When managers measure performance by the number of lines of code pushed to a repository, or by token consumption, the incentive is to accept suggestions quickly and move on. Adding accessibility to AI-generated code is methodical work. It frequently requires rewriting significant sections of what the tool has produced, because accessible code depends on the logical structure of the document object model, not merely on how the interface looks on screen. In a mandate-driven environment where speed is the metric of value, that rewriting is the first task to be abandoned.
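
One way to see why that rewriting is structural rather than cosmetic is the contrast below. Both hypothetical snippets can be made to look the same on screen, but only the second gives keyboard and screen reader users a focus order and reading order that match what sighted users see. This is a sketch written for this post, not code from the article or the studies cited above.

    // A minimal sketch of the difference between visual order and DOM order.
    // Markup and class names are hypothetical.

    // The visual layout is patched with the CSS "order" property while the
    // underlying DOM keeps the generated sequence: tab order and screen reader
    // reading order now contradict what sighted users see, and the inputs
    // have no labels at all.
    export const visuallyPatched = `
      <div class="checkout" style="display: flex; flex-direction: column;">
        <button style="order: 3;">Pay</button>
        <input style="order: 1;" placeholder="Card number">
        <input style="order: 2;" placeholder="Expiry">
      </div>
    `;

    // The same form with the DOM itself in logical order and real labels, so
    // focus order, reading order, and visual order all agree.
    export const structurallySound = `
      <form class="checkout">
        <label>Card number <input name="card" autocomplete="cc-number"></label>
        <label>Expiry <input name="expiry" autocomplete="cc-exp"></label>
        <button type="submit">Pay</button>
      </form>
    `;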

Xavier observes that some employees resist mandates by gaming metrics or reverting to old methods. There is, though, a second response that is less visible and considerably more damaging: compliance that is empty. The developer uses the tool, meets the daily target, ships the code, and nobody notices that the resulting product excludes a substantial number of its intended users. The resistance Xavier describes is at least honest about itself. The quiet compliance that produces inaccessible software at scale is harder to see and far more difficult to remedy once it is embedded in production systems.

The Cognitive Dimension

There is a further layer to this problem. Anthropic's own research from 2026 examined what happens to developer understanding when AI assistance becomes the primary mode of working. Developers who relied on AI assistance scored approximately 17 percentage points lower on tests of their comprehension of code they had produced just minutes earlier, compared with developers who had coded manually. That is not a marginal difference. It represents a genuine and measurable decline in domain knowledge.

This matters for accessibility because inclusive design requires a kind of technical empathy that depends on structural understanding. A developer who builds an interface from scratch must think through how a screen reader will interpret the document object model, how a keyboard user will move through a form without a mouse, and whether every interactive element announces its purpose clearly. When the developer instead accepts and lightly edits AI-generated code without deeply examining what has been produced, this thinking does not happen in the same way. The structural decisions have already been made by the algorithm, and the developer may not fully understand them. Accessibility, which depends on precisely that structural understanding, suffers accordingly.

The Bias Pipeline

Nilesh Singit, who has written extensively on artificial intelligence and disability, describes what he calls a bias pipeline. The pipeline begins in the training data, which reflects a digital world built for non-disabled users, and runs through to the final product. At each stage, the assumption that the legitimate user is able-bodied and neurotypical is reinforced. Singit draws a careful and important distinction between accessibility and algorithmic bias. Accessibility asks whether a disabled person can use a system. Algorithmic bias asks whether the system was designed with that person in mind at all.

A system can be technically accessible, in the sense that a screen reader can navigate it, while remaining biased in its deeper assumptions: about who constitutes a normal job applicant, what a valid educational trajectory looks like, or what counts as a competent written response. An artificial intelligence recruitment tool might produce a portal that passes a Web Content Accessibility Guidelines check and yet penalise applicants for employment gaps resulting from medical treatment, or give a lower rating to the communication style of a person with a cognitive difference. The portal is accessible. The pipeline is still biased. Singit's argument is that both problems must be addressed, and the current trajectory of AI coding mandates ensures that neither is.

The Legal Obligation in India

Xavier's article does not address the Indian legal framework, but it is directly relevant to the conversation he has started. The Rights of Persons with Disabilities Act 2016 places a positive obligation on establishments, including private entities, to ensure that their goods and services are accessible. The Supreme Court of India's judgment in Rajive Raturi v. Union of India affirmed that accessibility is not a dispensation but a right. India has also ratified the UNCRPD, whose Article 9 requires states, and by extension the organisations that operate within their jurisdiction, to take concrete steps to ensure that persons with disabilities can access information and communications technology on an equal basis with others.

When a company deploys software built by developers who were coerced into using AI tools that generate inaccessible code, it is not merely making a management error. It is producing a product that may violate these obligations. The corporate language around AI mandates speaks entirely in terms of productivity, tokens, and efficiency. That language has no room for the user who cannot read a screen or navigate with a mouse. The law, however, does make room for that user. Organisations that ignore this do so at considerable legal risk, and with considerable harm to the population of more than 26 million persons with disabilities recorded in India, a number that independent assessments suggest is substantially higher in reality.

The Vibe Coding Problem

Alongside AI mandates, there has grown a practice that some in the industry call vibe coding. The term describes an approach in which the developer types a general description of what is wanted, the AI produces a complete block of code, and the developer accepts it with minimal scrutiny. It is coding by approximation and acceptance rather than by deliberate craft. It is, in a certain sense, the logical product of the productivity mandate taken to its conclusion: if the measure of performance is volume of code delivered per day, vibe coding maximises the metric.

The consequences for accessibility are predictable and well-supported by evidence. When an entire application is assembled from AI-generated blocks that the developer has not examined in detail, the structural qualities on which accessibility depends are entirely at the mercy of the training data. And as the research by Mowar and colleagues shows, that training data does not reliably produce accessible output. What is more, when a developer who has not fully understood the generated structure attempts to retrofit accessibility, the modifications can cause failures elsewhere in the code. This creates a practical disincentive: the cost of making the code accessible is too high, the risk of breaking something else is too great, and the deadline is too close. The accessibility work does not happen.

What Ought to Be Done

The argument being made here is not that artificial intelligence should have no role in software development. It will have one, and there is no reason why it ought not. The argument is that the conditions under which AI tools are deployed determine whether they serve everyone or merely the majority.

Companies that choose to integrate AI coding assistants will need to hold those tools to accessibility standards before adoption, not after. AI coding assistants ought to be evaluated on whether their output meets the Web Content Accessibility Guidelines by default, not merely on whether that output functions visually. Developers will need to retain the time and professional authority to review suggestions critically, to reject inaccessible code, and to perform manual accessibility audits before deployment. This is incompatible with mandate regimes that measure performance purely by speed and volume.
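
As an illustration of what holding a tool to a standard before adoption could look like, the sketch below runs an automated WCAG check over generated markup before it is accepted. It assumes the open source axe-core engine and jsdom; the function name and the surrounding pipeline are hypothetical, and automated checks catch only a subset of accessibility failures, so they supplement rather than replace the manual audits described above.

    // A sketch of an automated accessibility gate, assuming the open-source
    // axe-core engine running against markup loaded into jsdom. The function
    // name and the surrounding pipeline are hypothetical.
    import { JSDOM } from "jsdom";
    import axe from "axe-core";

    export async function auditGeneratedMarkup(html: string): Promise<void> {
      // Load the generated markup into a detached DOM.
      const { window } = new JSDOM(html);

      // Run only the WCAG 2.x Level A and AA rule sets.
      const results = await axe.run(window.document.documentElement, {
        runOnly: { type: "tag", values: ["wcag2a", "wcag2aa"] },
      });

      if (results.violations.length > 0) {
        for (const violation of results.violations) {
          console.error(`${violation.id} (${violation.impact}): ${violation.help}`);
        }
        throw new Error("Generated markup fails automated WCAG 2.x A/AA checks");
      }
    }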

Most importantly, disabled experts ought to be involved in the design and evaluation of AI systems from the outset. As Shew has argued, disabled individuals possess deep and practical expertise in navigating hostile architectures. That expertise is precisely what is required when building systems that are supposed to serve the full range of human users. Singit's proposed Inclusivity Stack and Disability-Smart Prompts, which embed accessibility requirements into the prompting framework of AI tools, point towards what genuine reform might look like. Until these standards are adopted and enforced, compelling developers to use AI tools is not a step towards inclusion. It is a step towards the automation of exclusion.

Conclusion

Xavier is right that forcing employees to use AI produces outcomes that no serious organisation ought to want: resistance, resentment, and metrics that measure activity rather than value. But the full cost of these mandates is not felt in the boardroom. It is felt by the person who arrives at a government portal and cannot submit a form because the keyboard navigation was never tested, by the job applicant whose screen reader cannot parse the heading structure of the recruitment interface, by the student who cannot access digital learning materials because the AI that generated them never considered that a user might need them in a different format.

These are not edge cases. They are citizens. They are rights-holders. And the legal frameworks that India has enacted and ratified exist precisely to ensure that the speed of the market does not override their claim to equal participation.

Zabardasti with artificial intelligence tools, adoption by force, does more than demoralise employees, as Xavier correctly observes. It embeds technoableism into the infrastructure of the digital present. True efficiency is not measured in tokens consumed or lines of code shipped. It is measured in whether the systems that are built actually work for the people who need to use them.

References

  • World Wide Web Consortium (W3C). Web Content Accessibility Guidelines (WCAG) 2.2. W3C Recommendation, 5 October 2023. https://www.w3.org/TR/WCAG22/
  • Rajive Raturi v. Union of India & Ors., Writ Petition (C) No. 243 of 2005, Supreme Court of India, judgment dated 8 November 2024 (2024 INSC 858). https://indiankanoon.org/doc/98908321/
