Abstract: This article examines the urgent need for NITI Aayog to revisit and strengthen India's artificial intelligence policy framework, particularly concerning the rights and inclusion of persons with disabilities. Drawing from the landmark Supreme Court judgment in Rajive Raturi vs. Union of India (2024) and the comprehensive findings of the NALSAR Centre for Disability Studies report “Finding Sizes for All,” this analysis demonstrates significant policy gaps in India’s approach to AI governance. The article presents a comparative analysis of the European Union’s Artificial Intelligence Act, which mandates accessibility requirements for high-risk AI systems, and argues for a paradigm shift from aspirational guidelines to mandatory standards. It contends that whilst the panel convened by NITI Aayog asserted that existing laws can address AI-related risks, this position overlooks the specific vulnerabilities of persons with disabilities in AI-driven systems. The article advocates for proactive policy intervention that embeds disability inclusion as a fundamental principle in India’s AI governance architecture, rather than treating it as an afterthought.
India stands at a critical juncture in its technological evolution. A high-powered government committee on artificial intelligence recently concluded that India does not need a separate AI law at this stage. The panel’s India AI Governance Guidelines assert that existing sectoral laws (on IT, data protection, consumer protection, and criminal or civil liability) can be adapted to govern most AI applications. In their view, the focus should instead be on timely and consistent enforcement of these laws. However, this stance – reported by the Times of India on 6 November 2025 – reveals a troubling oversight. By assuming that “existing laws…can govern AI applications” and that a separate law is “not needed”, the panel inadvertently minimizes the novel ways in which AI can harm vulnerable populations.
One group in particular is at risk of being left behind: India’s 27.4 million persons with disabilities. AI systems are increasingly used in education, healthcare, employment, finance, and welfare. Without explicit safeguards, these systems can amplify barriers rather than break them down. The panel’s conclusion largely ignores these dynamics. Indeed, experts warn that AI will also entrench biases against other marginalized communities in India – for example, predictive policing and screening algorithms have already disproportionately targeted Muslims, Dalits, women and other minority groups. These examples show that AI does not operate in a social vacuum: it magnifies existing fault lines of caste, religion, gender and ability. While a full discussion of those harms is beyond this scope, it is vital to note that persons with disabilities face unique intersectional vulnerabilities that demand attention. For instance, many disability advocates have observed that AI-based systems often classify disabled individuals as statistical “outliers” or even threats, because they deviate from assumed norms. If AI is poised to “exacerbate bias and discrimination” against Dalits, Muslims, transgender people and others, then neglecting disability – which broader prevalence estimates suggest affects closer to one in ten Indians than the official count of 27.4 million indicates – is an equally grave error.
The timing of the panel’s pronouncement is particularly significant. On 8 November 2024, the Supreme Court of India delivered a landmark judgment in Rajive Raturi vs. Union of India, fundamentally reshaping the landscape of accessibility rights for persons with disabilities. In this case, the Court held that the rules intended to ensure accessible public spaces had no enforceable “floor” – creating “a ceiling without a floor”. The Court directed the Union government to promulgate mandatory accessibility rules (as required under Section 40 of the RPwD Act, 2016) within three months, working with experts and Disabled Persons’ Organisations (DPOs). It emphatically rejected the idea that accessibility could remain voluntary or aspirational, insisting that non-negotiable minimum standards must apply. In the Chief Justice’s words, while accessibility does require progressive realization, “this cannot mean that there is no base level of non-negotiable rules that must be adhered to”.
The Rajive Raturi judgment was informed in large part by the NALSAR Centre for Disability Studies report “Finding Sizes for All” (2024), which documented decades of systemic failure to translate India’s laws on paper into real inclusion in practice. That report – prepared through consultations with disabled individuals and disability experts across the country – found that even where laws promise accessibility, the absence of enforcement means many persons with disabilities remain excluded by default. In light of this, the Court’s directive is a clarion call to policymakers: accessibility is a human right, not a decorative add-on.
These developments should be a wake-up call for India’s AI policy. If courts and scholars now insist that a rights-based approach demands enforceable accessibility standards, it follows that AI – which increasingly mediates core rights – cannot be left to old rules alone. Indeed, the European Union’s pioneering Artificial Intelligence Act, which came into force in August 2024, offers instructive lessons. The EU Act embeds disability inclusion into its very structure: it bans AI that “exploit[s] the vulnerabilities of a person…due to…disability”, and requires high-risk AI systems to be accessible by design. As a European Disability Forum (EDF) infographic bluntly puts it, there are “101 million reasons to build better AI” – a reminder that roughly one-quarter of the EU population has a disability and deserves inclusive technology.
India’s debate should not be framed as “innovation versus rights”. Rather, it is a choice between inclusive innovation and innovation that cements the exclusion of millions. The question is not merely whether existing laws cover AI; it is whether they can actually protect marginalized people in practice. As this analysis will show, the answer is no. Across sectors – from exam halls to hospitals to welfare offices – AI currently poses very real risks to people with disabilities. Existing legal frameworks can remedy discrimination after the fact, but AI’s scale, speed and opacity demand proactive rules. In the words of India’s Supreme Court: “accessibility is not merely a convenience, but a fundamental requirement for enabling individuals, particularly those with disabilities, to exercise their rights fully and equally”. That principle must now inform India’s AI governance.
The Rajive Raturi case has a long history. It began as a 2005 public interest litigation by Rajive Raturi, a visually impaired activist, seeking accessible design of public buildings, roads and transport; its directions were later anchored in the Rights of Persons with Disabilities (RPwD) Act, 2016. Over nearly two decades, the Supreme Court repeatedly reiterated those commitments, ordering an 11-point accessibility plan for states in 2017. Yet by 2023, compliance remained desperately low. The Court then appointed the NALSAR Centre for Disability Studies (CDS) to study the status of accessibility in public infrastructure, resulting in “Finding Sizes for All” (July 2024).
When the Court reviewed the situation on 8 November 2024, it found that Rule 15 of the RPwD Rules, 2017 – which was supposed to prescribe accessibility standards under Section 40 of the Act – had become nothing more than a self-regulatory guideline. The Rules contained no mandatory obligations, only an aspirational list of measures. The Court held that this “ceiling without a floor” violated the very intent of the Act. Echoing the CDS finding that the Act contemplates non-negotiable rules, the Court observed that Rule 15’s language made compliance optional, contrary to Parliament’s clear mandate. As the judgment bluntly states:
“Rule 15, in its current form, does not provide for non-negotiable compulsory standards, but only persuasive guidelines… While the intention of the RPwD Act to use compulsion is clear… the RPwD Rules have transformed into self-regulation by way of delegated legislation. The absence of compulsion in the Rules is contrary to the intent of the RPwD Act… Rule 15 creates an aspirational ceiling, through the guidelines prescribed by it, [but] it is unable to perform the function entrusted to it by the RPwD Act, i.e., to create a non-negotiable floor. A ceiling without a floor is hardly a sturdy structure.”
Following this reasoning, the Court ordered the Union Government to frame mandatory accessibility rules within three months, separating out the non-negotiable requirements from the current guidelines. Importantly, this process must involve consultation with persons with disabilities and their organizations (the judgment even directed NALSAR-CDS to assist). The Court clarified that this does not freeze progress – existing “progressive” standards remain – but adds an enforceable baseline. Once the new rules are in place, governments must use all powers under Sections 44–46 of the RPwD Act (such as withholding building completion certificates and imposing fines) to enforce them.
The Rajive Raturi judgment has clear lessons for AI. If mandatory accessibility is now deemed essential for physical infrastructure, how much more must it apply to the digital domain of AI systems? The Court’s logic is universal: accessibility is a prerequisite for equal participation in education, employment, healthcare and public life. Denying that baseline – whether by relying on voluntary standards in brick-and-mortar spaces or on sectoral laws in the digital realm – turns equality into a lofty ideal rather than a lived reality. In short, if technology is reshaping how rights are accessed, the law must be reshaped too. The Rajive Raturi case thus demands that AI governance embed enforceable accessibility standards, not waive them.
The European Union’s Artificial Intelligence Act, adopted in 2024, is the world’s first comprehensive AI law. It follows a risk-based approach, banning only the most dangerous AI practices while imposing strict rules on “high-risk” systems. For disability inclusion, the AI Act goes much further than India’s existing framework. One of its cornerstone provisions is Article 5(1)(b), which explicitly prohibits any AI that “exploit[s] the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation.” This ban covers not just intentional misuse but also unintended harms. In practical terms, any AI system (for example, manipulative marketing or exploitative surveillance) that takes advantage of disability-related traits to influence behaviour or outcomes is out of bounds. This is a powerful safeguard: it recognizes that disability is often an axis of vulnerability, and it refuses to let AI businesses exploit it.
For AI systems that pose “high risk” to fundamental rights (such as those used in hiring, education, healthcare, public services or law enforcement), the EU AI Act layers on accessibility requirements. Article 16(l) mandates that providers of high-risk AI systems ensure compliance with established accessibility standards. In practice, this means the AI interface and output must meet EU accessibility directives (the Web Accessibility Directive 2016/2102 and the European Accessibility Act 2019/882). Developers must build in accessibility “by design,” following harmonized standards like EN 301 549 (accessibility for ICT products) and EN 17161 (Design for All). The accompanying Recital 80 makes the rationale explicit: as signatories to the UN Convention on the Rights of Persons with Disabilities, the EU and its members are legally bound to protect disabled people from discrimination and to promote their equality. It thus “mandates that high-risk AI systems must be designed following a universal design approach… from the start,” ensuring full and equal access.
In addition to accessibility, the EU Act systematically addresses disability in its data and oversight provisions. Article 10 requires that training, validation, and testing data be carefully curated: providers must examine their data for biases that could disadvantage persons with disabilities and take steps to minimize any such bias. In other words, AI companies cannot plead ignorance if their systems mirror society’s prejudices; they must actively avoid producing disabling outcomes. The Act also requires Fundamental Rights Impact Assessments before high-risk AI is deployed by public bodies or providers of essential services, and its recitals envisage involving representatives of marginalized groups – including persons with disabilities – in that exercise. This brings the principle of “nothing about us without us” into the AI lifecycle: if an AI tool could affect disabled people, those deploying it are expected to engage with disabled advocates in advance.
The EU law is backed by enforcement teeth. Member states must create authorities to oversee AI and can impose hefty fines (up to €35 million or 7% of turnover) for violations. Equality bodies and consumer protection regulators gain powers to demand access to AI documentation and to order technical tests if rights violations are suspected. Perhaps most importantly for India’s comparison, the EU rejects the idea that these measures are mere suggestions: accessibility, anti-discrimination and universal design are mandatory features of high-risk AI.
EDF, the European Disability Forum, has been deeply involved in the AI Act’s implementation. It commissioned infographics and toolkits (including one titled “101 million reasons to build better AI”) to raise awareness. As one EDF resource notes, roughly 27% of EU adults have a disability – a huge constituency that AI must serve. Its materials highlight both opportunities (e.g. “improved accessibility” and “greater independence” through assistive AI) and dangers (for instance, being flagged as fraud or suffering a “life-threatening accident” if a self-driving car fails to detect someone with a disability). The message is clear: inclusive AI is technically feasible and socially essential.
Of course, the EU Act is not perfect. It currently does not impose accessibility on general-purpose AI or non-high-risk systems – a gap EDF warns could leave many assistive tools unchecked. Nonetheless, the core principles offer a blueprint: proactive regulation, centered on human rights. India can learn from this example. The Indian AI policy framework should emulate the EU’s approach by explicitly embedding disability safeguards.
Contrasting sharply with the EU’s proactive measures is India’s current posture. The high-level panel’s view – that no new AI law is needed, only enforcement of existing rules – underestimates the distinctive nature of AI risks. To understand why, consider that India does have strong disability rights legislation on the books. The RPwD Act, 2016 was enacted to implement the UNCRPD’s promise that disabled people should have equal access to information, education, and services. Section 40 of the RPwD Act requires the Central Government to lay down accessibility standards covering the physical environment, transport, and information and communications technology, while Section 42 obliges governments to ensure access to electronic media and to everyday electronic goods in universal design. In theory, these provisions could cover AI systems as “information and communications” technology and “electronic goods.”
But as the Rajive Raturi judgment found, the devil is in the details. The Act anticipated specific accessibility rules (non-negotiable minimum standards) via Rule 15 of the RPwD Rules, 2017. In practice, however, Rule 15 contained only voluntary guidelines. The Supreme Court held that the Rule was ultra vires the Act because it offered no enforcement. In essence, India has the law, but not the legally binding rules, for disability inclusion in digital access.
This regulatory failure carries over to AI. If we do not even enforce baseline accessibility for older technologies (such as government websites and mobile apps), how can we trust that those laws will shield disabled people from sophisticated algorithmic harms? AI systems introduce new challenges: they make decisions at scale, often in opaque ways, and can inadvertently codify historical biases. For example, an AI proctoring system may misinterpret a student’s stimming behavior or reliance on a screen reader as “cheating.” Without built-in accommodations, a visually or cognitively disabled examinee could be unjustly penalized. (In fact, data from Western universities show that automated proctoring flags students with disabilities at much higher rates than their peers.) In India, recent Supreme Court cases have flagged persistent failures to provide reasonable accommodations in exams like NEET. Introducing AI tools that are oblivious to these accommodations would institutionalize exclusion at scale.
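To make the concern concrete, the sketch below shows the kind of disaggregated flag-rate audit that a proctoring vendor or exam authority could be required to run before deployment; the sessions, group labels and numbers are synthetic and purely illustrative, not drawn from any real system.

```python
# Illustrative sketch only: a per-group flag-rate audit for an exam-proctoring system.
from collections import defaultdict

def flag_rates(records):
    """Return the fraction of sessions flagged as suspicious, per group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        flagged[r["group"]] += int(r["flagged"])
    return {g: flagged[g] / total[g] for g in total}

sessions = [
    {"group": "screen_reader_user", "flagged": True},
    {"group": "screen_reader_user", "flagged": True},
    {"group": "screen_reader_user", "flagged": False},
    {"group": "no_accommodation", "flagged": True},
    {"group": "no_accommodation", "flagged": False},
    {"group": "no_accommodation", "flagged": False},
    {"group": "no_accommodation", "flagged": False},
    {"group": "no_accommodation", "flagged": False},
]

print(flag_rates(sessions))
# A flag rate several times higher for accommodation users suggests the model
# is penalizing disability-related behavior, not detecting misconduct.
```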
Similarly, in employment and recruitment, AI promises efficiency but often replicates past discrimination. If algorithms learn from historical hiring data that contains biases against disabled candidates, the AI will perpetuate that bias. Studies by the AI Now Institute found that commercial AI hiring tools can “massively discriminate” against disabled jobseekers. This is especially dangerous in India: persons with disabilities have an abysmal labour force participation rate (around 34%, well below the general population), and the RPwD Act’s 4% reservation quota is frequently ignored. Without explicit requirements for AI recruiters to include diverse disability representation in training datasets and to allow accommodations in interviews, AI could make it even harder for disabled applicants to get a foot in the door.
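A simple way to surface such replication of bias is a selection-rate (adverse-impact) comparison, sketched below with synthetic applicant counts; the four-fifths threshold used here is a common employment-testing heuristic, not a rule of Indian law.

```python
# Illustrative adverse-impact check for an AI resume screener (synthetic numbers).
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rate_disabled = selection_rate(selected=6, applicants=100)        # 6%
rate_non_disabled = selection_rate(selected=60, applicants=300)   # 20%

impact_ratio = rate_disabled / rate_non_disabled
print(f"Adverse-impact ratio: {impact_ratio:.2f}")

# The "four-fifths" heuristic: a ratio below 0.8 is a red flag that the
# screener disproportionately filters out disabled applicants.
if impact_ratio < 0.8:
    print("Audit the training data and features before further use.")
```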
Healthcare is another area of acute concern. Imagine a disabled patient using an AI-powered chatbot or diagnostic app that is not accessible: perhaps it lacks text-to-speech, or presents information too quickly to be readable by those with cognitive impairments. A Swedish study found just that scenario – an AI-driven patient service that was largely unusable for disabled users. In India’s context, where digital health platforms (like the Ayushman Bharat Digital Mission) are expanding, the stakes are life and death. The Constitution’s Article 21 guarantees the right to life and personal liberty, which courts have interpreted to include the right to health. If healthcare AI cannot accommodate disability (for example, wheelchair users facing inaccessible online appointment systems), it could violate this fundamental right by rendering care effectively inaccessible. Yet current Indian law has no proactive check on such systems before deployment.
Financial services, too, raise red flags. Binns and Kirkham (2020) documented cases where banks’ AI used irrelevant cues – like typographical capitalisation – for credit scoring, disadvantaging people with dyslexia. In India, despite the RPwD Act’s prohibition of discrimination in financial services, we have no regulation ensuring that AI lenders treat disabled applicants fairly. A deaf or visually impaired person applying for credit through an online portal could be rejected by a loan algorithm that does not understand their communication needs or misreads their application. Without mandated design standards or bias audits, such opaque decisions could slip through the cracks.
Perhaps most troubling is the use of AI in welfare and public benefits. India’s Aadhaar-based systems and e-governance schemes increasingly use algorithms to detect fraud or determine eligibility. Disabled people, by virtue of having atypical health or income patterns, can easily trigger these systems’ anomaly detectors. For example, Denmark’s experience with AI-based welfare fraud detection has shown that disabled claimants were flagged at much higher rates, because their medical costs or joblessness deviated from “typical” models. In India, the absence of mandatory Fundamental Rights Impact Assessments for these systems means that such systemic exclusion can go undetected and unaddressed. If a disability pension algorithm mistakenly concludes that a claimant is ineligible because of an arbitrary pattern, there is currently no legally required impact audit or accountability mechanism in place.
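The mechanism is easy to reproduce in miniature. The sketch below, using synthetic data and scikit-learn’s IsolationForest, shows how a fraud model trained on “typical” claimant profiles labels a legitimate disability-pension claimant an anomaly simply because their income and medical-spend pattern deviates from the majority.

```python
# Minimal sketch with synthetic data: deviation from the "typical" profile is
# flagged as anomalous, even when it is a normal consequence of disability.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: [monthly_income, monthly_medical_spend] for "typical" claimants
typical_claimants = rng.normal(loc=[20000, 1500], scale=[3000, 500], size=(500, 2))

detector = IsolationForest(random_state=0).fit(typical_claimants)

# A legitimate claimant with low, irregular income and high medical costs
disabled_claimant = np.array([[8000, 9000]])
print(detector.predict(disabled_claimant))  # [-1] means "anomaly" -> flagged
# Nothing in this model measures fraud; it simply punishes statistical difference.
```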
In sum, the NITI Aayog panel’s assertion that “existing rules can address the majority of risks” fundamentally misunderstands how AI operates. Traditional legal remedies – filing a discrimination complaint, for instance – are poorly suited to algorithmic harm. By the time a systemic bias is detected and proven in court, thousands may have been wronged. The “black box” nature of many AI models compounds this: even a highly educated person may not know why a machine-learning system denied them a service. These challenges call for a forward-looking approach, not a rear-view reliance on case-by-case adjudication.
Crucially, the Rajive Raturi court’s observation applies even more strongly here. The Court criticized India’s disability rules for creating “a ceiling without a floor.” In the AI context, India has no floor at all – not even aspirational rules specifically for AI. The RPwD Act and general equality guarantees provide a vision, but without actionable standards for new technologies, that vision cannot be realized. India needs, as with physical infrastructure, mandatory and enforceable AI norms that embed accessibility and fairness, rather than hoping that existing, vaguely applicable laws will suffice.
To drive this point home, it helps to examine concrete AI applications in India that should be considered “high-risk” because of their impact on basic rights and services. Under the EU model, high-risk domains include education, employment, essential public services, and law enforcement – and these map closely to Indian priorities.
Education: During the COVID-19 lockdown, India saw a boom in online learning and AI-driven examination systems. Yet the National Education Policy 2020’s rhetoric on inclusion has not always translated into reality. If an AI admission algorithm or an AI grading system cannot account for students’ special needs (like reading exams aloud, permitting extended time, or alternative question formats), it will systematically under-serve disabled learners. The 2020 UK “algorithm exam scandal” – where an AI grading model disproportionately downgraded students from underprivileged backgrounds and those requiring accommodations – is a cautionary tale. Indian courts have repeatedly had to order accommodations in standardized tests (for instance, allowing extra time in NEET for visually impaired students). Replacing these accommodations with “one-size-fits-all” AI proctoring or evaluation tools could institutionalize discrimination. Unless such educational AI systems are designed from the outset to include all students, they risk creating new barriers to learning for persons with disabilities.
Employment and Recruitment: With only about one-third of working-age persons with disabilities in the labor force, inclusive hiring is crucial. AI recruitment platforms could in principle help by screening large applicant pools without human prejudice. But in reality, if these tools are trained on past data in which disabled candidates were underrepresented or unfairly filtered out, the AI will learn to replicate those biases. The RPwD Act mandates reservation in jobs, but no entity is currently checking that AI recruiters comply. Without legal requirements for inclusive datasets and interface accessibility (for example, ensuring video interview platforms work with assistive technologies, or that resume parsers can interpret alternative CV formats), employers using AI will unintentionally sideline disabled applicants.
Essential Services – Healthcare: Government health programs are rapidly going digital in India. AI-powered symptom checkers, diagnostic chatbots and appointment systems are becoming common. Yet consider a blind person trying to use a healthcare app that has no speech output, or an autistic person struggling with unstructured medical AI chat interfaces. Researchers at Sweden’s Royal Institute of Technology warn that inaccessible AI health tools effectively bar disabled patients from care. In India, where basic healthcare can be hard to access even offline, adding digital barriers is unconscionable. Article 21 of our Constitution guarantees the right to life and health; denying access to health information or services on the basis of disability would violate that guarantee. This is not a hypothetical: imagine an AI system prioritizing critical care based on algorithmic risk scores that were never tested on disabled populations. Without mandatory audits and accessibility compliance, such systems could tip the balance between life and death for vulnerable patients.
Essential Services – Public Benefits: Digital platforms deliver pensions, subsidies, and disability allowances in India. Many of these are linked to Aadhaar and driven by automated eligibility checks. Persons with disabilities, who often have irregular income and healthcare patterns, are prime candidates to trip fraud detection algorithms. Countries like the UK and Denmark have seen disabled welfare recipients disproportionately investigated by AI fraud tools. In India, the chilling effect of being flagged as an anomaly is high: a wrongful denial of benefits could devastate a disabled person’s household. Without required Fundamental Rights Impact Assessments (akin to human rights audits) for such systems, or protections against disability profiling, disabled claimants – many of them already poor – face a growing risk of digital exclusion.
Biometric Identification and Surveillance: India’s extensive use of biometric systems (Aadhaar, facial recognition at service points, etc.) poses unique threats to disabled individuals. Studies have shown that facial recognition algorithms perform poorly on people with Down syndrome or certain motor impairments – often misidentifying or failing to recognize them. Such errors can lead to wrongful exclusion from services (imagine a person denied boarding because an AI camera failed to verify their identity). With AI increasingly embedded in security checks and benefit disbursement, these technical failures translate into civil rights infringements. In the absence of mandated bias testing across diverse bodies, these systems remain ticking time bombs.
Social Scoring and Profiling: While India does not (yet) have a formal social credit system, AI-driven profiling is creeping in. For example, lenders, insurers or employers might use AI to assess “trustworthiness.” The European Disability Forum warns that disabled people – who “deviate from the norm” statistically and often have intersecting marginalized identities – are particularly vulnerable to discriminatory profiling. An algorithm that lowers credit scores for having frequent medical visits (common for chronic disability) or flags unusual card usage patterns could ruin a disabled person’s financial standing. India’s legal framework broadly forbids discrimination in financial services, but without proactive AI-specific rules, such digital profiling could slip under the radar. Article 5(1)(c) of the EU Act explicitly prohibits social scoring; India’s AI policy should consider a similar ban on using disability data in adverse profiling.
None of these scenarios are far-fetched. They are logical extrapolations of how technology works today. The NITI Aayog panel’s faith that “existing rules can address the majority of risks” overlooks these on-the-ground realities. Traditional anti-discrimination laws only help after someone proves a wrong occurred – a remedy too little, too late in the age of AI. By the time a biased AI is taken to court, its automatic decisions will have already affected thousands. Moreover, the very nature of AI decisions – often algorithmic and proprietary – can make it nearly impossible for an individual to even identify that they were discriminated against. In these conditions, a purely retrospective, case-by-case approach is insufficient. The RPwD Act itself was built on a vision of “progressive realization” of rights, but the Supreme Court has repeatedly stressed that progressive ambition must be anchored by non-negotiable floors. India’s AI policy needs the same philosophy: neither laissez-faire optimism nor heavy-handed micromanagement, but clear, enforceable standards to ensure technology serves disability inclusion from the outset.
The convergence of the Rajive Raturi judgment, the NALSAR-CDS report, and international best practices from the EU AI Act provides a roadmap for how NITI Aayog should reform India’s AI policy. The following recommendations outline this path:
India must establish clear, legally binding accessibility requirements for AI systems, especially those classified as high-risk. Drawing on the EU model, high-risk AI should include any system that significantly affects people’s fundamental rights or essential services – for example, AI used in education (exam grading, tutoring), employment (hiring, workplace accommodations), healthcare (diagnosis, telemedicine), social security, and law enforcement. One approach is to amend the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, to explicitly define high-risk AI and mandate accessibility obligations for such systems. These obligations should reference recognized frameworks, such as the EN 301 549 standard and universal design principles, to ensure uniformity and efficacy. In practice, this could require AI developers to demonstrate compliance (for instance, via certification or audits) before deployment. The Supreme Court’s demand for mandatory rules in Raturi applies here: accessibility in AI cannot remain a voluntary “guideline”; it must be a baseline requirement with real consequences for non-compliance.
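As a rough illustration of how such a requirement could operate, the sketch below encodes the sector list above as “high-risk” and refuses clearance to any high-risk system without an accessibility certification; the sector names and the certification flag are hypothetical placeholders for what binding rules would need to define precisely.

```python
# Hypothetical pre-deployment gate; categories and fields are illustrative only.
HIGH_RISK_SECTORS = {
    "education", "employment", "healthcare", "social_security", "law_enforcement",
}

def is_high_risk(sector: str) -> bool:
    return sector.lower() in HIGH_RISK_SECTORS

def cleared_for_deployment(sector: str, accessibility_certified: bool) -> bool:
    """High-risk systems must hold an accessibility certification (e.g. against EN 301 549)."""
    return accessibility_certified if is_high_risk(sector) else True

print(cleared_for_deployment("education", accessibility_certified=False))  # False
print(cleared_for_deployment("education", accessibility_certified=True))   # True
```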
Accessibility must not be retrofitted. Any AI system targeting the general public should be designed under the principle of “universal design.” This means involving disability experts and real users in the very conception of the technology. For example, if the government funds an AI-based education app, its project plan should include co-design workshops with students who have visual, hearing or cognitive impairments. The Rajive Raturi ruling implicitly stressed this two-pronged approach – fixing the past while ensuring the future is different. Similarly, the EU’s Recital 80 explicitly requires that high-risk systems comply with accessibility requirements by design. NITI Aayog can adopt guidelines to operationalize universal design in AI development, so that disabled users are considered at every step of the technical lifecycle.
The UN Convention on the Rights of Persons with Disabilities enshrines “nothing about us without us.” This must be operationalized in AI governance. NITI Aayog should create a permanent advisory committee on AI and disability, including representatives from Disabled Persons’ Organisations (DPOs), the National Centre for Promotion of Employment for Disabled People, and other civil society groups. This committee would have formal consultation rights over any AI regulation or high-risk deployment by the government. Drawing on the spirit of Article 27 of the EU AI Act (which requires fundamental rights impact assessments before certain high-risk deployments), India’s policy could require that any new high-impact AI project document input from the disability community. This ensures that lived experience informs policy, and it builds trust by making inclusion a collaborative process.
Before any high-risk AI system is deployed by a public body or in essential private services, a mandatory Fundamental Rights Impact Assessment (FRIA) should be conducted. This would function as a rights-based audit, similar to environmental or equality impact assessments. The assessment template should be developed by the National Human Rights Commission in consultation with disability rights experts. It must specifically analyze impacts on persons with disabilities: Are the outputs accessible? Does the training data include disabled people? Could the AI’s logic inadvertently penalize disability-related patterns? For example, if a welfare AI model uses income and health data, the FRIA must check that it doesn’t set thresholds that systematically exclude those on disability pensions. All high-risk AI projects should publish their FRIAs publicly before deployment. This transparency forces designers to confront potential harms in advance and allows civil society to hold deployers accountable.
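A minimal sketch of how the disability-specific portion of such an assessment could be captured and enforced as a deployment gate is given below; every field name is hypothetical, standing in for the detailed template that the National Human Rights Commission and disability rights experts would need to develop.

```python
# Hypothetical structure for the disability-specific checks of a FRIA.
from dataclasses import dataclass

@dataclass
class DisabilityFRIA:
    outputs_accessible: bool                   # screen readers, captions, plain language
    disabled_people_in_training_data: bool     # representation actually verified
    disability_proxy_features_reviewed: bool   # e.g. medical spend, employment gaps
    dpo_consultation_documented: bool          # "nothing about us without us"
    assessment_published: bool                 # made public before deployment

def may_deploy(fria: DisabilityFRIA) -> bool:
    """Deployment is blocked unless every disability-specific check passes."""
    return all(vars(fria).values())

assessment = DisabilityFRIA(
    outputs_accessible=True,
    disabled_people_in_training_data=False,    # gap found during the assessment
    disability_proxy_features_reviewed=True,
    dpo_consultation_documented=True,
    assessment_published=True,
)
print(may_deploy(assessment))  # False -> close the data gap before going live
```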
Just as Article 5(1)(b) of the EU Act forbids AI that takes advantage of disability vulnerabilities, India’s framework should include a similar ban. This means outlawing AI practices that manipulate or exploit disabled people’s circumstances in a harmful way. For instance, personalized marketing algorithms should not target elderly or cognitively impaired users with deceptive ads, and educational AI should not push neurodiverse students into interaction patterns they cannot easily navigate. A legal provision could read: “AI systems shall not be used to knowingly or unknowingly exploit disability-related vulnerabilities to produce decisions that result in significant harm.” By covering both intentional and unintentional exploitation, this rule would act as a backstop against risky designs. It should cover fields like advertising, user interfaces, and even AI-driven persuasion (such as nudges or behavioral modification systems).
To prevent discrimination, India must require that all high-risk AI systems undergo rigorous bias testing on disability grounds. Echoing Article 10 of the EU Act, developers should be mandated to curate training datasets that represent persons with various disabilities. For example, facial recognition AI should be tested on images of people using wheelchairs, with prosthetics, or with atypical facial features and expressions. If a hiring AI filters video interviews, its facial and vocal analysis must work for candidates with speech impediments, facial differences, or limited movement. The government’s AI research division or standards body should issue guidelines on such testing protocols. Any discovered bias (e.g. a disproportionate false-negative rate for a subgroup) must be corrected before the system goes live. This technical rigor is not an extra burden but a necessary step to uphold the RPwD Act’s promise of equality.
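The disparity test itself is technically straightforward, which undercuts any claim that mandatory bias testing is burdensome. The sketch below computes per-group false-negative rates for a face-matching check; the group labels, ground truth and predictions are synthetic and purely illustrative.

```python
# Illustrative subgroup audit: false-negative (missed-match) rates per group.
def false_negative_rate(y_true, y_pred):
    misses = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    positives = sum(y_true)
    return misses / positives if positives else 0.0

# Ground truth 1 = genuine user; prediction 1 = match accepted (synthetic data)
groups = {
    "wheelchair_users":   ([1, 1, 1, 1, 1], [1, 0, 0, 1, 0]),
    "non_disabled_users": ([1, 1, 1, 1, 1], [1, 1, 1, 1, 1]),
}

rates = {g: false_negative_rate(t, p) for g, (t, p) in groups.items()}
print(rates)
# A markedly higher miss rate for one group is exactly the kind of disparity
# that should block release until the system is retrained and retested.
```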
Even the strongest rules mean little without enforcement. India needs empowered bodies to oversee AI compliance. One possibility is to expand the mandate of existing institutions: for example, empower the Chief Commissioner for Persons with Disabilities with the authority to investigate AI complaints, similar to how equality bodies operate in the EU. These bodies should have the power to inspect technical documentation, demand audits, and penalize violations. Penalties must be meaningful: deploying a high-risk AI system without the required accessibility features, or without performing a FRIA, should carry hefty fines or even temporary suspension orders. Just as the EU imposes fines of up to 7% of global turnover for infringements, India should calibrate penalties to ensure they act as a deterrent, not a cost of doing business. Amending the Information Technology Act, 2000 to include AI-specific offences could also put teeth behind these requirements.
While the Rajive Raturi court acknowledged that achieving full accessibility will take time, it insisted on an immediate baseline of mandatory rules. NITI Aayog should adopt the same logic for AI. India can take a phased approach – for example, by immediately banning exploitative AI and requiring FRIAs and audits, while giving developers a limited transition period to fully implement accessibility features. But the key is that some standards come into force now and are not merely aspirational. The EU’s timeline (where prohibitions took effect in 2025 and high-risk rules phase in by 2027) shows that regulation can be progressive yet firm. India could similarly set milestones (e.g. accessibility audits within one year, full compliance in three years) with enforcement mechanisms kicking in from the start. The baseline must be mandatory: AI systems deployed in 2027 should already meet the standards laid down today, not gradually approach them over a decade.
The assertion that existing laws suffice to handle AI may be comforting on paper, but on the ground it is seriously inadequate from a disability rights perspective. This analysis has shown that many AI-driven decisions – in exams, jobs, health, finance, or welfare – are poised to disadvantage persons with disabilities unless specific measures are taken. The Rajive Raturi judgment exposed the profound gap between aspirational disability rights and enforceable reality. It makes no sense to apply that lesson to buildings and buses but not to the algorithms that increasingly shape people’s lives. The NALSAR–CDS report “Finding Sizes for All” documented how Indians with disabilities repeatedly find their legal entitlements to accessibility unrealized in practice. The European Union’s AI Act demonstrates that comprehensive regulation can embed accessibility, universal design, and anti-discrimination protections as mandatory requirements, not optional features.
India stands at a crossroads. The rapid proliferation of AI systems in education, employment, healthcare, financial services, and government programs presents unprecedented opportunities for inclusion – or the risk of entrenching exclusion. The choice is not between innovation and regulation, but between inclusive innovation and innovation that perpetuates discrimination. NITI Aayog, as India’s premier policy think tank, bears responsibility for steering AI governance to align with our constitutional values of equality and dignity, our statutory obligations under the RPwD Act 2016, and our international commitments under the UNCRPD. The recommendations outlined here – mandatory accessibility standards, universal design, DPO consultation, rights-based impact assessments, bias testing, and enforcement – are not radical ideas. They are measures necessary to honor the promises that India has already made.
The Supreme Court’s declaration in Rajive Raturi must reverberate beyond the realm of ramps and Braille. When the Court insisted on “a baseline of non-negotiable rules” for accessibility, it articulated a principle equally applicable to AI. Artificial intelligence, which now mediates access to fundamental rights and essential services, demands the same non-negotiable standards. If India truly aspires to be a global AI leader, it must do so without compromising the rights of millions of its citizens. Designing AI systems that are inclusive, accessible and non-discriminatory from the outset is not only a moral imperative but a technical and economic advantage – technology that works better for all users.
The time for policy relook is now. As AI becomes more deeply embedded in Indian society, the costs of retrofitting accessibility or fixing algorithmic bias rise exponentially. NITI Aayog must move beyond the position that existing laws suffice and proactively develop a disability-inclusive AI governance framework. The Rajive Raturi judgment, the NALSAR-CDS report, and the EU AI Act together provide a blueprint. India’s 27.4 million persons with disabilities – and indeed all Indians who value equality – deserve nothing less than mandatory, enforceable standards that ensure artificial intelligence serves inclusion rather than exclusion. Accessibility is not a convenience; as India’s Supreme Court put it, it is “a fundamental requirement…to exercise [one’s] rights fully and equally”. Let that principle guide our path forward in AI policy.
References
Rajive Raturi v. Union of India, Writ Petition (Civil) No. 243 of 2005, Supreme Court of India judgment, 8 November 2024.
NALSAR University of Law, Centre for Disability Studies. Finding Sizes for All: A Report on the Status of the Right to Accessibility in India. Hyderabad: NALSAR-CDS, 2024.
Rights of Persons with Disabilities Act, 2016 (Act No. 49 of 2016), Government of India.
Rights of Persons with Disabilities Rules, 2017, Government of India.
“Don’t Need Separate Law for AI: Panel,” The Times of India, 6 November 2025.
Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (AI Act), Official Journal of the European Union, 12 July 2024 (in force from 1 August 2024).
European Commission. Guidelines on Prohibited Artificial Intelligence (AI) Practices Defined by the AI Act. Brussels, 2025.
European Commission. Guidelines on the Definition of AI Systems to Facilitate the AI Act’s Application. Brussels, 2025.
European Commission. Guidelines on the Scope of Obligations for Providers of General-Purpose AI Models under the AI Act. Brussels, 2025.
European Disability Forum (EDF). AI Act Implementation Toolkit – Version 1. Brussels: EDF, October 2024.
European Disability Forum (EDF). AI Act Implementation Toolkit – Version 2. Brussels: EDF, October 2025.
European Disability Forum (EDF). Plug and Pray? A Disability Perspective on Artificial Intelligence, Automated Decision-Making and Emerging Technologies. Brussels: EDF, 2018.
European Disability Forum (EDF). 101 Million Reasons to Build Better Artificial Intelligence (Accessible Infographic). Brussels: EDF, 2025.
United Nations. Convention on the Rights of Persons with Disabilities (UNCRPD), 2006.
Directive (EU) 2019/882 (European Accessibility Act), Official Journal of the European Union, 7 June 2019.
Directive (EU) 2016/2102 (Web Accessibility Directive), Official Journal of the European Union, 26 October 2016.
ETSI EN 301 549. Accessibility Requirements for ICT Products and Services. ETSI/CEN/CENELEC, current edition.
CEN/CENELEC EN 17161. Design for All — Accessibility Following a Design for All Approach in Products, Goods and Services, current edition.
The Bias Pipeline — Disability and Technology in India. https://thebiaspipeline.nileshsingit.org