Summary

Artificial intelligence systems are trained on data that reflects the neurotypical majority. “Normal” is their baseline. This creates a fundamental problem for neurodivergent people: AI is, by default, a normalising technology. Every hiring algorithm that learns from successful employees, every emotion-detection system trained on neurotypical facial expressions, every educational platform that measures progress against typical developmental milestones encodes neurotypical patterns as the standard and flags departures as errors.

AI also has genuine potential to accommodate neurological difference in ways that were previously impossible: adaptive environments that respond to individual sensory profiles, communication tools that bridge between neurotypes, knowledge systems that make scattered information accessible. Umwelten is itself an example of AI-assisted knowledge work (see About).

The real question is who designs it, what it optimises for, and whose definition of success it encodes.

What the evidence shows

The normalisation problem

AI systems learn patterns from training data. When that data represents mostly neurotypical behaviour, the system learns that neurotypical is normal and everything else is deviant. This is not a design flaw that can be patched. It is structural, arising from how machine learning works.
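
A deliberately minimal sketch of that structure, using one invented behavioural feature and made-up numbers: any detector fitted to majority-group data will score systematic minority difference as anomaly, however functional that difference is.

```python
# Toy sketch, not any deployed system: an "anomaly" detector fitted to
# majority-group data scores systematic minority difference as deviance.
import statistics

# Invented behavioural feature: conversational pause length in seconds.
neurotypical_pauses = [0.4, 0.5, 0.6, 0.5, 0.4, 0.7, 0.5, 0.6]  # "training data"
autistic_pauses = [1.8, 2.1, 1.6]  # systematically longer, equally functional

mean = statistics.mean(neurotypical_pauses)
sd = statistics.stdev(neurotypical_pauses)

for pause in autistic_pauses:
    z = (pause - mean) / sd
    # A threshold tuned on the majority inevitably marks the minority "abnormal".
    status = "FLAGGED" if abs(z) > 3 else "ok"
    print(f"pause={pause}s  z={z:+.1f}  -> {status}")
```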

Hiring algorithms. An estimated 99% of Fortune 500 companies use AI recruitment tools. These tools systematically disadvantage autistic candidates: they weight eye contact, emotional affect matching, and social fluency as selection criteria, even when these are irrelevant to the job. A 2024 analysis of AI word-embedding models found negative associations between autism-related terms and positive attributes such as honesty, despite literal honesty being a well-documented autistic characteristic. The algorithms learn the biases present in the data on past “successful” hires, who are overwhelmingly neurotypical.
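
Findings like the word-embedding result are typically measured with association tests (in the spirit of WEAT). The sketch below uses invented four-dimensional vectors, not output from any real model, purely to show what such a test computes.

```python
# Toy embedding association test (in the spirit of WEAT). The vectors are
# invented for illustration; they come from no real model.
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vecs = {  # hypothetical 4-dimensional word embeddings
    "autistic":  np.array([0.9, 0.1, 0.3, 0.2]),
    "honest":    np.array([0.1, 0.9, 0.2, 0.4]),
    "deceptive": np.array([0.8, 0.2, 0.4, 0.1]),
}

# Positive score = the target term sits closer to the negative attribute.
bias = cos(vecs["autistic"], vecs["deceptive"]) - cos(vecs["autistic"], vecs["honest"])
print(f"association bias: {bias:+.2f}")
```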

Emotion recognition AI. Facial expression analysis, voice tone analysis, and body language detection are deployed in schools, workplaces, and hiring processes. Autistic facial expression, prosody, and body language differ systematically from neurotypical norms. A 2023 study of real-time facial emotion recognition in Neural Computing and Applications found that accuracy dropped significantly when systems were tested on autistic children. Systems trained on neurotypical affect data interpret autistic stillness as calm when it may be shutdown, and autistic intensity as aggression when it may be engagement.
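
This kind of failure only surfaces under disaggregated evaluation, where accuracy is reported per group rather than pooled. A toy sketch with invented labels and predictions:

```python
# Toy sketch of disaggregated evaluation: the pooled accuracy figure hides
# the subgroup failure. All labels and predictions are invented.
from collections import defaultdict

# (group, true state, model prediction)
results = [
    ("neurotypical", "happy", "happy"), ("neurotypical", "sad", "sad"),
    ("neurotypical", "angry", "angry"), ("neurotypical", "calm", "calm"),
    ("autistic", "shutdown", "calm"),   ("autistic", "engaged", "angry"),
    ("autistic", "happy", "happy"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, pred in results:
    total[group] += 1
    correct[group] += (truth == pred)

pooled = sum(correct.values()) / len(results)
print(f"pooled accuracy: {pooled:.0%}")                     # 71% looks acceptable
for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%}")  # 100% vs 33%
```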

The EU AI Act (in force since August 2024) bans emotion-recognition systems in workplace and educational settings, in recognition that these systems are unreliable and discriminatory. Enforcement is nascent, and the ban does not extend to care settings, where the same technology is also deployed.

Educational AI. Learning analytics systems flag “abnormal” engagement patterns — hyperfocus followed by disengagement is interpreted as non-compliance. Automated essay scoring penalises neurodivergent writing structures. Automated exam proctoring flags stimming as suspicious behaviour. AI “plagiarism detectors” may flag autistic writing styles (precise, systematic, atypical syntax) as AI-generated.

Content moderation. A 2023 Nature study found that content moderation algorithms disproportionately harm autistic users, flagging context-dependent humour, literal language, and unconventional expression. Social media platforms built around neurotypical social norms can silence the very communication styles neurodivergent people rely on.

The surveillance problem

AI-powered screening and monitoring tools create infrastructure that can serve either support or surveillance, depending on who controls it.

Autism detection. AI systems can now detect autism from home videos, voice patterns, eye-tracking, electronic health records, and gait analysis. A 2025 meta-analysis found multimodal approaches achieve 80%+ accuracy. This has potential value for early identification — but it also creates surveillance infrastructure where autism identification is not opt-in, and where behavioural data collected “for diagnosis” can be repurposed for insurance, employment, or benefits decisions.

Prenatal and genetic screening. Polygenic screening for autism in embryo selection, while not yet available, is being actively developed. The Down syndrome precedent — where prenatal screening has led to near-total selective termination in some countries — casts a long shadow. The Autistic Self Advocacy Network (ASAN) and 80 disability rights organisations opposed the NIH’s proposed autism registry in May 2025, citing data privacy risks and the historical misuse of disability data. The question is not abstract: if autism can be screened for, who decides whether that screening leads to support or to selection?

Care facility monitoring. Wearable sensors, movement trackers, and camera-based monitoring systems in care settings for people with intellectual disabilities exist on a spectrum from genuinely supportive to coercively controlling. The person being monitored often has no say. Data privacy frameworks (GDPR Article 22, UNCRPD) exist but enforcement is weak, and informed consent for people with intellectual disabilities is largely symbolic in practice.

The masking machine

Some AI applications actively teach masking — suppressing autistic behaviour to perform neurotypicality — while presenting it as support.

Gamified “social skills” apps use AI to coach eye contact, reduce stimming, and train neurotypical conversation patterns. Robot-assisted social skills programmes improve metrics like “social response” and eye contact frequency, but these metrics conflate behaviour suppression with genuine improvement. The long-term evidence on masking is clear: it is associated with anxiety, depression, burnout, and reduced self-advocacy. See Masking and camouflaging.

David Ruttenberg’s analysis of autism AI tools (2024) identifies a specific failure mode: these systems optimise for observable behaviour and miss genuine internal states. An autistic person who goes quiet and still may be coded as “regulated” by systems trained on neurotypical data, when they are actually in acute distress. The system sees compliance and reports success.

The distinction matters: supporting autistic people to communicate, self-regulate, or develop skills they want is different from training compliance to neurotypical norms. The harm lies in conflating the two.

Real potential for accommodation

AI offers genuine accommodations:

Communication. AI-enhanced AAC (augmentative and alternative communication) can transform communication for non-verbal and minimally verbal people. Large language models improve prediction and reduce the “cold start” problem in AAC systems, as sketched below. However, 30–50% of AAC users abandon their systems because the systems were not designed with input from actual non-verbal users: a participation failure, not a technology failure.
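
To make the “cold start” point concrete, here is a toy sketch (vocabulary and counts invented): a purely personal prediction model offers a new user nothing, which is the gap a general language-model prior can fill.

```python
# Toy sketch of the AAC "cold start" problem; all vocabulary and counts
# are invented, and a real system would use a full language model, not counts.
from collections import Counter

personal_history: list[str] = []  # a brand-new AAC user: no usage data yet
general_prior = Counter({"want": 40, "need": 30, "water": 5, "help": 10})

def predict(prefix: str, k: int = 2) -> list[str]:
    # Blend personal counts with the general prior; personal use dominates
    # once it exists, while the prior covers the cold start.
    blended = general_prior + Counter(personal_history)
    return [w for w, _ in blended.most_common() if w.startswith(prefix)][:k]

print(predict("w"))                 # cold start: ['want', 'water']
personal_history += ["water"] * 50  # the user's own vocabulary accumulates
print(predict("w"))                 # personalised: ['water', 'want']
```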

Sensory environments. Smart building systems can adjust lighting, sound, and temperature based on individual sensory profiles. Adaptive lighting that avoids fluorescent flicker, automated sound management, and predictive adjustment before overload occurs are all technically feasible. Most current systems lack genuine adaptation — they use manual presets — but the trajectory is toward responsive environments. See Sensory-friendly design.
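
As a sketch of the difference between manual presets and genuine adaptation, the toy code below (field names and thresholds invented; real building-control APIs differ) keys actions to an individual profile rather than a fixed house mode.

```python
# Sketch of profile-driven (rather than preset-driven) adjustment. Field
# names and thresholds are invented; real building-control APIs differ.
from dataclasses import dataclass

@dataclass
class SensoryProfile:
    max_lux: int  # individual light tolerance
    max_db: int   # individual sound tolerance

def adjust(profile: SensoryProfile, lux: int, db: int) -> list[str]:
    """Map current readings against one person's profile, not a house preset."""
    actions = []
    if lux > profile.max_lux:
        actions.append(f"dim lighting to {profile.max_lux} lux")
    if db > profile.max_db:
        actions.append("engage sound masking")
    return actions

# e.g. a person with low light tolerance in a bright, noisy room:
print(adjust(SensoryProfile(max_lux=300, max_db=55), lux=450, db=62))
```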

Personalised learning. AI systems that accommodate sensory profiles, learning pace, and communication style can make education more accessible. Interactive sensory walls using AI showed 40% longer engagement in pilot special education settings. But most adaptive learning systems still measure success against neurotypical baselines.

Knowledge and connection. AI can make scattered knowledge accessible — synthesising research, finding connections, translating between languages and registers. Umwelten is an example: AI-assisted research and drafting allows one person to produce a knowledge resource that would otherwise require a team.

The benefit materialises only if the system is designed with neurodivergent people, for their own goals, using their definition of success. A scoping review of participatory AI design found that only 23% of AI systems designed for neurodivergent users included neurodivergent people in the design process. That statistic speaks for itself.

The predictive processing parallel

There is a suggestive (but easily over-extended) parallel between how AI and autistic brains process information. Both operate through prediction and pattern recognition. Both excel at detecting anomalies and fine-grained distinctions. Both can struggle with context-dependent flexibility.

The predictive processing theory of autism proposes that autistic brains assign different precision weights to sensory predictions versus sensory evidence (see Predictive processing and autism). Large language models also operate through prediction. Some researchers have explored whether autistic cognition and AI cognition share computational principles.
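
One common way the precision-weighting idea is formalised is as a gain on prediction error, with the gain set by the relative precision of prior and evidence; the numbers below are invented for illustration.

```python
# A gain on prediction error set by the relative precision of prior and
# sensory evidence (a standard Bayesian update for one scalar belief).
def update(prior: float, evidence: float,
           prior_precision: float, sensory_precision: float) -> float:
    gain = sensory_precision / (prior_precision + sensory_precision)
    return prior + gain * (evidence - prior)

# Same evidence, different weightings: higher sensory precision pulls the
# belief further toward the incoming signal.
print(update(0.0, 1.0, prior_precision=4.0, sensory_precision=1.0))  # 0.2
print(update(0.0, 1.0, prior_precision=1.0, sensory_precision=4.0))  # 0.8
```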

This parallel is intellectually interesting but risks dehumanisation (“autistic people are like machines”). It should not be used to suggest that autistic people should be more like AI, or that AI-as-prediction offers a model to “fix” autism. The parallel is descriptive, not prescriptive.

Open questions

How do AI systems affect autistic mental health over time? No longitudinal studies track outcomes for autistic people exposed to AI interventions over five or more years.

Can participatory AI design genuinely include people with intellectual disabilities, minimal speech, or very high support needs — not just articulate, university-educated autistic adults? The methods exist (multiple communication modes, extended timelines, explicit power-sharing) but they are rarely used.

How should AI literacy be taught to neurodivergent people, particularly those vulnerable to algorithmic manipulation? Social media algorithms exploit autistic pattern-seeking and special interests through variable-reward schedules; over half of ADHD and autism content online is misleading.

What does genuine accommodation look like in AI, as opposed to normalisation with a friendly interface? The EU AI Act’s accessibility clause (Article 16) gestures toward accommodation but does not mandate social-model thinking. An AI system can be “accessible” while still pushing users toward neurotypical norms.

Implications for practice

For anyone choosing AI tools for sensory support, education, or care:

Ask what the system optimises for. If it measures success by neurotypical benchmarks (eye contact, stillness, “appropriate” emotional expression), it is a normalisation tool regardless of what it calls itself.

Ask who designed it and with whom. If neurodivergent people were not involved in the design, the system encodes assumptions about them rather than knowledge from them.

Ask who controls the data. If the person being assessed or monitored cannot access, correct, or delete data about themselves, the system is surveillance, not support.

Ask whether it accommodates or adapts. A system that changes the environment to fit the person (accommodation) is aligned with the social model. A system that changes the person to fit the environment (adaptation/normalisation) is not — regardless of how gently it does so. See The accommodation-exposure question.

Be especially cautious with vulnerable populations. AI tools used with people with intellectual disabilities, who may not be able to consent to or understand data collection, require rigorous safeguarding. The benefit to the person — not to the institution — must drive adoption.

A note about this wiki

Umwelten is itself an AI-assisted project. The colophon describes the process: AI helps with research, drafting, and connection-finding; a human reviews and approves everything. This transparency is deliberate. If we are going to discuss AI’s relationship to neurodivergent minds, we should be honest about the fact that this discussion was itself produced through human-AI collaboration. See About.

Key sources

  • AI hiring bias and autism (Bloomberg Law, 2024; PMC, 2025)
  • Emotion recognition accuracy on autistic children (Neural Computing and Applications, 2023)
  • EU AI Act and disability provisions (OECD.AI; European Disability Forum)
  • Participatory AI design scoping review (CHI 2025)
  • ASAN positions on autism registry and genetic screening (2025)
  • David Ruttenberg, “The Invisible Safety Crisis” (2024)
  • Social media misinformation on ADHD/autism (Business Standard, 2026)