Summary
AI systems learn patterns from data. When that data represents mostly neurotypical behaviour, the system learns that neurotypical is normal and flags everything else as deviant, risky, or low-quality. This is not a bug in a few bad algorithms. It is a structural feature of how machine learning works, and it affects neurodivergent people across hiring, education, healthcare, content moderation, and criminal justice.
The bias is measurable. Brandsen et al. (2024, Autism Research) tested 11 language model encoders and found consistently high levels of bias against autism-related terms. Sentences like “I have autism” showed stronger negative associations than “I am a bank robber.” The word “autism” was negatively associated with honesty, despite literal honesty being one of the best-documented autistic characteristics. These biases are embedded in the foundational language models that power most contemporary AI systems.
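This kind of association gap is straightforward to probe with open tooling. The sketch below is a minimal embedding-association test in the spirit of, though not identical to, the Brandsen et al. protocol; the model name and probe sentences are illustrative assumptions, not the paper’s materials.

```python
# Minimal embedding-association probe in the spirit of Brandsen et al.
# (2024); the model and probe sentences here are illustrative, not the
# paper's materials.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")

targets  = ["I have autism.", "I am a bank robber."]
positive = ["This person is honest.", "This person is trustworthy."]
negative = ["This person is dishonest.", "This person is dangerous."]

t_emb = model.encode(targets)
p_emb = model.encode(positive)
n_emb = model.encode(negative)

for sentence, emb in zip(targets, t_emb):
    # Mean similarity to positive statements minus mean similarity to
    # negative ones; below zero suggests a negative-valence association.
    score = (cosine_similarity([emb], p_emb).mean()
             - cosine_similarity([emb], n_emb).mean())
    print(f"{sentence!r}: valence association = {score:+.3f}")
```

A score below zero means the encoder places the target sentence closer to negatively valenced statements, which is the pattern the study reports for autism-related terms.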
What the evidence shows
Hiring
An estimated 99% of Fortune 500 companies use AI recruitment tools. These tools systematically disadvantage autistic candidates through several mechanisms:
Resume screening. A 2024 study from the University of Washington (Glazko et al., presented at ACM FAccT 2024) found that AI hiring tools rank resumes mentioning autism-related awards or memberships significantly lower than otherwise identical applications without those credentials. The discrimination occurs at the initial screening stage, before a human ever sees the application. A sketch of how to audit for this bias follows below, after the mechanisms.
Video interview analysis. HireVue’s facial analysis system assigned “employability scores,” with facial actions comprising approximately 29% of the score. Candidates with atypical eye contact, flat or unusual prosody, or non-standard facial expressions (all common autistic presentation patterns) received lower scores regardless of their actual qualifications. HireVue discontinued facial analysis in March 2020 following criticism, and has since worked with Integrate Autism Employment Advisors on product design.
“Enthusiasm” and “engagement” metrics. Many AI interview tools score candidates on subjective qualities like enthusiasm, engagement, and cultural fit. These metrics are proxies for neurotypical social performance. An autistic candidate who gives precise, literal answers without performative warmth may score as “low engagement” despite being highly qualified.
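The UW methodology suggests a simple audit pattern employers can adapt: run the tool repeatedly on paired resumes that differ only in a disability-related credential and look for a systematic skew. The template below is a hedged sketch, not the study’s protocol; `rank_resumes`, the resume text, and the award name are all stand-ins.

```python
# Hedged audit template in the spirit of Glazko et al. (2024), not their
# exact protocol: rank paired resumes differing only in one credential.
import random
from collections import Counter

BASE = "Experience: ... Education: ... Skills: ..."  # identical content
CONTROL = BASE
ENHANCED = BASE + "\nAwards: Autism Leadership Scholarship, 2023"

def rank_resumes(first: str, second: str) -> str:
    """Stand-in for the screening tool under audit; replace this coin
    flip with a real call. Must return "first" or "second"."""
    return random.choice(["first", "second"])

wins = Counter()
for _ in range(100):
    # Randomise presentation order so position effects in the ranker
    # don't masquerade as disability bias.
    if random.random() < 0.5:
        winner = rank_resumes(CONTROL, ENHANCED)
        wins["control" if winner == "first" else "enhanced"] += 1
    else:
        winner = rank_resumes(ENHANCED, CONTROL)
        wins["enhanced" if winner == "first" else "control"] += 1

print(wins)  # A large, consistent skew toward "control" indicates bias.
```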
The EEOC reported 488 autism-related disability discrimination charges in fiscal year 2023, with merit resolutions more than tripling between 2016 and 2023. The rise coincides with increased AI adoption in hiring, though the data show correlation rather than established causation.
Education
Automated essay scoring systems assess writing primarily on surface features: grammar, structure, coherence, and vocabulary range. Research documents that these systems reproduce rather than eliminate bias. Autistic and neurodivergent writing styles (structured, literal, repetitive in ways that serve clarity) may be penalised by algorithms that equate lexical variety with quality.
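A toy example makes the mechanism concrete. The scorer below reduces “vocabulary range” to type-token ratio, a standard lexical-variety feature; the passages are invented for illustration and no deployed system is reproduced here.

```python
# Toy surface-feature scorer: reduces "vocabulary range" to type-token
# ratio. Passages are invented; no deployed system is reproduced here.
import re

def type_token_ratio(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words)

literal = ("The valve opens. The valve closes. The valve controls flow. "
           "The valve must be inspected monthly.")
varied  = ("The valve opens; this aperture then shuts. Such a mechanism "
           "governs throughput and warrants monthly inspection.")

print(f"literal: {type_token_ratio(literal):.2f}")  # repetition-for-clarity
print(f"varied:  {type_token_ratio(varied):.2f}")   # rewarded as "quality"
```

The “literal” passage repeats its key term deliberately and is arguably clearer, yet the variety feature scores it lower.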
AI plagiarism detectors have produced documented false positives against neurodivergent writers. Multiple universities (Vanderbilt, Michigan State, University of Texas at Austin) have disabled Turnitin’s AI detection feature due to high false-positive rates. The mechanism: detectors treat low perplexity (text a language model finds highly predictable) and low burstiness (little sentence-to-sentence variation) as signatures of AI generation, and formal, structured, consistent writing scores low on both. A documented case involves an autistic college student who received a zero and a disciplinary warning based solely on AI detector output, despite explaining her neurodivergent communication style.
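To see why this backfires, the sketch below computes both signals, with GPT-2 standing in as the scoring model. Detectors’ actual models and thresholds are proprietary, so this illustrates the mechanism, not any specific product.

```python
# Illustrates the perplexity and burstiness signals AI detectors rely
# on; GPT-2 stands in for whatever proprietary model a detector uses.
import statistics
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss over the sequence.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def burstiness(text: str) -> float:
    # Crude proxy: spread of sentence lengths, in words.
    lengths = [len(s.split()) for s in text.split(". ") if s]
    return statistics.pstdev(lengths)

formal = ("The experiment was conducted in three stages. Each stage was "
          "documented in the log. Each stage was reviewed before the "
          "next stage began.")
print(f"perplexity = {perplexity(formal):.1f}, "
      f"burstiness = {burstiness(formal):.2f}")
# Consistent, formal, structured prose scores low on both measures --
# exactly the profile detectors associate with machine generation.
```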
AI proctoring software flags stimming, fidgeting, gaze aversion, and atypical movement during exams. Systems cannot distinguish between neurodivergent self-regulation (hand-flapping, rocking, looking away from the screen) and suspected academic misconduct. Students with Tourette’s, cerebral palsy, ADHD, and autism are all documented as being flagged. The Center for Democracy and Technology has published detailed analysis of how automated proctoring discriminates against disabled students.
Learning management systems may misinterpret neurodivergent engagement patterns. Hyperfocus followed by disengagement (a typical ADHD working pattern) can be flagged as “irregular engagement” by algorithms that expect steady, consistent interaction. A 2024 Frontiers review found that 92% of the online-learning studies it examined did not consider neurodiversity as a factor.
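A toy illustration (invented numbers, not a real LMS rule) shows how a heuristic tuned for steady interaction flags the hyperfocus pattern even when total effort is identical:

```python
# Two students with the same 43 hours of total activity over 8 weeks.
steady     = [5, 6, 5, 5, 6, 5, 5, 6]
hyperfocus = [18, 14, 0, 0, 1, 9, 0, 1]  # ADHD-typical burst-and-rest

def weeks_flagged(activity, min_hours=2):
    # A common heuristic: flag any week below a minimum-activity floor.
    return sum(h < min_hours for h in activity)

print(weeks_flagged(steady))      # 0 flags
print(weeks_flagged(hyperfocus))  # 5 flags -- "at risk", equal effort
```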
Healthcare
AI diagnostic tools trained on predominantly white, male, middle-class datasets miss autism presentations in women, people of colour, and people with intellectual disabilities. Gender bias in AI clinical summaries has been documented: identical cases are described differently depending on the patient’s gender, leading to differential resource allocation.
Mental health chatbots designed without neurodivergent input may lack understanding of atypical communication patterns or interpret autistic directness as hostility. Only 16% of LLM-based chatbot studies have undergone clinical efficacy testing.
Content moderation
TikTok admitted in 2019 to internal policies suppressing content from users flagged as “vulnerable to cyberbullying,” including people with autism, Down syndrome, and facial disfigurement. An automated risk assessment system (AutoR) classified disabled creators into risk groups and suppressed their algorithmic reach. A 2025 article in the Journal of Gender Studies introduced the concept of “algorithmic ableism” to describe how platforms encode ableist ideologies into recommendation algorithms.
The AutSPACEs project (Alan Turing Institute, Cambridge, published 2024) demonstrates an alternative: participatory design with autistic users to develop content moderation policies that reflect autistic community needs. The community identified anti-autistic attitudes and stigma as its primary harm concerns, a different set of priorities from the platforms’ focus on explicit content and violence.
Criminal justice
Research is thinner here, but concerns are documented. Facial recognition is least reliable for people of colour, women, and nonbinary individuals; neurodivergent individuals with atypical facial expressions, gaze patterns, or facial asymmetry have not been specifically studied but are likely affected. In police interviews, gaze aversion and reduced verbal engagement (common autistic responses to stress) may be misinterpreted as deception or guilt by both human officers and automated analysis systems.
Policy responses
EU AI Act (in force August 2024) prohibits emotion recognition in workplace and education settings (Article 5, effective February 2025) and takes a risk-based approach with escalating obligations for high-risk AI systems. It acknowledges that existing algorithms discriminate against disabled individuals. Penalties reach up to 35 million euros or 7% of global turnover.
US ADA guidance (July 2024) confirmed that AI-based disability screen-outs violate the Americans with Disabilities Act. Employers must provide reasonable accommodations when using algorithmic tools, and algorithms that screen out disabled workers may violate ADA prohibitions even if the employer did not intend discrimination. Enforcement has been complicated by shifting political priorities.
UK Equality Act 2010 covers direct and indirect discrimination on disability grounds, but current law is not keeping pace with AI developments. The ICO is preparing a statutory code of practice on AI and automated decision-making.
Open questions
How do AI biases interact at intersections — neurodivergent people of colour, neurodivergent women, neurodivergent people with intellectual disability? The research on intersectional AI bias is almost nonexistent.
What are the long-term effects of biased AI systems on neurodivergent people’s employment, education, and mental health? No longitudinal studies exist.
Can “debiasing” techniques genuinely address the structural nature of the problem, or do they merely cosmetically adjust outputs while leaving the underlying architecture unchanged?
Implications for practice
If you are neurodivergent and applying for jobs: be aware that AI resume screening and video interview analysis may disadvantage you. Where possible, request accommodations or alternative assessment methods — this is a legal right under the ADA (US) and Equality Act (UK).
If you are an employer using AI hiring tools: audit them for disability bias. Under the ADA, tools that screen out disabled candidates can create liability regardless of intent. The UW research provides a methodology for testing resume screening bias (see the audit sketch under Hiring).
If you work in education: be aware that AI proctoring, essay scoring, and plagiarism detection may produce false positives for neurodivergent students. Human review should always be available, and neurodivergent communication styles should not be penalised.
If you are a platform designer: the AutSPACEs project demonstrates that content moderation designed with disabled users produces different and better policies than moderation designed about them.
Key sources
- Brandsen et al. (2024). “Prevalence of bias against neurodivergence-related terms in artificial intelligence language models.” Autism Research, 17(2), 234–248. DOI: 10.1002/aur.3094
- Glazko et al. (2024). “Identifying and Improving Disability Bias in GPT-Based Resume Screening.” ACM FAccT 2024.
- Lowrie et al. (2024). “How to co-create content moderation policies: the case of the AutSPACEs project.” Data & Policy, Cambridge University Press.
- Rauchberg (2025). “Articulating algorithmic ableism.” Journal of Gender Studies, 34(8).
- U.S. Department of Justice (2024). “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring.”
- Center for Democracy and Technology. “How Automated Test Proctoring Software Discriminates Against Disabled Students.”