Summary

AI can now detect autism from home videos, voice patterns, eye-tracking, electronic health records, and gait analysis. Polygenic risk scores for autism are being developed. Wearable sensors and camera-based monitoring systems are deployed in care facilities for people with intellectual disabilities. Each of these technologies has potential benefits: earlier identification, better support, safer environments. Each also creates infrastructure that can serve surveillance, selection, and control. The question is not whether the technology works but who controls it, who benefits, and what happens to the data.

This is where the neurodiversity movement’s concerns about technology are most urgent and most justified.

What the evidence shows

AI autism detection

SenseToKnow (Duke University, NIH-funded) is a tablet-based app that analyses gaze, facial expressions, head movements, and response to name. It achieves an AUC of 0.92 and sensitivity of 83–88% across gender and racial groups. The positive predictive value (the probability that a positive screen is correct) is much lower: only 13% for the youngest toddlers and 40% for children aged 24–36 months. This means most positive screens in younger children are false positives.
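The gap between high sensitivity and low positive predictive value is a base-rate effect: when a condition is rare, even a screen with good sensitivity and specificity produces mostly false positives. The relationship follows from Bayes' theorem. A minimal sketch, where the specificity and prevalence figures are illustrative assumptions chosen for the example, not SenseToKnow's reported values:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(condition | positive screen),
    computed via Bayes' theorem from the screen's error rates."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative assumptions: sensitivity 0.85, specificity 0.84,
# prevalence roughly 1 in 36 (approximate US CDC estimate for children).
print(round(ppv(0.85, 0.84, 1 / 36), 2))  # → 0.13
```

With these assumptions, a positive screen is correct only about 13% of the time, even though the screen misses few autistic children. Lower prevalence in the youngest age bands pushes PPV down further, which is why a tool can be genuinely accurate in ROC terms and still generate mostly false alarms in practice.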

Other systems in development use eye-tracking, voice analysis, home video analysis, movement pattern detection, and EHR-based prediction. A 2025 scoping review found multimodal approaches (combining several data sources) achieve 80%+ accuracy.

The clinical case for early detection is real: earlier identification can lead to earlier support, and sensory differences are often observable in infancy before social-communication features become apparent (see Grace Baranek’s work on early sensory markers). But earlier detection only helps if it leads to support rather than surveillance, and to accommodation rather than normalisation.

The surveillance concern

AI screening tools create infrastructure. A system that analyses home videos to detect autism collects facial data from children. A system that monitors eye-tracking in classrooms collects continuous biometric data. An EHR-based prediction system flags children before they or their parents have sought assessment.

The Autistic Self Advocacy Network (ASAN) and 80 disability rights organisations opposed the NIH’s proposed national autism registry in May 2025. Their concerns were specific: potential for autistic people’s private medical data to be shared without consent, lack of meaningful engagement with autistic people in planning, and the need for “fundamental privacy safeguards to prevent misuse and abuse.” The NIH initially announced the registry; HHS contradicted this within days, claiming no new registry would be created. The episode demonstrated both the institutional appetite for autism data collection and the community’s capacity to resist it.

Genetic testing and prenatal screening

No commercial prenatal test for autism currently exists as routine screening. But the technology is developing. Polygenic risk scores for autism are being refined, and embryo selection based on polygenic traits is an active area of reproductive medicine.
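For context on the mechanism: a polygenic risk score is, at its core, a weighted sum over many genetic variants, where each weight is an effect size estimated from genome-wide association studies and each dosage is the number of risk-allele copies a person carries. A minimal sketch with invented numbers (the variants, weights, and dosages below are illustrative, not real autism effect sizes):

```python
def polygenic_score(dosages: list[int], weights: list[float]) -> float:
    """Weighted sum of risk-allele counts (0, 1, or 2 copies per variant),
    with weights taken from GWAS effect-size estimates."""
    assert len(dosages) == len(weights)
    return sum(w * d for w, d in zip(weights, dosages))

# Three hypothetical variants; each dosage is copies of the risk allele.
score = polygenic_score(dosages=[2, 0, 1], weights=[0.02, 0.05, 0.01])
print(score)  # → 0.05
```

Each variant's effect is individually tiny; the score only becomes meaningful as a ranking across a population. That ranking, rather than any deterministic prediction, is what makes embryo-selection applications conceivable, and why self-advocates treat refinement of these scores as a step toward screening infrastructure.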

The American Society for Reproductive Medicine concluded in December 2025 that preimplantation genetic testing for polygenic disorders (PGT-P) “is not ready for clinical practice and should not be offered as a reproductive service.” But the technology is advancing faster than the guidelines.

The Spectrum 10K controversy (2021) illustrated the community’s concern. A large-scale UK autism genomics study led by Simon Baron-Cohen at Cambridge planned to collect genetic data from 10,000+ autistic people. The autistic community responded with a boycott campaign, citing fear that genetic data would be used for “eugenic purposes in the future” — specifically creating prenatal tests to prevent autistic people from existing. The project was paused for community consultation.

The Down syndrome parallel is the reference point. Prenatal screening for Down syndrome is routine in many countries, and termination rates following diagnosis reach 90%+ in some contexts. Autistic self-advocates are explicit: they do not want this trajectory repeated for autism. The right to exist as a neurodivergent person is non-negotiable.

Care facility surveillance

Wearable GPS devices, heart rate monitors, camera systems, and automated alert systems are used in residential care for people with intellectual disabilities. The stated justifications (elopement prevention, fall detection, health monitoring) are often legitimate. Research documents that people with intellectual disabilities in care settings already experience “lack of privacy and minimal control over their privacy,” and those with significant intellectual impairment have often “accepted that their privacy is not their own to manage through a life of invasive personal care.”

The UNCRPD (Convention on the Rights of Persons with Disabilities) positions autonomy and informed consent as central rights. Surveillance technology imposed without genuine consent (often the case for people with intellectual disabilities who cannot independently withdraw consent) sits in tension with these rights regardless of its safety justification.

GDPR Article 22 gives data subjects the right not to be subject to decisions “based solely on automated processing, including profiling, which produces legal effects.” One of the article’s exceptions permits such decisions on the basis of the data subject’s explicit consent, which raises profound questions for people with intellectual disability who may not understand what they are consenting to. Proxy consent (given by a guardian) does not clearly satisfy the GDPR’s requirement that consent be “freely given” when the person themselves cannot withdraw it.

GINA (the US Genetic Information Nondiscrimination Act) prohibits employers from using genetic information in hiring but does not cover life insurance, disability insurance, or long-term care insurance. It focuses on “detection of genetic information,” not on interpretive analyses and risk scores derived from it. AI systems that predict traits from genetic data may fall through this gap.

Open questions

How close are we to routine prenatal autism screening? The technology is not ready, but the trajectory is clear. The disability community needs to engage now, not after the tests are available.

Can surveillance technology in care settings be redesigned to give the person being monitored genuine control — including the ability to turn it off? This is a design question with ethical implications.

What happens to autism screening data over time? Can it follow a person from childhood through education, employment, and insurance? The legal protections are thinner than most people assume.

Implications for practice

If you are involved in autism research involving genetic data: the Spectrum 10K episode demonstrated that proceeding without autistic community consultation is not acceptable. Participatory governance — not just participatory design — is required.

If you work in care settings considering surveillance technology: the question is not “does this make the person safer?” but “does the person (or their genuine advocate) want this, and can they control or refuse it?”

If you are an autistic person or family member: your data — genetic, behavioural, biometric — has value and vulnerability. You have the right to know what is collected, how it is used, and who has access.

Key sources

  • SenseToKnow validation: Duke University Psychiatry & Behavioral Sciences, multiple publications.
  • ASAN and ACLU (May 2025). Letter to HHS on proposed autism registry.
  • Spectrum 10K documentation: The Transmitter, NeuroClastic, Thinking Person’s Guide to Autism.
  • ASRM (December 2025). Statement on PGT-P.
  • GDPR Article 22: automated decision-making provisions.
  • GINA (2008): scope and limitations.