Summary
The evidence base for sensory processing interventions is substantially weaker than public perception suggests. This is not because interventions don't work; some clearly do, for some people, some of the time. It is because the nature of sensory processing makes conventional research methods difficult to apply, and practitioner enthusiasm has historically outrun empirical validation.
This page exists to help readers evaluate claims honestly. An evidence-transparent approach requires acknowledging what we know, what we don't, and why the gaps exist.
What the evidence shows
The state of play
Systematic reviews consistently describe the sensory intervention literature in the same vocabulary: considerable heterogeneity, limited study quality, high risk of bias, limited follow-up, and a lack of treatment fidelity. A comprehensive AHRQ review found that some interventions may produce modest short-term improvements, but the evidence base is small and durability beyond the intervention period is unclear.
This is slowly improving. Ayres Sensory Integration now has strong evidence for individualised goal attainment when delivered with fidelity. Snoezelen and multi-sensory environments show large effect sizes for people with intellectual disabilities. These are islands of stronger evidence in a sea of mixed and limited findings.
Why the evidence is weak
The reasons are structural. Different sensory interventions apply different theoretical constructs, focus on different goals, use different sensory modalities, and involve different procedures. A sensory diet, an ASI session, a weighted blanket, and a Snoezelen room have almost nothing in common except the word "sensory." Meta-analysis across such varied interventions is nearly meaningless.
Sensory processing varies enormously between individuals. The right intervention for one person may be useless or harmful for another. RCTs, which assume homogeneous groups responding predictably to standardised doses, are a poor fit for highly personalised interventions. The quality that makes good sensory practice effective (individualisation) makes it hard to study.
What one therapist calls "sensory integration" may look completely different from what another delivers. Intervention protocols vary wildly in dose and delivery: 18 sessions over 6 weeks versus 30 sessions over 10 weeks, with different activities, settings, and therapist training. Until fidelity measures were introduced (around 2011 for ASI), many studies were evaluating interventions that existed in name only.
What does "better sensory processing" mean? Self-regulation? Behaviour change? Subjective comfort? Functional participation? Different studies use different metrics, making comparison impossible. Goal Attainment Scaling captures individual progress but lacks standardisation for meta-analysis.
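For readers unfamiliar with how Goal Attainment Scaling quantifies individual progress: each goal is rated on a -2 to +2 scale (0 = expected outcome), and ratings are commonly combined into the Kiresuk-Sherman T-score, which is normed so that meeting every goal exactly as expected yields 50. A minimal sketch (the function name and example ratings are illustrative, not from any cited study):

```python
import math

def gas_t_score(scores, weights=None, rho=0.3):
    """Kiresuk-Sherman Goal Attainment Scaling T-score.

    scores:  per-goal attainment on the -2..+2 scale (0 = expected outcome)
    weights: relative importance of each goal (defaults to equal weighting)
    rho:     assumed inter-goal correlation; 0.3 is the conventional value
    """
    if weights is None:
        weights = [1] * len(scores)
    numerator = 10 * sum(w * x for w, x in zip(weights, scores))
    denominator = math.sqrt(
        (1 - rho) * sum(w * w for w in weights) + rho * sum(weights) ** 2
    )
    return 50 + numerator / denominator

# Meeting every goal exactly as expected (all zeros) gives the reference score.
print(gas_t_score([0, 0, 0]))  # 50.0
# Exceeding expectations on all goals pushes the score above 50.
print(gas_t_score([1, 1, 1]) > 50)  # True
```

The point of the hand-off to a T-score is comparability across people whose goals differ, which is exactly why GAS is attractive for individualised interventions and awkward for meta-analysis: the goals behind two identical scores can be entirely different.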
How do you blind a child or therapist to whether they're receiving sensory therapy? You can't, in most cases. This creates unavoidable bias: therapists who believe in the intervention may unconsciously prompt improvement, and participants who know they're receiving treatment may report benefits.
The deeper tension
A genuine philosophical tension exists between person-centred practice and evidence-based practice. Evidence-based practice prioritises treatments validated through population-level research. Person-centred practice prioritises treatments tailored to individuals. When an intervention works because it is different for each person, population-level validation is inherently difficult.
This does not mean evidence doesn't matter. It means the type of evidence available for sensory interventions will often be case studies, single-case designs, practice-based evidence, and practitioner consensus rather than large RCTs. This is not inferior evidence; it is appropriate evidence for the intervention type being studied. But it means claims should be proportionate.
Open questions
Can sensory interventions be studied rigorously without destroying the individualisation that makes them work? Some researchers argue for pragmatic trials, single-case experimental designs, and n-of-1 studies as alternatives to traditional RCTs. These approaches preserve individualisation while providing controlled evidence. The field is slowly moving in this direction.
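To make single-case designs concrete: one simple (if crude) effect metric used in this literature is Percentage of Non-overlapping Data (PND), the share of intervention-phase observations that exceed the best baseline observation. A minimal sketch with hypothetical weekly ratings (the data are invented for illustration):

```python
def pnd(baseline, treatment, higher_is_better=True):
    """Percentage of Non-overlapping Data for a single-case A-B comparison.

    Returns the percentage of treatment-phase observations that beat the
    best observation seen during baseline. 100% means complete nonoverlap.
    """
    if higher_is_better:
        best_baseline = max(baseline)
        hits = sum(1 for x in treatment if x > best_baseline)
    else:
        best_baseline = min(baseline)
        hits = sum(1 for x in treatment if x < best_baseline)
    return 100 * hits / len(treatment)

# Hypothetical self-regulation ratings: 4 baseline weeks, 5 intervention weeks.
print(pnd([3, 4, 3, 4], [5, 6, 4, 6, 7]))  # 80.0
```

Metrics like this preserve the n-of-1 logic the paragraph describes: the person serves as their own control, so individualisation is not averaged away, yet the result is still a number that can be reported and aggregated cautiously across cases.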
Implications for practice
Use sensory interventions based on the best available evidence, individualised to the person you're supporting, with clear goals and systematic tracking. If something isn't working after a reasonable trial, change it. If something is working, document how and why: your observations contribute to the evidence base.
When you encounter a sensory intervention claiming to be "evidence-based," ask what that means. How many studies? What kind of participants? What was measured? How long was the follow-up? "Evidence-based" is not binary; it is a spectrum, and most sensory interventions fall somewhere in the uncertain middle.
Key sources
- AHRQ systematic review on interventions targeting sensory challenges in autism (2017)
- AJOT systematic reviews of Ayres Sensory Integration (2025)
- Frontiers in Integrative Neuroscience review of sensory integration/processing treatment (2020)