Why Mental Health Breakthroughs Keep Disappointing

Early psychiatric trials almost always look better than later ones. Here's a three-step guide to reading the evidence before the hype sets in.

Published by – Sevs Armando

Why Promising Mental Health Treatments Keep Disappointing: A Guide to Reading the Evidence

In the early 1990s, a class of drugs called SSRIs rewrote psychiatry almost overnight. Prozac, approved by the FDA in 1987, was on the cover of Newsweek by 1990. Psychiatrists, patients, and the media treated it as a decisive break from the dark ages of antidepressant treatment. Decades later, researchers found the original trial data told a more complicated story. Many negative trials had never been published. The effect size, when all trials were pooled, was real but modest. The revolution had been partly editorial.

That pattern repeats. Every decade or so, a new class of mental health treatment arrives with extraordinary early results and cultural momentum. The results cool. The conversation moves on.

Psychedelics are living through that cycle right now.


The Excitement Trap: Why Early Mental Health Research Always Looks Better Than It Is

Call it the Pilot Study Illusion: the reliable tendency for early, small trials in psychiatry to show dramatic results that later, larger trials fail to replicate. It's not fraud. It's the predictable output of how research funding, publication incentives, and clinical hope interact.

Small trials are underpowered. When a sample is small, random variation matters more. A group of 20 patients who happen to respond well can produce a striking result. Journals are more likely to publish positive findings than null ones, a well-documented pattern called publication bias, which the BMJ has covered extensively. Researchers who believe in a treatment work harder to retain patients, design supportive environments, and interpret ambiguous results charitably. None of that is dishonest. All of it inflates the apparent effect.
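To make the sampling-noise point concrete, here's a small simulation sketch. All numbers are illustrative assumptions, not data from any trial: it posits a modest true effect (0.3 standard deviations) and shows how much the *observed* effect can swing in a 20-patient trial compared with a 200-patient one.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.3  # assumed modest true effect, in standard-deviation units

def observed_effect(n):
    """Simulate one two-arm trial with n patients per arm; return the observed effect."""
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

def spread(n, trials=2000):
    """Run many simulated trials; return the central 95% range of observed effects."""
    effects = sorted(observed_effect(n) for _ in range(trials))
    return effects[int(0.025 * trials)], effects[int(0.975 * trials)]

for n in (20, 200):
    lo, hi = spread(n)
    print(f"n={n:>3} per arm: 95% of simulated trials observe an effect between {lo:+.2f} and {hi:+.2f}")
```

With 20 patients per arm, chance alone regularly produces effects two or three times the true one, and sometimes effects in the wrong direction; with 200, the observed effect stays close to reality. That spread, filtered through publication bias, is the Pilot Study Illusion in miniature.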

The psychedelic literature is a textbook case. For his 2026 meta-analysis in JAMA Psychiatry, Balázs Szigeti at UCSF screened nearly 600 studies and reviewed 24 open-label trials before arriving at a usable comparison. The eight psychedelic studies that made the cut involved a combined total of 249 patients; the antidepressant trials involved 7,921. The disparity in scale alone should recalibrate how the earlier excitement gets read.

A Three-Step Framework for Reading Mental Health Research Without Getting Burned

Step 1: Check the sample size before reading the result. A trial with fewer than 100 patients is hypothesis-generating, not practice-changing. It tells you a treatment is worth studying further. It does not tell you the treatment works. When a headline announces a "breakthrough" in depression treatment, the first number to find is the N. If it's below 100, the correct response is cautious interest, not changed behavior or changed prescriptions.
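The 100-patient rule of thumb has a statistical basis. A standard back-of-the-envelope power calculation (sketched below with the usual normal approximation; the effect sizes are illustrative, not taken from any specific trial) shows how many patients a two-arm trial needs to reliably detect an effect of a given size:

```python
from math import ceil

Z_ALPHA = 1.96  # two-sided significance level of 0.05
Z_BETA = 0.84   # 80% power

def n_per_arm(effect_size):
    """Approximate patients needed per arm, normal-approximation formula."""
    return ceil(2 * ((Z_ALPHA + Z_BETA) / effect_size) ** 2)

for d in (0.3, 0.5, 0.8):
    print(f"effect size d={d}: ~{n_per_arm(d)} patients per arm")
# d=0.3 (modest, SSRI-like): ~175 per arm
# d=0.8 (large): ~25 per arm
```

The asymmetry is the point: a trial with 25 patients per arm can only reliably detect effects much larger than anything antidepressant research has ever sustained. Small early trials are therefore built, structurally, to report either a huge effect or nothing.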

Step 2: Ask whether blinding was possible. Blinding is what separates a clean trial from an educated guess. When patients can tell whether they received the active treatment or the placebo, their expectations contaminate the result. Psychedelics fail this test completely: a hallucinogenic experience is unmistakable. Cannabis, stimulants, and many pain interventions face the same problem. Because expectation effects push results upward, a trial that cannot blind its participants will tend to overstate the true effect. A large effect in an unblinded trial therefore deserves more skepticism, not less.

Step 3: Distinguish between effect size and statistical significance. A result can be statistically significant and clinically meaningless. In the German psilocybin trial published in JAMA Psychiatry in March 2026, patients showed measurable improvement at six weeks. That improvement was not statistically significant compared to the placebo group. But even if it had been, two points on a depression scale may not translate into a person's daily life in any meaningful way. Ask both questions: is the result significant, and is the effect large enough to matter?
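The significance-versus-meaning distinction can be shown with a toy calculation. The numbers below are hypothetical, chosen only for illustration: a 2-point improvement on a depression scale with an assumed standard deviation of 12, in a very large trial. The p-value is vanishingly small; the standardized effect is tiny.

```python
from math import sqrt, erfc

def trial_summary(diff, sd, n_per_arm):
    """Two-sided p-value (normal approximation) and Cohen's d for a two-arm trial."""
    se = sd * sqrt(2 / n_per_arm)       # standard error of the group difference
    z = diff / se
    p = erfc(abs(z) / sqrt(2))          # two-sided p-value
    d = diff / sd                       # standardized effect size (Cohen's d)
    return p, d

# Hypothetical: 2-point improvement, scale SD of 12, 5,000 patients per arm
p, d = trial_summary(diff=2.0, sd=12.0, n_per_arm=5000)
print(f"p = {p:.2e}, Cohen's d = {d:.2f}")
```

With enough patients, almost any nonzero difference becomes statistically significant. Significance answers "is this probably not chance?"; effect size answers "is this big enough to notice?". Both questions need a yes before a treatment matters.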

Novelty Bias: Why New Treatments Feel Like Better Treatments

The psychological pattern driving most of the psychedelic hype has a name: novelty bias, the cognitive tendency to assign higher value to new or unfamiliar things simply because they're new. It's distinct from being optimistic about innovation. Novelty bias operates before the evidence arrives. The newness itself functions as a signal of quality.

In mental health specifically, novelty bias gets amplified by desperation. David Owens, emeritus professor of clinical psychiatry at the University of Edinburgh, has noted that psychiatry has seen little meaningful innovation in antidepressant treatment since SSRIs arrived roughly four decades ago. When a field has been waiting that long for something new, the first candidate that looks promising doesn't just get evaluated. It gets celebrated.

That celebration shapes coverage, funding, and clinical practice before the evidence is ready. Researchers who work with psychedelics report patients arriving with fixed expectations, sometimes drawn from social media or podcast interviews with researchers who themselves haven't seen the large-trial data. Those expectations don't just reflect the hype. They feed back into the trials via the placebo and nocebo mechanisms Szigeti's work describes.

Recognizing novelty bias doesn't mean dismissing new treatments. Psilocybin may still prove useful for specific patient groups, specific conditions, or specific delivery formats that haven't been tested yet. The evidence from current trials is preliminary in the technical sense: the sample sizes are too small and the designs too imperfect to draw firm conclusions. That doesn't mean the compound doesn't work. It means we don't know yet at the level of rigor required to change prescribing.

The correct response to preliminary evidence is continued research, not adoption or rejection. The thing worth watching is whether psychedelics can survive properly blinded, large-scale trials. A handful of those are currently underway. Their results will be more informative than everything published so far combined.

Hold the excitement until then. It'll be more useful.

This is exactly the kind of analysis we publish every week for The Science Impact subscribers, before it reaches mainstream news cycles. Subscribe free. Stay a step ahead.