Do wellness programs work? A unique study does much more than offer a possible answer

If you’re one of our regular readers, I’ll bet your ears echo from hearing us bang these drums over and over:

  • When OBSERVATIONAL STUDIES find an association between two things it does NOT mean one thing CAUSED the other thing to happen.
  • The gold standard of evidence-based medicine is confirming or refuting findings via multiple RANDOMIZED CONTROLLED TRIALS (RCTs).

Finding examples illustrating the first bullet point is like falling off a log for us. Here are a few misleading headlines from observational studies published within just the past few weeks (click on the links to read our critique):

Hard-working women, go home early to avoid this disease (CNN)

Can smartphones trigger ADHD symptoms in teens? (HealthDay)

Motherhood may affect Alzheimer’s risk, study shows (NBC News)

Not only do observational studies lend themselves to exaggerated headlines like these that can mislead readers, they can also lead those who should know better to promote glowing outcomes that just happen to align with their agendas. Outcomes that, when tested more rigorously via RCTs, don’t stand up.

A study smackdown: Observational vs RCT

A clear and clever illustration of this is highlighted in a provocative article written by pediatrician Aaron Carroll, MD, and published in the New York Times earlier this week:

The Ineffectiveness of Employer Wellness Programs, and the Importance of Randomized Trials

In the article Carroll tells us about a unique study published earlier this summer looking into the effectiveness of the wellness program at the University of Illinois at Urbana-Champaign. As Carroll points out, most evaluations of wellness programs are observational by design. They’re quite prone to selection bias. So when results suggest participation in such programs leads to healthier outcomes, it’s hard to tease out if those benefits are — as one of his sources points out — “due to differences in the people rather than differences from the [wellness] program.”

But the Illinois Workplace Wellness Study did something clever: It evaluated the program using both an observational AND a randomized controlled study approach.

The findings?

In a nutshell, every time the observational analysis suggested a benefit from participating in the program, the RCT analysis found none; in other words, no cause-and-effect benefit. Here are some of those findings:

Gym attendance: The observational approach suggested participants went to the gym far more often than nonparticipants (7.4 vs. 3.8 days/yr) … The RCT findings? 5.8 and 5.9 days/yr, respectively.

Healthcare spending: The observational approach suggested participants spent much less than nonparticipants ($525 vs. $657) … The RCT findings? $576 vs. $568.
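To make the selection-bias point concrete, here’s a minimal simulation sketch in Python. The numbers are entirely hypothetical (not the study’s data): even when a program has zero true effect, letting people self-select into it can make participants look healthier, while randomizing the offer erases that gap.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: a latent "health motivation" score drives BOTH
# gym attendance and the decision to join a voluntary wellness program.
# The program itself has NO effect on gym attendance in this sketch.
motivation = rng.normal(0, 1, n)
gym_days = np.clip(5 + 2 * motivation + rng.normal(0, 2, n), 0, None)

# Observational design: employees self-select into the program,
# and more-motivated employees are more likely to opt in (selection bias).
joins = rng.random(n) < 1 / (1 + np.exp(-motivation))
print("Observational:",
      round(gym_days[joins].mean(), 1), "vs",
      round(gym_days[~joins].mean(), 1), "days/yr")

# RCT design: the offer is randomized, independent of motivation.
assigned = rng.random(n) < 0.5
print("RCT:",
      round(gym_days[assigned].mean(), 1), "vs",
      round(gym_days[~assigned].mean(), 1), "days/yr")
```

In this toy example the observational comparison shows a sizable gap between “participants” and “nonparticipants” purely because of who chooses to join, while the randomized comparison shows essentially none — the same pattern the Illinois study found with its real data.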

Why we care about this

It’s not hard to imagine a journalist (or a university public relations writer) taking these observational findings and writing an article with headlines like these:

‘Wellness program participants exercise nearly twice as much as non-participants’

‘Need to save money? Join a wellness program’

Of course, these are hypothetical, and you might think: so what?

At issue is that nearly every day of every week, for 12 years, we’ve come across headlines such as these that seduce readers with misinformation based on observational findings that, had they been subject to a more rigorous study approach, very likely would have been shown to be inaccurate [see our tips for writing better health headlines].


That’s the elegance and beauty of the Illinois Workplace Wellness Study and why we give kudos to Carroll for writing about it, and why we wish more mainstream news outlets had covered it. It’s an important study.

And we were glad to see Carroll bring up these two (paraphrased) caveats:

  1. Not everything can be studied with RCTs. Sometimes the interventions aren’t ethically or financially feasible. Or the study effects won’t be realized for years or decades.
  2. Observational studies still have value. But they need to be approached skeptically with special attention to selection bias and confounding factors [see our primer on this].

A final thought worth bearing in mind is that, not infrequently, when observational data support an institution’s agenda they are selectively highlighted, but when they do not they’re often selectively downplayed [here’s an example].

This is a choice made far upstream at the source of much of our health care news. If journalists choose to report such imbalanced results without careful scrutiny (as in the three very real headlines listed above), it all but ensures that polluted health care information will be more widely disseminated.

That’s why we’d be well served to always keep the Illinois Workplace Wellness Study in mind.

Just as we’d be well served to wait and see if further RCTs support or refute its findings.


Over 50 million Americans are enrolled in wellness programs. If you’d like to learn more, check out our podcast from last year:

Wellness Programs — Do they work?






