I recently wrote a thread on Twitter that, at least by my standards, went pretty viral. In the thread I discuss the topic of effect sizes. Specifically, I look at how many hours you need to spend on Facebook before it actually affects your well-being. Some people said they’d like to cite it, which is why I’m now also posting it here.
Some studies find small but statistically significant relationships between media use and mental health. For example, the meta-analysis by Huang (2017) finds a correlation between the use of social networking sites (SNS) and well-being of r = -.07.
Now, what does this mean? Let us assume that this correlation indeed represents a causal effect. Statistically speaking, this means that if SNS use increases by 1 standard deviation (SD), well-being decreases by .07 SDs. So far so good. But what does this *really* mean?
To find out, we should now be looking for the smallest effect size of interest (SESOI). (For a great illustration, see https://doi.org/10.1177/2515245918770963 by @lakens, @annemscheel, @peder_isager). In other words, what constitutes a meaningful difference in well-being?
Potential answer: Norman et al. (2003) found that for most health-related factors (e.g., depression), most people consider a change of 0.5 SD to be meaningful. In other words, if a therapy decreases depression by half a standard deviation, people will feel the difference.
So, what does this mean for the effects of SNS on well-being? In a representative German study of 1,435 SNS users, we found an average daily use of M = 51 minutes, with a standard deviation of SD = 42 minutes.
Let us do the math: For well-being to change by 0.5 SDs, SNS use needs to increase by 7.14 SDs (0.5 / 0.07 = 7.14). 7.14 SDs of SNS use equal roughly 300 minutes (42 * 7.14 = 299.88). In other words, to perceive a difference, people would need to change their SNS use by 5 hours a day.
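For anyone who wants to check or adapt the numbers, here is the same back-of-the-envelope calculation as a small Python snippet. It simply plugs in the values from above (the variable names are my own choices, not from any of the cited papers) and assumes, as in the thread, that the correlation reflects a causal effect in standardized units.

```python
# Back-of-the-envelope SESOI calculation, assuming r = -.07 is causal.

r = 0.07           # |correlation| between SNS use and well-being (Huang, 2017)
sesoi_sd = 0.5     # smallest effect size of interest, in SDs (Norman et al., 2003)
sd_minutes = 42    # SD of daily SNS use in minutes (representative German sample)

# How many SDs of SNS use are needed to shift well-being by the SESOI?
sds_needed = sesoi_sd / r                  # 0.5 / 0.07 ≈ 7.14
minutes_needed = sds_needed * sd_minutes   # 7.14 * 42 ≈ 300 minutes

print(round(sds_needed, 2), round(minutes_needed))  # 7.14 300
```

Swap in the SD of daily use from your own sample to see what the same logic implies for a different population.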
If this sounds like a lot to you, I think you’re not alone. I personally would also expect that a smaller change in SNS time should already affect well-being. Nonetheless, this doesn’t change one thing:
We as researchers should care about what our statistically significant results actually mean in the world. Which is hard. But oftentimes, bold claims based on nothing more than p-values and small effect sizes just don’t hold up very well. Less is more.