Today’s post is the third and final one in a series offering guidance to teachers on how to interpret education research.
Who Cares About Effect Sizes?
Cara Jackson currently serves as the president of the Association for Education Finance & Policy. She previously taught in the New York City public schools and conducted program evaluations for the Montgomery County public schools in Maryland:
Education leaders need to know how much to expect a program or practice to improve student outcomes. Such information can inform decisions about what programs to invest in and which ones to stop, saving teachers’ time and energy for programs with the most potential.
In this post, I discuss what “effect sizes” are, why effect sizes from well-designed studies are not the same as correlational evidence, and why that matters.
What is an “effect size,” and how is it measured?
An effect size is a standardized measure of how large a difference or relationship between groups is. Researchers typically express that difference in standard deviation units. While researchers may translate the standard deviation units into “days of school” or “months of learning” for a practitioner audience, research suggests such translations can lead to misinterpretations and improbable conclusions.
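As a rough illustration of what the standard deviation metric captures, here is a minimal sketch using made-up test scores rather than data from any real study: the effect size is the difference between the two groups’ average scores, divided by the pooled standard deviation of those scores.

```python
# A hypothetical illustration of computing a standardized effect size:
# the difference in group means divided by the pooled standard deviation.
# All scores below are made up for demonstration purposes only.
import math

treatment_scores = [520, 540, 515, 560, 535]    # hypothetical post-test scores
comparison_scores = [510, 525, 505, 545, 520]   # hypothetical post-test scores

def mean(xs):
    return sum(xs) / len(xs)

def pooled_sd(a, b):
    # Pooled sample standard deviation across both groups
    var_a = sum((x - mean(a)) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean(b)) ** 2 for x in b) / (len(b) - 1)
    pooled_var = ((len(a) - 1) * var_a + (len(b) - 1) * var_b) / (len(a) + len(b) - 2)
    return math.sqrt(pooled_var)

effect_size = (mean(treatment_scores) - mean(comparison_scores)) / pooled_sd(
    treatment_scores, comparison_scores
)
print(f"Effect size: {effect_size:.2f} standard deviations")  # about 0.78 here
```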
Such translations can also be manipulated: an effect that is small in standard deviation units might be presented in days, weeks, or months of learning to make an intervention look better than it is.
One study reported that compared with traditional public school students, charter school students’ performance is equivalent to 16 additional days of learning in reading and six days in math. But as pointed out by Tom Loveless, these are quite small differences when expressed in standard deviation units.
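To see how such a translation can flatter a small effect, consider a back-of-the-envelope sketch; the numbers and the conversion factor below are hypothetical, not those used in the charter study or in Loveless’s critique.

```python
# A hypothetical back-of-the-envelope translation of a small effect into
# "days of learning." The conversion factor is invented for illustration;
# it is not the factor used in the charter study or any other real study.

effect_size_sd = 0.03            # a small effect, in standard deviation units
school_year_days = 180           # length of a typical school year
annual_growth_sd = 0.30          # hypothetical: one year of learning ~ 0.30 SD

days_of_learning = effect_size_sd / annual_growth_sd * school_year_days
print(f"{effect_size_sd} SD reported as ~{days_of_learning:.0f} extra days of learning")
# An effect of only 0.03 SD comes out to roughly 18 "extra days" of learning,
# which sounds far more impressive than the standard deviation metric suggests.
```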
For that reason, I focus here on interpreting the standard deviation metric. If you see an effect size presented in “days of school” or “months of learning,” be aware that this could be misleading.
Why does “correlation, not causation” matter for effect sizes?
In studies designed to identify the causal effect of a program, effect sizes as low as 0.10 standard deviations are considered large (Kraft, 2019). This may come as a surprise to fans of Hattie’s Visible Learning, which argues that the “zone of desired effects” is 0.40 and above. But that benchmark is based on making no distinction between correlation and causation.
As noted in the previous post, the correlation between some program or practice and student outcomes can reflect a lot of different factors other than the impact of the program, such as student motivation. If we want to know whether the program causes a student outcome, we need a comparison group that:
- hasn’t yet received the program, and
- is similar to the group of students receiving the program.
The similarity of the groups matters because any differences between them offer an alternative explanation for the relationship between the program and student outcomes. For example, we would want both groups to have similar levels of academic motivation, because differences in motivation could explain differences in outcomes. Correlational studies can control for some characteristics of students that we can observe and measure, but they do not rule out all alternative explanations.
The R3I Method for reading a research paper recommends looking for certain keywords in the methods section to distinguish between correlation and causation. In studies designed to make causal inferences, the methods section will likely mention one or more of the following words: experiment, randomized controlled trial, random assignment, or quasi-experimental.
Look for a table that describes the students who receive the program and students not receiving the program. Particularly if the study is quasi-experimental, it’s important to know whether students are similar prior to participating in the program. For example, a study of a program implemented with 4th grade students might use 3rd grade standardized-test scores to assess whether the groups are similar. This helps rule out alternative explanations for the findings.
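As a rough sketch of what that baseline check amounts to, with invented scores rather than data from any actual study, one can simply compare the two groups’ average prior-year scores:

```python
# A hypothetical sketch of a baseline-equivalence check: comparing prior
# (3rd grade) test scores for program and comparison students.
# All scores and group labels are invented for illustration.

program_grade3_scores = [410, 425, 398, 440, 415, 430]
comparison_grade3_scores = [405, 420, 402, 435, 418, 428]

def mean(xs):
    return sum(xs) / len(xs)

baseline_gap = mean(program_grade3_scores) - mean(comparison_grade3_scores)
print(f"Program group baseline mean:    {mean(program_grade3_scores):.1f}")
print(f"Comparison group baseline mean: {mean(comparison_grade3_scores):.1f}")
print(f"Baseline gap:                   {baseline_gap:.1f} points")
# A large baseline gap would suggest the groups were not similar before the
# program began, which weakens any causal claim about later differences.
```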
In “The Princess Bride,” Inigo Montoya says, “You keep using that word. I do not think it means what you think it means.” While effect sizes are influenced by many factors, distinguishing between correlation and causation is fundamental to a shared understanding of the meaning of the word “effect.” And that meaning has implications for effect-size benchmarks.

Why do effect-size benchmarks matter?
It’s not that I simply dislike effect sizes larger than 1.0. As noted by past contributors to EdWeek, “Holding educational research to greater standards of evidence will very likely mean the effect sizes that are reported will be smaller. But they will reflect reality.”
Confusing correlation with causation may lead decisionmakers to have unrealistic expectations for how much improvement a program can produce. These unrealistic expectations could leave educators disappointed and pessimistic about the potential for improvement. Education leaders may decline to adopt, or may discontinue, programs with solid evidence of effectiveness because they perceive the potential improvement as too small.
Key takeaways
Questionable translations of research findings and presenting correlations as “effects” can mislead people about whether a program causes an impact on student outcomes. Here are three things to look for in different sections of a study.
- Methods: Does the study include a comparison group of students who did not receive the program or practice?
- Findings: Does the study describe the groups in the study and whether they looked similar prior to the program or practice being implemented?
- Results or technical appendix: Does the study include the effect size in standard deviation units?

Thanks to Cara for contributing her thoughts!
Consider contributing a question to be answered in a future post. You can send one to me at lferlazzo@epe.org. When you send it in, let me know if I can use your real name if it’s selected or if you’d prefer remaining anonymous and have a pseudonym in mind.
You can also contact me on Twitter at @Larryferlazzo.
Just a reminder: you can subscribe and receive updates from this blog via email. And if you missed any of the highlights from the first 13 years of this blog, you can see a categorized list here.