Opinion Blog

Classroom Q&A

With Larry Ferlazzo

In this EdWeek blog, an experiment in knowledge-gathering, Ferlazzo will address readers’ questions on classroom management, ELL instruction, lesson planning, and other issues facing teachers. Send your questions to lferlazzo@epe.org. Read more from this blog.

Correlation? Causation? Effect Sizes? What Should a Teacher Trust?

By Larry Ferlazzo — June 10, 2025 5 min read

Today’s post is the third, and final, one in a series providing guidance to teachers on how to interpret education research.

Who Cares About Effect Sizes?

Cara Jackson currently serves as the president of the Association for Education Finance & Policy. She previously taught in the New York City public schools and conducted program evaluations for the Montgomery County public schools in Maryland:

Education leaders need to know how much to expect a program or practice to improve student outcomes. Such information can inform decisions about what programs to invest in and which ones to stop, saving teachers’ time and energy for programs with the most potential.

In this post, I discuss what “effect sizes” are, why effect sizes from well-designed causal studies are not comparable to effect sizes drawn from correlational evidence, and why that matters.

What is an “effect size,” and how is it measured?

An effect size is a standardized measure of how large a difference or relationship between groups is, expressed in standard deviation units. While researchers may translate standard deviation units into “days of school” or “months of learning” for practitioner audiences, research suggests these translations can lead to erroneous interpretations and improbable conclusions.
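
To make “standard deviation units” concrete, here is a minimal sketch in Python of the most common effect-size calculation, Cohen’s d: the difference in group means divided by a pooled standard deviation. The scores below are invented for illustration.

```python
import math
import statistics

# Hypothetical end-of-year test scores for two groups of students.
treatment = [78, 85, 91, 74, 88, 82, 79, 90, 86, 83]
comparison = [75, 80, 88, 70, 84, 79, 77, 85, 81, 78]

# Difference in group means, in raw score points.
mean_diff = statistics.mean(treatment) - statistics.mean(comparison)

# Pooled standard deviation (equal-group-size shortcut).
pooled_sd = math.sqrt(
    (statistics.variance(treatment) + statistics.variance(comparison)) / 2
)

# Cohen's d: the mean difference expressed in standard deviation units.
print(f"Effect size: {mean_diff / pooled_sd:.2f} SD")
```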

These translations can also be manipulated: an effect that is small in standard deviation units might be presented in days, weeks, or months of learning to make an intervention look more impressive than it is.

One study reported that compared with traditional public school students, charter school students’ performance is equivalent to 16 additional days of learning in reading and six days in math. But, as Tom Loveless pointed out, these differences are quite small when expressed in standard deviation units.

For that reason, I focus here on interpreting the standard deviation metric. If you see an effect size presented in “days of school” or “months of learning,” be aware that this could be misleading.
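
Here is a minimal sketch of how the arithmetic of such a translation can flatter a small effect. The conversion factor is a purely hypothetical assumption for illustration; published conversion factors vary by grade and subject.

```python
# ASSUMPTION for illustration only: one 180-day school year of typical
# academic growth corresponds to 0.30 standard deviations.
SD_PER_SCHOOL_YEAR = 0.30
DAYS_PER_SCHOOL_YEAR = 180

effect_in_sd = 0.027  # a very small effect in standard deviation units

days = effect_in_sd / SD_PER_SCHOOL_YEAR * DAYS_PER_SCHOOL_YEAR
print(f"{effect_in_sd:.3f} SD -> about {days:.0f} 'days of learning'")
# "16 extra days of learning" sounds substantial; 0.027 SD does not.
```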

Why does “correlation, not causation” matter for effect sizes?

In studies designed to identify the causal effect of a program, effect sizes as low as 0.10 standard deviations are considered large (Kraft, 2019). This may come as a surprise to fans of Hattie’s Visible Learning, which argues that the “zone of desired effects” is 0.40 and above. But that benchmark rests on analyses that make no distinction between correlation and causation.

As noted in the previous post, the correlation between a program or practice and student outcomes can reflect many factors other than the program’s impact, such as student motivation. If we want to know whether the program causes a student outcome, we need a comparison group that:

  1. hasn’t yet received the program, and
  2. is similar to the group of students receiving the program.

The similarity of the groups matters because any difference between them offers an alternative explanation for the relationship between the program and student outcomes. For example, we would want both groups to have similar levels of academic motivation, because differences in motivation could explain differences in outcomes. Correlational studies can control for some student characteristics that we can observe and measure, but they cannot rule out all alternative explanations.
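
To see why, here is a minimal simulation sketch, with invented numbers: the “program” below has zero true effect by construction, yet a naive comparison of enrolled and non-enrolled students shows a sizable gap, because motivation drives both enrollment and outcomes.

```python
import math
import random
import statistics

random.seed(0)

# Simulate 10,000 students. By construction, the program does NOTHING:
# outcomes depend only on motivation and noise, never on enrollment.
records = []
for _ in range(10_000):
    motivation = random.gauss(0, 1)
    # More motivated students are more likely to enroll (logistic selection).
    enrolled = random.random() < 1 / (1 + math.exp(-2 * motivation))
    outcome = motivation + random.gauss(0, 1)
    records.append((enrolled, outcome))

enrolled_scores = [y for e, y in records if e]
other_scores = [y for e, y in records if not e]
overall_sd = statistics.stdev(y for _, y in records)

naive_gap = statistics.mean(enrolled_scores) - statistics.mean(other_scores)
print(f"Naive 'effect' of a do-nothing program: {naive_gap / overall_sd:.2f} SD")
```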

The R3I Method for reading a research paper recommends looking for certain keywords in the methods section to distinguish between correlation and causation. In studies designed to make causal inferences, the methods section will likely mention one or more of the following words: experiment, randomized controlled trial, random assignment, or quasi-experimental.

Look for a table that describes the students who receive the program and students not receiving the program. Particularly if the study is quasi-experimental, it’s important to know whether students are similar prior to participating in the program. For example, a study of a program implemented with 4th grade students might use 3rd grade standardized-test scores to assess whether the groups are similar. This helps rule out alternative explanations for the findings.
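
When a study reports that table, you can gauge similarity yourself with a standardized mean difference on the pre-program scores. Here is a minimal sketch with invented scores; the 0.25 SD threshold in the comment echoes the What Works Clearinghouse’s baseline-equivalence rule of thumb.

```python
import math
import statistics

def standardized_mean_difference(group_a, group_b):
    """Baseline gap between two groups, in pooled SD units."""
    pooled_sd = math.sqrt(
        (statistics.variance(group_a) + statistics.variance(group_b)) / 2
    )
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical 3rd grade scores, recorded BEFORE the 4th grade program.
program_group = [410, 425, 398, 440, 415, 432, 408, 420]
comparison_group = [405, 430, 395, 435, 412, 428, 402, 418]

gap = standardized_mean_difference(program_group, comparison_group)
print(f"Baseline gap: {gap:.2f} SD")
# A gap above roughly 0.25 SD suggests the groups may be too different
# for a credible comparison (cf. the What Works Clearinghouse standard).
```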

In “The Princess Bride,” Inigo Montoya says, “You keep using that word. I do not think it means what you think it means.” While effect sizes are influenced by many factors, distinguishing between correlation and causation is fundamental to a shared understanding of the meaning of the word “effect.” And that meaning has implications for effect-size benchmarks.

Why do effect-size benchmarks matter?

It’s not that I simply dislike effect sizes larger than 1.0; the trouble is that such large estimates rarely survive more rigorous designs. As noted by past contributors to EdWeek, “Holding educational research to greater standards of evidence will very likely mean the effect sizes that are reported will be smaller. But they will reflect reality.”

Confusing correlation and causation may lead decisionmakers to have unrealistic expectations for how much improvement a program can produce. These unrealistic expectations could leave educators disappointed and pessimistic about the potential for improvement. Education leaders may avoid implementing programs or stop programs with solid evidence of effectiveness because they perceive the potential improvement as too small.

Key takeaways

Questionable translations of research findings and presenting correlations as “effects” can mislead people about whether a program actually improves student outcomes. Here are three things to look for in different sections of a study.

  • Methods: Does the study include a comparison group of students who did not receive the program or practice?
  • Findings: Does the study describe the groups in the study and whether they looked similar prior to the program or practice being implemented?
  • Results or technical appendix: Does the study include the effect size in standard deviation units?

Thanks to Cara for contributing her thoughts!

Consider contributing a question to be answered in a future post. You can send one to me at lferlazzo@epe.org. When you send it in, let me know if I can use your real name if it’s selected or if you’d prefer to remain anonymous and have a pseudonym in mind.

You can also contact me on Twitter at @Larryferlazzo.

Just a reminder: you can subscribe and receive updates from this blog via email. And if you missed any of the highlights from the first 13 years of this blog, you can see a categorized list here.

The opinions expressed in Classroom Q&A With Larry Ferlazzo are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
