IES Director Provides Update on the IES Version of LEED Standards

Reprinted from Message from IES Director

A while ago, I wrote to say that IES was working on developing research quality standards inspired by the famous LEED building standards. Our goal in developing these standards is to identify the domains and the core questions that, from the IES perspective, constitute the most important dimensions of high quality education research.

Paralleling LEED procedures, we intend to create tiers of excellence (Platinum, Gold, etc.). These award levels will appear on the What Works Clearinghouse. It is important to remember that these levels will be in addition to the evidence standards that make the WWC so crucially important.

Just as LEED pushes developers to build structures that meet high environmental standards in construction, our goal is to incentivize researchers to meet higher standards of education research.

Please remember that this is a work in progress. We have not finalized the list of key domains and core questions, and we certainly have not developed the rubrics needed to set award levels. That will be a multi-year process, but one that we have already begun. As noted in my earlier posting, these domains will inform our RFAs. Indeed, starting next year, some of the domains and questions noted below will be requirements and others will, at least for now, be recommendations.

I asked for suggested names for IES standards. My staff immediately rejected my attempt to call it the Math And Reading Knowledge awards (aka the MARK standard). Fiona Hollands of Teachers College suggested Standards for Excellence in Education Research (SEER)—and that is the winning suggestion.

Below are the core domains and some of the related questions in each domain that we are working on.

SEER Domains and Questions 

Register Studies. Did the researcher execute the research and analysis activities as originally proposed in a recognized study registry? Did the registration describe key elements of the study protocol, including a limited number of primary outcomes? Are any deviations from those plans clearly documented and their rationale explained?

Focus on Meaningful Outcomes. Does the intervention affect learning, achievement, or attainment outcomes that are broadly understood as important to student success? Are the gains large enough to matter? Do these gains persist over time? Are potential differences in impact by key student characteristics explored?

Identify Core Components. What are the core components of the intervention? Is there a clear description of how core components of the intervention are meant to affect outcomes? Do the results of the research help us understand which of those components may be most important in achieving impact? 

Analyze Cost. Did the researcher measure the cost of core components of the intervention relative to the control or comparison condition? Does that measure include all resources that might be needed to achieve similar impacts? 

Support Scaling Up. If an efficacy trial yielded positive effects, does the study consider the transferability of its findings to other settings or their generalizability to other populations of interest? If an effectiveness trial, are important new settings and populations included and variation in impacts examined? Among interventions where evidence of effectiveness is found, are opportunities for scaling the approach, including commercialization, discussed?

Document Implementation. Were data on the implementation of the intervention collected? Was the relationship between variation in implementation and impact explored and discussed? Were key hurdles and needed resources identified? How hard will it be to implement the intervention in other venues?

We will keep you abreast of this effort as it continues. And, as always, we welcome comments.

Mark Schneider