Bielefeldt, T. (2003). Commentary: Response to Lederman’s “What Works.” Contemporary Issues in Technology and Teacher Education [Online serial], 3(1). https://citejournal.org/volume-3/issue-1-03/editorial/commentary-response-to-ledermans-what-works

Commentary: Response to Lederman’s “What Works”

by Talbot Bielefeldt, Center for Applied Research in Educational Technology, International Society for Technology in Education

Norm Lederman’s (2003) editorial, “What Works: Commentary on the Nature of Scientific Research,” makes two points: that different research questions call for different methodologies, and that the current emphasis on defining “what works” through randomized experiments, while important for making causal inferences about certain interventions, is not “what works” for all research issues.

I would like to raise an additional concern; namely, that even within the realm of experimentation, qualitative observation and implementation studies are essential to establishing the educational significance (as opposed to statistical significance) of findings (Cradler, in press).

I am the program administrator for the Center for Applied Research in Educational Technology (CARET), a Bill & Melinda Gates Foundation-funded project of the International Society for Technology in Education (ISTE) and Educational Support Systems (ESS). CARET provides a web site (http://caret.iste.org) of reviewed research studies and short literature reviews focused on critical questions in educational technology.

When CARET began in 2000, ISTE, ESS, and the Gates Foundation were well aware of the issues surrounding evidence for the effectiveness of educational technology. In 1999 the Foundation had commissioned a comprehensive literature review (Fouts, 2000) that turned up relatively few experimental studies. Within CARET’s own collection, formal research makes up less than 9% of our reading list.

However, CARET’s mission is to support educational decision making, and our experience has been that a lot of educational decisions have to be made in the short term on “best evidence available,” whether or not that evidence is experimental. This is particularly true in the case of emerging technologies, where clearly defined “treatments” may not yet exist. To help educators interpret the extent to which they can derive implications for action from any particular study, we spent most of the first year of the CARET grant hammering out a rubric for critiquing material in four general areas:

  • Position papers (that is, expert opinions)
  • Observational studies (including both qualitative case studies and quantitative surveys)
  • Formal evaluations (not a methodology in itself, but a focus on particular programs, often with a mixed-method approach)
  • Formal research studies (experimental and quasi-experimental)

In trying to determine the extent to which studies have implications for educational decisions, we have found challenges with all types of articles. In the case of experimental studies, the central question is whether the researchers have described what they studied in enough detail that educators can reasonably apply the findings to their own situations. Unfortunately, the answer is usually no—studies often fail to precisely define the conditions of the intervention, the control group, or both.

Thus we may know that computer-integrated approach X has a certain effect size compared to a control group, but unless we know exactly how X was delivered and what the control group did, it is difficult for us to predict whether X will have the same effect in comparison to any other particular educational option. And if the decision at hand hinges on cost and level of effort, we may also need to know how much professional development was required for each condition, what resources were required for full implementation, and what hurdles had to be overcome.

My point, then, is not that we shouldn’t give extra attention to experimental research designs. I believe we should. However, other approaches—including classroom observation and implementation studies—are necessary in order to produce research findings that can translate into action for school improvement.

References

Cradler, J. (in press). Making educational technology relevant. Learning & Leading With Technology.

Fouts, J. (2000). Research on computers and education: Past, present, and future. Seattle, WA: Bill & Melinda Gates Foundation. Retrieved March 3, 2003, from http://www.gatesfoundation.org/education/researchandevaluation/computerresearchsummary.pdf

Talbot Bielefeldt
Center for Applied Research in Educational Technology
International Society for Technology in Education
[email protected]
