This editorial (originally entitled “Never Cry Wolf”) has been reprinted by permission from the School Science and Mathematics Journal, 103(2), pp. 61-65, © 2003 by the School Science and Mathematics Association. All rights reserved. Lederman, N. G. (2003). What works: A commentary on the nature of scientific research. [Reprint]. Contemporary Issues in Technology and Teacher Education [Online serial], 3(1).

What Works: A Commentary on the Nature of Scientific Research.

by Norman G. Lederman, Illinois Institute of Technology

Editor’s Note:

The U.S. Department of Education has mandated that future educational programs should be based on scientifically based research. The No Child Left Behind web site concludes that scientifically based research:

  • Is not subject to fads and fashions.
  • Makes teaching more effective, productive, and efficient.
  • Is less subject to political correctness.


The Department of Education has established a What Works Clearinghouse to “provide educators, policymakers, and the public with a central, independent, and trusted source of scientific evidence of what works in education.” Draft standards defining “scientifically based research” have been posted, and feedback has been solicited.

This is a worthwhile discussion in any era. It is particularly relevant to the field of educational technology, which has the task of identifying emerging educational innovations that can enhance student learning. In the spirit of supporting and encouraging scholarly dialog on this topic, the editors of the CITE Journal commissioned an article by Norman Lederman addressing these issues.


Professor Lederman is a nationally recognized expert on scientific inquiry and the nature of science. He is chair of the Department of Mathematics and Science Education at the Illinois Institute of Technology, past president of the National Association for Research in Science Teaching, and co-editor of the journal School Science and Mathematics (in which this article was simultaneously published under the title “Never Cry Wolf”).

We are pleased to publish his editorial, “What Works: A Commentary on the Nature of Scientific Research” in this issue of the CITE Journal.

Jerry Willis, founder of SITE, has contributed an associated commentary that amplifies and extends Professor Lederman’s remarks. Talbot Bielefeldt, manager of research and evaluation at the International Society for Technology in Education, offers a third perspective. Together, these commentaries shed light on the nature of science and scientific research and make an important and timely contribution to the national discussion of this topic.

We encourage other readers of the CITE Journal to respond as well. We will publish further commentaries that extend the scholarly dialog on this topic as we receive them.

The film Never Cry Wolf, inspired by Farley Mowat’s book, begins with some reminiscing by a young scientist (i.e., “Tyler”) about his travels to the Arctic to demonstrate that wolves had been preying on the caribou population. For years, we are told, the caribou population had been slowly and consistently decreasing, and the accepted view was that wolves were responsible. Unfortunately, definitive data did not exist in support of what everyone already knew, and Tyler had been asked (told) by his mentor to provide the necessary evidence to support a plan for reducing the wolf population.

The story continued with descriptions of all procedures to be used and necessary equipment Tyler would need to complete his task and, eventually, of his arrival in the frozen North. The scientific investigation required that the investigator “dispose” of several wolves and analyze the contents of their stomachs. Clearly, this would show that the wolves had been eating caribou. At first, no wolves were observed, let alone available for sacrifice, so Tyler began to study their “scat” for evidence of caribou remains. None were found.

When Tyler finally located a family group of wolves, his observations of their behavior supported the emerging notion that wolves were not getting their sustenance from caribou consumption. At this point, it became clear to Tyler that the initial premise and design of his investigation were not adequate to solve the original problem. It was not at all clear what wolves were eating on a regular basis.

Tyler then embarked on an ingenious plan to observe all aspects of wolf behavior. He tried to integrate himself as much as possible into the wolves’ daily life in an effort to answer the question at hand. It was clear that the original plan would not work and so he chose to proceed with “what worked.” He did an excellent job of illustrating what is meant by participant observation research techniques.

To make a long story short, Tyler ultimately found that the wolves were living primarily on rodents, and when they did prey on caribou, it was on the weakest and most diseased members of the herd. In short, the wolves were actually enhancing the genetic pool of the herd and helping the future survival of caribou, as opposed to being a menace to their ultimate survival. The scientist’s change of plans worked. He was able to answer his research question. He found “what worked.”

The film illustrates the common course of science, although most people and even some scientists don’t realize it. Scientists pursue the answers to their questions in varied ways. These approaches differ within the various sciences and vary even more across the different sciences. No single set or sequence of steps exists that scientists follow in attempting to answer their questions of interest. There is no single scientific method that can be used to accurately characterize science or what scientists do. The questions guide the research approaches/design, and scientists, within certain limits, do what works.

Even the most casual of observers of environmental science can tell you that a classical pretest-posttest control group design (i.e., classical experiment) is not particularly effective if one is studying nature. Or consider the astronomer studying planetary motion or simply attempting to describe the atmosphere of a distant planet. Not too many opportunities for a classic experiment, are there? In reality, scientific research is as descriptive as it is experimental, and the design to collect data (and the form of the data) is more varied than homogeneous. What I have just stated is misunderstood by most adults and K-12 students worldwide.

Emerging changes in policy regarding the funding and completion of valued educational research also speak to the importance of “what works.” But it is clear that those chanting the “what works” mantra are as confused as the general public and K-12 students about the meaning of scientific research.

Within the past few months there has been a seemingly endless stream of articles written by educators bemoaning a shift in policy toward the inclusion of what is labeled as “scientifically based” educational research. The November issue of Educational Researcher is dedicated to the topic, as is the National Research Council’s text, Scientific Research in Education (2002). The lightning rod of such concerns has become the $18.5 million contract awarded by the U.S. Department of Education for the development of the What Works Clearinghouse (WWC).

Although the mission of the WWC may seem harmless enough—”summarize evidence on the effectiveness of different programs, products, and strategies intended to enhance academic achievement and other important educational outcomes”—the underlying message is based on the same misconceptions that the public and our K-12 students have about science. The pundits, whether you want to associate them with political parties or not, are in favor of pursuing educational research of higher rigor and quality. The research would be such because it would follow the regimes of scientific investigations and, naturally, would provide results that were useful to the improvement of teaching and learning. The logic is simple. The reason we are in the educational mess we are in today is that the research guiding educational practice and policy is weak. We, they would say, really do not have any definitive evidence for what makes good practice in educational settings.

No doubt, representatives of the WWC would quickly point out that they really do understand scientific inquiry. They would probably refer to their discussion of scientific evidence, which states, “We are interested in the range of methods that can determine the degree to which an intervention or approach has an impact on (or affects) education outcomes. Experimental and certain quasi-experimental research designs are most appropriate for this purpose.” Although in the discussion of “scientific evidence” they admit that “many forms of research are relevant to education, and different forms of research serve different functions,” the appeal is ultimately to designs that can be used to establish cause.

The implication is clear. “Scientific evidence” is the kind of evidence that can be derived only from experimental and certain quasi-experimental designs. An inspection of the draft standards that the WWC has developed for the evaluation of research investigations and research claims reveals nothing different from what one would find in the textbooks most commonly used for quantitative research design courses at the graduate level. It appears that individuals at the WWC are akin to those scientists who would consider biology a softer science as compared to the “hard” sciences of physics and chemistry. The illogical debate that sometimes occurs among scientists is that biology is not as scientific as the hard sciences because of the inability to control living organisms and biological systems adequately.

This all leads to the same position, the desire and necessity to control all variables so that definitive causal findings can be derived. And, of course, the only way to do this is by using the “Scientific Method” (i.e., classical pretest-posttest control group design). At least the WWC has made one compromise in response to the realities of instructional settings by admitting the use of certain quasi-experimental designs.

Although the current concerns are reminiscent of the quantitative/qualitative wars of several years ago, the battle lines of the current discussion inexorably revolve around what constitutes science. Even more narrow-mindedly, the definition of science has been placed solely on the process of inquiry, necessarily adhering to a particular approach more commonly known as “THE Scientific Method.” Historians and philosophers of science, scientists, and science educators have long dismissed the scientific method as representative of how scientists approach questions of interest.

During the latter portion of the 20th century, when a more systematic and concerted effort was made in the study of teaching and learning, researchers borrowed from the same agricultural designs used by mainstream science. Perhaps the primary reason for this decision was the cultural status possessed by science. In any case, the history of educational researchers’ affair with strict quantitative research designs is well documented. Although this approach to research, which is virtually the same as that currently being advocated by the WWC, provided important foundational knowledge about teaching and learning, it was clearly limited. In-depth understandings of teachers’ thought processes and the ways students mediate instructional experiences were not accessible through such means. In short, educators realized that many questions remained and new questions had arisen. In their search to find out “what works,” they needed to find research approaches that worked. The situation was really no different than what confronted Tyler and his wolves.

Researchers in all areas of education began to view classrooms as systems and cultures. They began to see the importance of the dynamic interactions among participants (i.e., teachers and students), as groups and as individuals. Borrowing from anthropology and sociology, educators began to research instruction from a totally different perspective than what was afforded by the “scientific” agricultural designs.

The situation is not much different than the shift from reductionist to systems thinking in environmental studies. As a consequence, there are few today who do not realize the difficulty in generalizing from one class to another (with the same teacher), let alone generalizing across teachers, schools, and states. Interestingly, classroom teachers have known this all along. Most teachers’ complaints about research findings that failed to resonate with their local situations were in response to rigorously quantitative studies that overgeneralized in deference to the sampling theory gods.

Misconceptions some have about the existence of a single scientific method aside, there are other problems with the application of classical experimental scientific research designs to classroom situations. It is absolutely critical, if one wants to imply cause, to carefully control or account for extraneous variables in research. There are problems, however, when you are dealing with situations involving living organisms that exhibit voluntary will and individuals that react differently, for a variety of reasons, to the same environmental conditions/stimuli.

There has been a history of attempts to conduct carefully controlled experiments in classroom settings. However, the situation becomes so contrived that little external validity can be ascribed to the investigation. Quite simply, the situation is so deviant from general classroom life across settings that attempts to generalize to other situations have become futile at best. Much of the research conducted in “laboratory schools” suffered from this problem. The research, in and of itself, was fine for the specific situation, but generalizing to other populations was difficult. Nevertheless, the WWC would like to pursue this path again, to the degree that they place little value on designs that do not attempt to make definitive causal claims.

The WWC claims to be moving educational practices toward a “medical model.” That is, educational practitioners are asked to seek the results of “scientifically controlled studies (like clinical trials)” to make instructional decisions. Most medical research, for ethical reasons, does not follow experimental research models. It is simply not acceptable to randomly solicit participants for an investigation and then randomly assign them to treatments, one of which has potential harm. In most cases, medical research involves ex post facto designs (e.g., heart disease studies, smoking/cancer studies), which are correlational by nature.

Surely representatives of the WWC will now want to direct my attention to the plethora of experimental studies in medicine that involve human models (substitutes for humans in terms of physiology). Surely, they would say, all of the research done on various drugs and medicines began with experimental studies on rats or other mammals with the only inference being the similarities between the physiology of the human and the physiology of the animal being used as a model. In this case, my detractors would be absolutely correct.

However, there is a vast difference between generalizing results of experimental medical studies using human models and generalizing experimental studies in education, as WWC and the Department of Education want to do. The studies with drugs, medicine, etc., involve inanimate effects in the sense that what is involved is the interaction of various molecules within the physiological systems of the human or human model. In education we transcend the organic level and have to grapple with motivation, free will, emotions, attitudes, etc.

Certainly, inanimate factors influence all of these human characteristics, but virtually everyone interested in learning beyond a passing curiosity knows learning to be far more complex. I don’t know anyone who would currently assume that using a particular teaching approach with birds would generalize to human learning. Wasn’t this the problem that we all had with operant conditioning and the work of the behaviorists? When it comes to complex thinking, human behavior is just not that simple, or at least not simple enough to allow the high levels of predictability and generalizability desired by the WWC.

It is interesting to note that there was a period of time in recent history when experimental studies in learning involving human models were in vogue. Do you remember the investigations in which worms or rodents were taught certain skills and then were sacrificed and fed to other animals of the same type? This approach was grounded in the belief that learning was organic and could be transferred through the transfer of organic material. Again, the medical model of experimental research only holds for investigations involving the inanimate, not such things as complex learning in humans.

By now you may have concluded that I must be a total relativist and would not admit to any progress in our knowledge of teaching or learning. Nothing could be further from the truth. I am a strong supporter of the value of both quantitative and qualitative research. I also believe that studies with small sample sizes can be as valuable as studies with large sample sizes. The most critical issue is the relationship among the research questions, research design, and the nature of the data collected.

Research questions should guide design and data choice. Researchers should pursue “what works,” and what works depends on the question being asked, not some idealized scientific method that is incorrectly purported to be the only method to produce scientific evidence. In addition, it is critically important that all researchers remain intimately aware of the assumptions embedded in their research questions, designs, and analyses and the implications these assumptions have for brute force generalizability to the rest of the world.

Overall, although the intentions of the WWC are admirable, the project is flawed for at least two critical reasons. The effort has a clear underlying (and sometimes not so subtle) belief that scientific evidence can only be provided by causal research designs (aka The Scientific Method) and that research findings from studies of teaching and learning can be generalized freely across contexts and situations if derived from studies following causal designs.

In our attempts to enhance teaching and learning from systematically collected empirical evidence let us never lose sight of the unpredictability and indeterminate nature of human behavior.

Norman G. Lederman
Illinois Institute of Technology
