Adams, S. T. (2004). Commentary: Considerations in pedagogy and assessment in the use of computers to promote learning about scientific models. Contemporary Issues in Technology and Teacher Education [Online serial], 4(1). https://citejournal.org/volume-4/issue-1-04/science/commentary-considerations-in-pedagogy-and-assessment-in-the-use-of-computers-to-promote-learning-about-scientific-models

Commentary: Considerations in Pedagogy and Assessment in the Use of Computers to Promote Learning About Scientific Models

by Stephen T. Adams, California State University, Long Beach

Abstract

Although one role of computers in science education is to help students learn specific science concepts, computers are especially intriguing as a vehicle for fostering the development of epistemological knowledge about the nature of scientific knowledge—what it means to “know” in a scientific sense (diSessa, 1985). In this vein, the article by Cullin and Crawford (2003) investigated using computer modeling activities in the curriculum of a science methods course. Their goals transcended improving their students’ understanding of specific models; they aimed to improve their students’ appreciation of the nature of scientific modeling in general. This response to their article discusses their findings in relation to instructional and assessment considerations in this area. Improving preservice teachers’ understanding of the nature of modeling in science is important in part because it supports the related goal of improving students’ understanding in this area. To further make the case for the value of an understanding of the nature of models in science, and as a complement to Cullin and Crawford’s discussion of teachers’ understanding of models, this response also discusses examples from a study of high school students’ interpretation of a scientific news report involving computer models.

Computer-based curricula about scientific models, such as the one reported on in the recent article in this journal by Cullin and Crawford (2003), offer promise as a means for fostering learning about scientific knowledge and what it means to “know” in a scientific sense (diSessa, 1985). The curricular unit Cullin and Crawford studied aimed to improve prospective science teachers’ appreciation of the nature and uses of scientific modeling.

This commentary discusses some pedagogical considerations in response to their findings. It also proposes another method for assessing students’ knowledge of modeling: how they interpret news reports of scientific findings derived from models. It draws examples from my own study of high school students’ interpretation of a news report, from Time magazine, involving computer models. This proposed assessment method not only fits well with Cullin and Crawford’s article but also serves to highlight “knowledge of modeling” as a critical area of scientific literacy.

Learning about computer models is clearly an important area, as attested to by science education standards documents (American Association for the Advancement of Science, 1993; National Research Council, 1996) and a recent volume of studies about models in the science curriculum (Gilbert & Boulter, 2000). Recognizing the need for further research on teacher preparation about modeling, Cullin and Crawford studied the results of using a curriculum (based on the dynamic modeling software Model-It) with preservice teachers in a science methods course. The curriculum involved the students in designing, building, and testing computer models. To assess students’ learning, they conducted interviews and administered questionnaires, using questions adapted from an earlier study by Grosslight, Unger, and Jay (1991).

Cullin and Crawford mentioned one of the levels of the three-level classification scheme about understanding models that Grosslight et al. developed. It is useful to review the full three-level scheme here as a way of delineating possible conceptions about models. In Grosslight et al.’s scheme, a Level 1 understanding reflects a view of models as simple copies of reality, without an understanding of specific reasons why models are constructed the way they are. A Level 2 understanding incorporates an explicit understanding of the purposes of models and reflects a realization that models do not have to correspond exactly to what they are representing.

Finally, a Level 3 understanding incorporates three distinguishing characteristics: (a) models are seen as vehicles for developing and testing ideas rather than as copies of reality, (b) it is recognized that modelers select among different possible designs according to their purposes, and (c) it is understood that new ideas can be developed by manipulating and testing models, as part of an iterative constructive process. Grosslight et al. developed this scheme using an expert/novice paradigm involving research with students (both middle school and high school) and scientists. By and large, only the scientists exhibited the more sophisticated Level 3 understanding.

Cullin and Crawford had hoped to raise their students’ awareness of the role of modeling in scientific inquiry. Such a shift would have been evident had their participants exhibited a Level 3 understanding of models as a result of the activity. However, the students in their sample exhibited only a Level 2 understanding. Also, their participants (again, prospective teachers in a science methods course) tended to discuss models with reference to their role as pedagogical tools rather than their role in constructing scientific knowledge.

In other words, their participants tended to discuss models with reference to how they could help students learn particular concepts rather than as central constructs in a process of developing scientific knowledge. Although the prospective teachers began to use some new terms (e.g., “variables” and “relationships”), they did not appear to attain the authors’ pedagogical goals involving learning at an epistemological level.

Even so, “negative results” can be every bit as thought provoking and useful as positive ones, and their article provides a springboard for considering instructional and methodological approaches to using and studying computer models in science education. What might account for their experimental outcomes? First, as the authors noted, the context of the activity is a possible factor. The participants were, after all, enrolled in a methods course for science teachers, so one might reasonably expect them to be attuned to pedagogical considerations in this context. Also, because there is no particular reason to suppose the students held well-developed beliefs in this area prior to the instructional activity, the role of context may be all the more salient.

A second consideration is how students viewed the modeling activities. Grosslight et al.’s Level 3 understanding reflects an emphasis on the ideas behind the models over the actual models themselves. Clearly, Cullin and Crawford’s students were involved in building and testing both qualitative and quantitative models. Future work might probe the degree to which students viewed the curriculum as emphasizing the ideas behind the models, as opposed to the model construction itself. In computer activities involving creating computer representations, students can sometimes focus on creating the representations rather than engaging with the ideas behind those representations (Adams, 2003).

A third point is that their participants were asked general and abstract questions about computer models, questions that might well be difficult for persons who are relatively inexperienced with scientific models. One suggestion would be to provide concrete examples based on one or more specific issues. In my previous research on reasons for scientific disagreement, for example, posing questions in the context of a specific issue elicited conceptions that did not surface from more general probes (Adams, 2001).

A related consideration is that, in the Cullin and Crawford study, the authors seemed to be looking for a kind of “far transfer” from the activities with the Model-It software to scientific modeling in general. That is, the hope was that the participants would generalize from activities with the computer models to broader questions about scientific modeling. Future work might probe, less ambitiously, for a kind of “near transfer” that would be tied to models related to the ones the students constructed.

Based on the possibility that the context of learning about models in a science methods course would pull students towards a “pedagogical” view of scientific models, Cullin and Crawford also suggested that teacher candidates be engaged in experiences with modeling as part of their undergraduate science content courses. This seems advisable, and a further exploration of possible reasons for the reported student views might also be productive. Cullin and Crawford’s study incorporated an instructional module that was only part of a single course. However, promoting a shift in students’ epistemological thinking is ambitious, and such a change might reasonably be expected to require the span of several courses (Reif & Larkin, 1991), or perhaps even some training in the philosophy of science.

Another suggestion would be to investigate whether a relationship exists between students’ views of models and their more general views about whether they see themselves as producers or consumers of knowledge. In other words, students who view themselves as capable of producing knowledge may be more open to viewing scientific models as tools for developing new knowledge. Conversely, students who tend to view themselves as consumers of knowledge may be more drawn to a view of computer models as tools for explaining concepts that others have discovered.

Cullin and Crawford’s work draws attention to some of the challenges associated with using computer models to teach about the nature of scientific modeling. Considerations in designing these educational experiences include (a) the role of the instructional context in shaping students’ views about the nature of models, (b) students’ views of the modeling activities (e.g., the extent to which they view a curriculum as being concerned with constructing models or considering the ideas behind them), (c) the level of abstraction in questions about models (e.g., whether students are given an example of a model to consider when asked to evaluate questions about modeling in general), (d) the length of time needed for students’ understandings to shift, and (e) the extent to which the introduced ideas may transfer to other situations.

A related topic concerns the relationship of models to scientific literacy. My own research comparing high school students’ and scientists’ interpretations of media reports deals with further considerations in this area (Adams, 2002a). This research was part of a larger body of work comparing how high school students and scientists interpret information about climate change presented on the World Wide Web and in print sources (Adams, 1999, 2002b).

One of the news reports, from an article in Time magazine (Lemonick, 1995), specifically discussed computer models in describing how climate change was gaining broader acceptance in the scientific community. Scientific progress in the development of computer models had strongly influenced the direction of a major report of the international scientific body studying climate change, the Intergovernmental Panel on Climate Change. Despite some limitations (including some sensationalism), the article gave a good account of this scientific development. In particular, the article traced how computer models became better matched to physical observations when they were improved by incorporating sulfate aerosols, a previously overlooked factor.

In discussing the article, the scientists demonstrated an understanding of models akin to Grosslight et al.’s Level 3. They expressed the view that the computer models were useful in spite of the uncertainties. For example, a doctoral candidate studying climate change laid out the limitations of these computer-based General Circulation Models (GCMs) as follows:

    A lot of people criticize how the global climate models that we use to understand this are developed, that you can’t possibly capture the whole climate system with mathematics. And there are still so many uncertainties, or so many processes that we haven’t been able to model well enough to put into these GCMs, that we can’t talk about model predictions with very much confidence. That’s where a lot of the heaviest criticisms are aimed. And to a certain extent, some of the criticisms are things that should be listened to, because it is true that GCMs do have a lot of uncertainty, and you can’t capture everything in nature with math.

In spite of the uncertainty, she emphasized that these models were, by and large, trustworthy:

    But to a large extent we can trust the models, because they’ve been tested against simulating the present climate, and against things that have happened in past climates. So to a certain extent we can understand and trust the models, and the fact that there are different models developed by different groups who tell us similar kinds of things. Maybe the pattern of climate change is different in different models, but the general direction and magnitude is fairly similar.

The news report tended to be far more difficult for students to interpret. Upon learning that scientists changed the models in response to an overlooked factor, students (all 17 years old) tended to become suspicious of the models altogether. Certainly, it is appropriate to take a skeptical and critical stance toward findings derived from computer models. On the other hand, to say that the computer models do not mean anything at all reflects a misunderstanding of how scientific knowledge develops. Indeed, to say that, in order to be valid, a model must take everything into account reflects a view akin to a Level 1 understanding in Grosslight et al.’s scheme. One student expressed the view that the computer models are of no use because they are undoubtedly missing things:

    This article made me be like, made me feel like, that the computer simulation didn’t mean anything, because there’s so many things that can go wrong. You know, they left out aerosols, and by doing that, that fixed the problem, and now it leads to the data that they expected to see. Well, what if they forgot something else? Then they could have totally different effects, and we could find that global warming doesn’t have an effect at all. And this article just made me feel that the computer model was really unnecessary, it just was—it didn’t make me go, “yay, computer model.”

In a similar vein, another student expressed a view that if a model was missing one thing it was undoubtedly missing other things:

    The fact that they left off, they left out aerosol, and that became a large factor in their research, like what the computer would think in the next 100 years, what’s gonna happen? That also means that they’re forgetting about some other stuff. I’m sure that they left out some other things.

These examples highlight “knowledge of scientific modeling” as an aspect of scientific literacy. They also illustrate that students’ interpretations of reports in the media of scientific developments involving models may be influenced by their beliefs about the role of models in science. High school graduates should have sufficient background to make informed judgments about reports in the media of scientific findings involving models. This, in turn, will require that their teachers also have such preparation.

Training and assessing teacher candidates in computer modeling, as Cullin and Crawford did, is a useful step. The considerations outlined earlier apply in designing such experiences: the instructional context, students’ views of the modeling activities, the level of abstraction in questions about models, the time needed for understandings to shift, and the extent to which the introduced ideas transfer to other situations.

Further, a pragmatic test of progress in students’ knowledge of scientific modeling might include whether they are more successful in interpreting media reports of scientific developments involving models. Such an assessment has the advantage of being directly aligned with one of the goals of scientific literacy. An understanding of computer models and the role they play in the development of scientific knowledge is not just an esoteric endeavor; it is an important tool for interpreting news of scientific developments in an increasingly complex and interdependent world.

References

Adams, S. (1999). Critiquing claims about global warming from the World Wide Web: A comparison of high school students and specialists. Bulletin of Science, Technology, & Society, 19(6), 539-543.

Adams, S. (2001). Views of the uncertainties of climate change: A comparison of high school students and specialists. Canadian Journal of Environmental Education, 6, 58-76.

Adams, S. (2002a, March). Formulating goals for scientific and information literacy: Case study of students’ and specialists’ evaluation of a news report concerning computer models of climate change. Paper presented at the Annual Meeting of the National Association for Science, Technology and Society, Baltimore, MD.

Adams, S. (2002b). Studies of how students and scientists evaluate scientific claims from the World Wide Web: A method for formulating goals for scientific literacy and critical information literacy. In N. V. L. Bizzo, C. S. Kawasaki, L. Ferracioli, & V. L. da Rosa (Eds.), Rethinking science and technology education to meet the demands of future generations in a changing world (Vol. 2, pp. 546-556). Sao Paulo, Brazil: International Organization for Science and Technology Education.

Adams, S. (2003). Investigation of the “Convince Me” computer environment as a tool for critical argumentation about public policy issues. Journal of Interactive Learning Environments, 14(3), 263-283.

American Association for the Advancement of Science. (1993). Benchmarks for science literacy. New York: Oxford University Press.

Cullin, M., & Crawford, B. A. (2003). Using technology to support prospective science teachers in learning and teaching about scientific models. Contemporary Issues in Technology and Teacher Education [Online serial], 2(4). Retrieved February 20, 2004, from https://citejournal.org/vol2/iss4/science/article1.cfm

diSessa, A. (1985). Learning about knowing. In E. L. Klein (Ed.), Children and computers: New directions for child development. San Francisco: Jossey-Bass.

Gilbert, J. K., & Boulter, C. (2000). Developing models in science education. Boston, MA: Kluwer Academic Publishers.

Grosslight, L., Unger, C., & Jay, E. (1991). Understanding models and their use in science: Conceptions of middle and high school students and experts. Journal of Research in Science Teaching, 28(9), 799-822.

Lemonick, M. (1995). Heading for Apocalypse? A new UN report says global warming is already under way-and the effects could be catastrophic. Time, 146(12).

National Research Council. (1996). National science education standards. Washington, DC: National Academy Press.

Reif, F., & Larkin, J. (1991). Cognition in scientific and everyday domains: Comparison and learning implications. Journal of Research in Science Teaching, 28(9), 733-760.

Acknowledgments

I wish to thank Teresa Chen, Ali Rezaei, and Alan Colburn for comments on an earlier draft.

Contact Information

Stephen T. Adams
California State University, Long Beach
[email protected]
