{"id":903,"date":"2009-09-01T01:11:00","date_gmt":"2009-09-01T01:11:00","guid":{"rendered":"http:\/\/localhost:8888\/cite\/2016\/02\/09\/a-qualitative-approach-to-assessing-technological-pedagogical-content-knowledge\/"},"modified":"2016-06-04T01:45:11","modified_gmt":"2016-06-04T01:45:11","slug":"a-qualitative-approach-to-assessing-technological-pedagogical-content-knowledge","status":"publish","type":"post","link":"https:\/\/citejournal.org\/volume-9\/issue-4-09\/mathematics\/a-qualitative-approach-to-assessing-technological-pedagogical-content-knowledge","title":{"rendered":"A Qualitative Approach to Assessing Technological Pedagogical Content Knowledge"},"content":{"rendered":"

Technological tools have changed the landscape of mathematics teaching. In geometry, teachers can launch investigations by having students efficiently manipulate, measure, and analyze dynamic diagrams (Jackiw, 2001). The study of algebra can be facilitated by the graphing calculator's ability to produce multiple representations of functions to be compared and contrasted with one another (Fey, 1989). The process of statistical investigation can be supported by software that can quickly import data from the Internet for analysis (Finzer, 2002). Various other technologies, such as online discussion boards (Groth, 2008), spreadsheets (Alagic & Palenz, 2006), and even robots (Reece et al., 2005), have been discussed in terms of their potential to support students' mathematical learning.

The rapid expansion of available technological tools has prompted scholarly discourse about how Shulman's (1987) construct of pedagogical content knowledge might be built upon to describe the sort of knowledge teachers need for teaching with technology. Recently, the phrase "technological pedagogical content knowledge" (or technology, pedagogy, and content knowledge; TPACK) has been used to describe "an understanding that emerges from an interaction of content, pedagogy, and technology knowledge" (Koehler & Mishra, 2008, p. 17). Such a conceptualization emphasizes that TPACK is more than the sum of its parts. It implies that teachers must engage with content, pedagogy, and technology in tandem to develop knowledge of how technology can help students learn specific mathematics concepts.

The emergence of the TPACK construct presents a dilemma: How can teachers' acquisition of TPACK be assessed? Answering this question is necessary for determining the extent to which different teacher education programs and experiences foster the development of TPACK. Without viable assessment mechanisms, it is difficult to compare approaches and to make decisions about actions to take in teacher education.

Pragmatic issues such as program accreditation and grant evaluation also highlight the need for TPACK assessment. Accreditation agencies (e.g., National Council of Teachers of Mathematics & National Council for Accreditation of Teacher Education, 2005) have begun to ask for data about prospective teachers' abilities to use technology in mathematics instruction, and agencies that fund the purchase of classroom technology often want assessment data on how teachers are using the technology purchased. Hence, the development of TPACK assessment mechanisms is vital to building the infrastructure for current and future teacher education efforts.

Two Contrasting Paradigms for the Assessment of Teachers' Knowledge

One current paradigm for assessing mathematics teachers' knowledge is primarily quantitative and psychometric in nature. The Learning Mathematics for Teaching (LMT) Project at the University of Michigan exemplifies this approach (Hill, Schilling, & Ball, 2004). University faculty write forced-response test items meant to measure the mathematical knowledge necessary for teaching. The items are field-tested and sorted based on their psychometric properties. Ultimately, scales of items are constructed and disseminated to individuals seeking to measure the effect of teacher education programs on mathematics teachers' knowledge acquisition. One of the chief advantages of the psychometric approach is that it produces sets of items that can be administered to teachers relatively quickly. The process of refining items and scales in response to empirical data has also contributed to the construction and refinement of theory about the types of knowledge needed to teach mathematics (Hill, Ball, & Schilling, 2008).

Another current paradigm for assessing mathematics teachers' knowledge is primarily qualitative in nature and draws upon case descriptions of teachers' classroom practices. Simon and Tzur's (1999) idea of generating "accounts of practice" is rooted in this paradigm. In generating accounts of practice, researchers study teachers' classroom practices through the lenses of conceptual frameworks that identify important theoretical constructs for attention. The conceptual frameworks may be revisited in response to observational data. Ultimately, accounts of teachers' practices are built that "can portray the complex interrelationships among different aspects of teachers' knowledge and their relationships to teaching" (p. 263). One of the primary advantages of this approach is that it is flexible enough to allow the researcher to focus on aspects of teachers' knowledge that may not have been identified a priori in the conceptual framework. It also allows for the exploration of contextual factors that contribute to the knowledge teachers exhibit in their classrooms.

The Assessment Framework

The approach to assessing TPACK described in this paper is in the tradition of the qualitative "accounts of practice" paradigm. It grew out of a lesson study (Lewis, 2002) professional development project. A lesson study cycle involves having a group of teachers collaboratively construct a lesson around a shared learning goal for students, implement it, observe the implementation, and then debrief on the strengths and weaknesses of the lesson. The debriefing may lead to another lesson study cycle in which teachers continue to refine their approach to teaching the chosen concept. Stigler and Hiebert (1999) drew attention to lesson study as a model of professional development in their comparison of mathematics education in the U.S. and Japan. They noted that, compared with Japanese teachers, U.S. teachers tend to work in relative isolation from one another.

Whereas Japanese teachers regularly meet to plan, observe, and debrief on lessons, U.S. teachers generally do not. Hiebert and Stigler (2000) hypothesized that such differences in professional development contributed to achievement differences between students in the U.S. and Japan and suggested that lesson study be implemented in the U.S.

The lesson study process can generate a substantial amount of qualitative data for analysis, as illustrated in Figure 1. The rectangles in Figure 1 represent the phases in a lesson study cycle. Arrows between the rectangles indicate the progression from one phase to the next. The arrow from phase 5 (debriefing) back to phase 1 (planning) shows that a debriefing session may spark a new cycle. The dashed lines extend to the qualitative data produced at each phase.

Arrows extending from the qualitative data sources indicate that the qualitative data can be assembled into a case study database (Yin, 2003), from which inferences about teachers' collective TPACK are drawn. Further details about each phase in the process and how the process can be used to assess TPACK are provided in the remainder of this section. The assessment model built on this process will be referred to as the Lesson Study TPACK (LS-TPACK) model.

As indicated in Figure 1, the first step in the LS-TPACK model is that teachers collaboratively construct a lesson that incorporates technology in a school-based lesson study group (LSG). The type of technology and the learning goals for the lesson are not dictated to them by university personnel. Instead, teachers choose learning goals and accompanying technology for the lesson by identifying problematic concepts to address in collaboration with one another (Lewis & Tsuchida, 1998). Once learning goals and pertinent technology have been identified, the teachers use a four-column lesson plan format (Curcio, 2002) to write a lesson to be implemented by one of the members of the LSG. The four-column format the teachers are to use is shown in Figure 2. The primary goals of using the four-column format are to draw teachers' attention toward matching instructional activities with students' perceived learning needs and assessing students' progress toward learning goals.

\"Figure<\/a><\/p>\n

Figure 1. The LS-TPACK assessment framework.


\"Figure<\/p>\n

Figure 2. Four-column lesson plan format.


After the LSG writes the four-column lesson, it is sent to university faculty for review. Teachers also submit ancillary materials, such as worksheets and handouts, to be used during the lesson. University faculty members are chosen to review the lesson based upon their teaching and research interests. The faculty reviews are solicited because previous research has illustrated that outside perspectives on the work of an LSG can help identify pedagogical and content-related weaknesses in lessons (Fernandez, 2005).

Reviewers are asked to comment on questions at three levels of specificity, as shown in Figure 3. The inclusion of the three levels of specificity resonates with Lee and Hollebrands' (2008) observation that TPACK can be conceptualized as technological and pedagogical knowledge nested within content knowledge.

\"Figure<\/a><\/p>\n

Figure 3. Questions reviewers are given to evaluate LSG lessons.


Within 2 weeks of the submission of the initial four-column lesson, university faculty feedback is sent to the LSG. The LSG is then left to decide which feedback will be used to refine the written lesson before it is implemented. At this point, university faculty members are involved in the planning of the lesson only if the LSG requests their help. This minimally invasive stance is taken to allow teachers to reflect on which pieces of feedback are feasible to build into the lesson and which ones are not.

Once the lesson has been refined based on reviewers' feedback and teachers' judgment, one member of the LSG teaches it. Another LSG member serves as videographer. The video is later viewed by all members of the LSG along with university faculty. Although it is often ideal to have all of the LSG teachers present in the room when the lesson is implemented (Lewis, 2002), video is used as a sharing mechanism to overcome obstacles associated with coordinating the schedules of all of the LSG members and university faculty members during the school day.

LSG members and university faculty view the lesson video together during a debriefing session at phase 5 of the lesson study cycle. To begin the debriefing session, the teacher who implemented the lesson and the videographer are asked to provide any contextual information that may help explain what will be observed in the video. After this information is shared, the video is played, and debriefing session participants are asked to take notes on perceived strengths and weaknesses of the lesson.

When the video is over, the individuals participating in the meeting each share their perceptions of the strengths and weaknesses of the lesson. Initially, each person shares one perceived strength and one perceived weakness. The teacher who taught the lesson goes first during this portion of the session, and university faculty members go last. A less structured conversation follows after each debriefing participant has shared perceived strengths and weaknesses. During this unstructured conversation, discourse may turn toward goals for the next, related lesson study cycle.

As shown in Figure 1, the LS-TPACK process produces qualitative data that are assembled into a case study database (Yin, 2003). The initial LSG four-column lessons and ancillary materials comprise part of the database. The lesson reviews written by university faculty comprise another part. Transcripts of the implemented, videotaped lessons and transcripts of debriefing sessions are also included.

To draw inferences about the collective TPACK of the LSG from the case study database, university faculty comments about teachers' use of technology are compiled from the initial written reviews and the observations made during the debriefing session. These written and verbal observations are then compared against the LSG's implemented lesson and teachers' debriefing session comments. The comparison process is used by the project principal investigator to draw inferences about the nature of the teachers' TPACK.

University faculty members who reviewed lessons and participated in debriefing sessions are asked to validate the inferences by revisiting, as necessary, the LSG's initial lesson, the reviews they gave it, and the transcripts of the implemented lessons and debriefing sessions. The purpose of the validation step is to help ensure the production of trustworthy inferences (Cobb, 2000) about teachers' TPACK.

An Application of the Assessment Framework

In one instance, the LS-TPACK framework was used to assess an LSG's TPACK related to teaching systems of equations using graphing calculators. Two lesson study cycles related to this topic occurred within one academic year. The first cycle dealt with constructing a lesson for the general algebra I population at the school, and the second cycle's lesson was for a group of algebra I students the LSG considered to be more advanced.

The first author drew inferences about the teachers' TPACK by examining the case study database for these cycles, as described earlier. The inferences were then validated and refined when necessary in consultation with the second, third, and fourth authors of the paper, who served as university faculty reviewers and debriefers during the lesson study cycles. Three of the most salient TPACK inferences drawn and validated are offered in the next section. The three inferences helped form the foundation for future work with the LSG by identifying TPACK elements in need of further development.

Inference 1: LSG members needed to develop knowledge of how to use the graphing calculator as a means for efficiently comparing multiple representations and solution strategies.

The first lesson implemented by the LSG involved solving systems of linear equations presented in word problems. The university faculty members reviewing the initial written lesson commented on the need to have students use the graphing calculator to make connections among the algebraic, tabular, and graphical representations of functions. Initially, the lesson called only for students to solve systems using their "preferred method." The reviewers felt that using the calculator to make connections among the representations would help students understand one solution strategy in terms of another and develop the capacity to make informed choices about which representation to use in solving a given problem.
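To make this kind of connection concrete, consider a hypothetical system (an illustration only, not one taken from the LSG's lesson):

\[
y = 2x + 1, \qquad y = -x + 7.
\]

Solving algebraically, \(2x + 1 = -x + 7\) gives \(x = 2\) and \(y = 5\). On a graphing calculator, the table view shows the two function columns agreeing at \(x = 2\) (both equal 5), and the graph view shows the two lines intersecting at \((2, 5)\). Asking students to explain why all three representations identify the same pair \((2, 5)\) is the sort of cross-representational work the reviewers described.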

In implementing the lesson, the LSG largely stayed with the idea of not pushing students beyond their individual preferred methods for solving systems of equations. During the debriefing session, they gave a variety of reasons for taking this course of action. The student population was cited as one contributing factor: teachers noted that the class in which the lesson was implemented contained some special education students, who they believed would not be capable of understanding algebraic representations of the problems. Teachers also said that the standardized test given by the state did not require students to exhibit knowledge of multiple representations of functions; it required only that students generate a solution. Finally, the time constraints of the lesson were cited as a reason not to delve into multiple representations. The school had shortened its class periods to 42 minutes, and the LSG felt this did not provide adequate time to explore multiple representations of functions.

The LSG's second lesson was written for an algebra class that contained only students who had been identified as academically strong. The main idea developed in the lesson was how to use the matrix multiplication capabilities of the graphing calculator to solve systems of equations. In reading the initial written lesson, reviewers again stated that using only one function on the calculator to produce an answer was not mathematically rich. In particular, they noted that the calculator was not used to compare and contrast representations and solution strategies for systems of equations.

Teachers again attributed this missing element of the lesson to a lack of time during the class period and to the types of questions students would have to answer on the state's standardized test. Unlike in the first lesson, however, student ability was not cited as one of the reasons for not delving into connections among multiple representations.

Comparing the first lesson to the second, the two entrenched reasons for not using the graphing calculator to make connections among representations and solution strategies were perceived time constraints and the fact that students would not be explicitly asked to make such comparisons on the state's standardized test. This observation led to a hypothesis: Teachers needed a vision of how using the calculator to compare representations and solution strategies could ultimately be more time-efficient, because doing so would help build students' understanding and, hence, their capacity to solve items on the state's standardized test more successfully.

Sharing classroom activities with the potential to do so (e.g., Burke, Erickson, Lott, & Obert, 2001) thus became a goal for future work with the LSG. University faculty members reviewing the lesson also noted that understanding what a given representation shows about a system is mathematically valuable, regardless of whether it is explicitly stated in the guidelines for the state's standardized test. Such "representational fluency" (Zbiek, Heid, Blume, & Dick, 2007) can be considered an important learning goal in and of itself.

Inference 2: LSG members needed to develop knowledge of how to avoid portraying graphing calculators as black boxes.

Although graphing calculators open up new learning experiences, such as the ability to generate and compare multiple representations efficiently, they also pose a pedagogical dilemma: When should students be allowed to use the technology, and when should they use paper and pencil? Buchberger (1989) posited a solution to this dilemma in the White Box/Black Box Principle. The principle asserts that when an area of mathematics is new to students, hand calculations are important for building understanding of the concepts being studied. Once the hand calculations become routine, and in some cases cumbersome, students should use the capabilities of the technology to facilitate problem solving.

Doerr and Zangor (2000) posited a slightly different view of black box uses of calculators. They acknowledged that in some situations black box use is detrimental to helping students develop mathematical understanding, but they also gave examples of classroom situations in which students can use graphing calculators to make sense of mathematical concepts before doing hand computations. For instance, using calculators to test conjectures can lead to mathematically rich discussions when students are asked to compare calculator output to predicted results. Hence, the overall context of a lesson and its goals must be taken into account to distinguish between potentially harmful and potentially productive "black box" uses of calculators.

Concerns about potentially harmful black box uses of technology arose in connection with the LSG's second lesson. The lesson introduced the idea that matrices can be used to solve systems of equations. Students were to set up systems of equations for situations presented in word problems and put the systems in matrix form. The students were then told to "take the inverse of the first matrix and multiply it by the 'answer' matrix." Finally, students were to use the calculator to determine the product of the two matrices. They were told that the product was the solution to the system of equations. So, for example, solving the system of equations \(4x - 3y = 15\), \(8x + 2y = -10\) was portrayed as consisting of the following sequence of steps:
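A reconstruction of that sequence, worked out here for concreteness from the procedure just described (the LSG's own step wording and keystrokes are not reproduced), amounts to writing

\[
\begin{bmatrix} 4 & -3 \\ 8 & 2 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} 15 \\ -10 \end{bmatrix}
\]

and then computing

\[
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} 4 & -3 \\ 8 & 2 \end{bmatrix}^{-1}
\begin{bmatrix} 15 \\ -10 \end{bmatrix}
=
\frac{1}{32}
\begin{bmatrix} 2 & 3 \\ -8 & 4 \end{bmatrix}
\begin{bmatrix} 15 \\ -10 \end{bmatrix}
=
\begin{bmatrix} 0 \\ -5 \end{bmatrix},
\]

so the calculator reports the solution \(x = 0\), \(y = -5\) with no attention to why the inverse-and-multiply procedure works.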