{"id":849,"date":"2007-09-01T01:11:00","date_gmt":"2007-09-01T01:11:00","guid":{"rendered":"http:\/\/localhost:8888\/cite\/2016\/02\/09\/developing-preservice-literacy-teachers-observation-skillstwo-stories-two-technologies\/"},"modified":"2016-06-04T01:37:06","modified_gmt":"2016-06-04T01:37:06","slug":"developing-preservice-literacy-teachers-observation-skillstwo-stories-two-technologies","status":"publish","type":"post","link":"https:\/\/citejournal.org\/volume-7\/issue-4-07\/general\/developing-preservice-literacy-teachers-observation-skillstwo-stories-two-technologies","title":{"rendered":"Developing Preservice Literacy Teachers\u2019 Observation Skills:Two Stories, Two Technologies"},"content":{"rendered":"
Investigations into what distinguishes an expert from a novice consistently indicate that experts are able to observe and identify patterns significant to their field. For example, when expert chess players see a chess board, they can recognize salient information, identify patterns, and make sophisticated decisions about which chess piece to move next (Chase & Simon, 1973; deGroot, 1965). Similar abilities have been found among expert physicians, electronics technicians, physicists, architects, baseball players, and teachers, among others (Berliner, 1986; Bransford, Franks, Vye, & Sherwood, 1989; Carter, Sabers, Cushing, Pinnegar, & Berliner, 1987; Chase & Chi, 1980; Egan & Schwartz, 1979; Leinhardt & Greeno, 1986; Livingston & Borko, 1989; Norman, Jacoby, Feightner, & Campbell, 1979).<\/p>\n
When expert literacy teachers observe a child reading, they are able to identify salient information about the child\u2019s reading development and make instructional decisions that are developmentally appropriate for that child. In contrast, when preservice literacy teachers observe a child reading, they are likely to state that the child is a \u201cgood\u201d or \u201cbad\u201d reader\u2014with little evidence or ability to explain what makes this child\u2019s ability good or bad. One of the challenges in teacher education, therefore, is to help preservice teachers develop observation skills that allow them to identify salient information about a child\u2019s literacy development so they can make appropriate instructional decisions. This ability is referred to as \u201csystematic observation.\u201d<\/p>\n
The purpose of this report is to tell the stories of two technologies used by teacher educators to address the challenge of developing systematic observation among preservice literacy teachers. These stories include descriptions of each technology and the results of sequential mixed methods studies used to examine the preservice teachers\u2019 development of systematic observation. The importance of systematic observation to literacy education is discussed first, followed by a discussion of the challenges of helping preservice teachers develop systematic observation skills. Finally, the two technologies and the reasons they are being used to address some of these challenges are described.<\/p>\n
Systematic Observation<\/p>\n
Standardized and criterion-referenced assessments are typically the means by which students\u2019 literacy achievement is measured on national, state, and local levels. Such assessments provide political leaders, school district administrators, and parents with information that allows comparisons of students\u2019 literacy achievement in relation to larger populations or defined levels (i.e., basic, proficient, or advanced). These assessments, however, often denote isolated skill achievement, may not reflect the curriculum taught in the classroom, and focus the teacher\u2019s attention on teaching to the test rather than on meeting individual students\u2019 literacy needs. As a result, standardized and criterion-referenced assessments do not reveal individual students\u2019 growth and needs in language and literacy learning; they do not inform instructional decisions, nor do they indicate the learning occurring on a day-to-day or week-to-week basis (Wilde, 1996).<\/p>\n
In order to overcome the limitations of standardized and criterion-referenced assessments, preservice teachers must learn to apply assessment skills that examine individual children\u2019s literacy development on an ongoing and regular basis. Assessment skills, such as systematic observation, provide literacy teachers with the information needed for meeting all students\u2019 literacy needs and for daily instructional decision making.<\/p>\n
Literacy teachers can use systematic observation to assess how a child works on a literacy task. For example, through observation, information can be learned about a student\u2019s (a) competencies and confusions, (b) strengths and weaknesses, (c) processes and strategies, and (d) understandings of literacy processes (Clay, 1993). Such information should be regularly recorded and used to scaffold (Bruner, 1978) and maximize each student\u2019s literacy growth.<\/p>\n
The literature is replete with studies and discussions of systematic observation. Clay (1993) described systematic observation from the standpoint of teachers using a variety of observation assessments to examine specific elements of young children\u2019s literacy growth. Clay described the Observation Survey (i.e., running records, letter identification, concepts about print, word tests, writing, and hearing sounds), which emphasizes the operations and strategies used in early reading development. Careful record keeping of the child\u2019s performance allows a teacher to tailor instruction so that the child acquires new literacy strategies and transfers their use from one situation to another.<\/p>\n
Goodman (see Wilde, 1996) elaborated on systematic observation as a regular assessment practice and used the term \u201ckidwatching\u201d to describe the means by which teachers explore a child\u2019s language development and knowledge of language. Kidwatching reveals up-to-date information about a child\u2019s knowledge of language, as well as the role miscues play in language development. Goodman argued that the analysis of miscues provides the basis for discovering a child\u2019s knowledge of reading and language. Miscue analysis allows teachers to examine a child\u2019s successful and unsuccessful use of literacy strategies and then provide the support necessary for their further development. As children try to make sense of and organize their knowledge, the teacher, through observation, can make instructional decisions that help children rethink and build their understanding of language and literacy.<\/p>\n
Leu and Kinzer (1999) described a framework useful for teachers\u2019 understanding of children\u2019s reading processes. This framework lists the following components of reading: decoding knowledge, vocabulary knowledge, syntactic knowledge, discourse knowledge, metacognitive knowledge, automaticity, and affect. Although a variety of models address similar components in various ways (see Ruddell & Unrau, 2004), it is argued here that systematic observation should include information about each component. In other words, a literacy teacher should observe and document the entire range<\/em> of reading components.<\/p>\n In addition, teachers engaged in systematic observation of children\u2019s reading abilities should substantiate their observations with examples<\/em> of what a child says or does with regard to each reading component. Teachers commonly use examples of children\u2019s reading to document growth, reconsider their conceptions of children\u2019s growth, and conduct conferences with the children, parents, and principal (Anderson, 2000). Given examples, the child, parents, and principal may be able to contribute confirming as well as disconfirming examples that, in turn, help all involved to understand the child\u2019s reading progress. Teachers should be able to cite examples to demonstrate that their evaluation of a child is justified. Finally, systematic observation of reading involves not just a range of components and a set of examples but also pulling these examples together for an overall understanding of a child\u2019s reading abilities.<\/p>\n In summary, systematic observation has been described as an assessment process that enables teachers to understand children\u2019s literacy knowledge in terms of what they know and what could be taught. 
Researchers offer several conclusions that help to focus a teacher\u2019s understanding during systematic observation: (a) meaning is actively constructed during reading, (b) errors inform teachers about readers\u2019 reading development and about how they interpret text, (c) readers use cueing systems and strategies for constructing meaning as they read, (d) all readers use reading strategies and cueing systems in similar ways to construct meaning, and (e) differences in readers\u2019 backgrounds (i.e., culture, experience, language) influence meaning construction (Marek & Goodman, 1985). These conclusions are clearly evident in the systematic observation concepts just described.<\/p>\n Challenges Encountered in Teacher Education<\/p>\n Although systematic observation is a valuable skill for literacy teachers, teacher educators have a variety of challenges to overcome in order to help preservice literacy teachers develop such skills. For example, one strategy that teacher educators can use is to model systematic observation. Obviously, modeling requires that teacher educators, preservice teachers, and children be present simultaneously. This is a challenge because teacher preparation programs do not typically have school-aged children on campus. A teacher educator can arrange to observe in an elementary classroom with preservice teachers and thereby model systematic observation. It is, however, cumbersome for 20-40 preservice teachers to observe in one classroom of 20 or so children. This strategy, therefore, is problematic.<\/p>\n Teacher educators can ask cooperating teachers to model systematic observation during field experiences. However, unless there is consistent communication between the cooperating teachers and teacher educators, the cooperating teachers may not be modeling the systematic observation foci and strategies that the preservice teachers are learning in their teacher education courses. 
Further, because preservice teachers are commonly placed in several different schools for field experiences, a teacher educator may need to maintain consistent communication with 10-40 cooperating teachers (depending on the number of preservice teachers in a course and the ratio between cooperating teachers and preservice teachers). Maintaining the necessary communication with the numerous cooperating teachers can be a daunting challenge for the teacher educator.<\/p>\n Another alternative is for teacher educators to supervise preservice teachers\u2019 field experiences and model systematic observation. However, because preservice teachers work in different classrooms (often in different schools), teacher educators may be able to travel only a handful of times during a semester to multiple field sites to observe each preservice teacher and model systematic observation.<\/p>\n This paper tells the stories of two technologies being used to address the aforementioned challenges. Story 1 involves the use of multimedia cases from a CD-based series entitled Children As Literacy Kases (ChALK, see http:\/\/web.missouri.edu\/~umccoechalk\/<\/a>). Story 2 involves the use of video vignettes that demonstrate systematic observations. These technologies focused on individual children as they read and wrote in the classroom during literature, math, science, and social studies instruction. The instructors selected these technologies to address the aforementioned challenges. Specifically, using ChALK and the selected videos allowed the instructors to discuss systematic observation with preservice teachers without overwhelming a classroom of students with 20+ observers. Using these technologies allowed the instructors to model systematic observation without traveling to each preservice teacher’s field school. Because modeling could occur during university courses, the instructors did not have to ask cooperating teachers to model systematic observation. 
Herein, the modeling could be aligned with the courses and address specific strategies that may otherwise not be demonstrated during the limited time that preservice teachers are in the field.<\/p>\n Potential for Improving Teacher Education: Reasons to Use Technology<\/p>\n This section discusses a variety of reasons that literacy teacher educators are turning to technology to improve teacher education. The two technologies described in this study were used for sociocultural reasons. Specifically, our culture is increasingly technological. Technology was used in these courses so that preservice teachers could see examples of how to incorporate technology into classroom settings. The National Center for Education Statistics (NCES; U.S. Department of Education, 2005) reported that in 2004 nearly 100% of U.S. public schools had Internet access. In 2001, however, only 33% of teachers reported being well prepared to use computers or the Internet in their classrooms. The U.S. Department of Education concluded that this lack of preparation often resulted in teachers not using the technology available in classrooms.<\/p>\n The U.S. Congress funded an initiative (Preparing Tomorrow\u2019s Teachers to Use Technology, n.d.) to help teacher educators incorporate technology into their instruction and, thereby, help teachers feel more comfortable with technology and with using it in their teaching. Similarly, the National Council for Accreditation of Teacher Education (NCATE) determined that teacher education has an important role to play in helping K-12 teachers use technology in their classrooms. In 2001, NCATE added technology standards to its evaluation of teacher education programs (see http:\/\/www.ncate.org<\/a>). The multimedia cases and video vignettes described in this study allowed preservice teachers to experience educational technologies.<\/p>\n The two technologies were used for cognitive reasons. 
Specifically, each technology was used to overcome inert knowledge (Bereiter & Scardamalia, 1985; Bransford, 1989; Whitehead, 1929) regarding systematic observation by contextualizing observations via multimedia or video. There are anecdotal stories (e.g., Silverman & Welty, 1992), books (e.g., Atwell, 1987; Avery, 1993; Harp, 1993; Routman, 1994), and multimedia products (e.g., CTELL, see Teale, Leu, Labbo, & Kinzer, 2002, and Reading Classroom Explorer, see Hughes, Packard, & Pearson, 2000a, 2000b) that allow users to learn about ways to teach children to read and write. In other words, several instructional materials are available that allow preservice teachers to understand different ways to set up a literacy curriculum (e.g., Alverez et al., 2005). In contrast, the multimedia cases and video vignettes used in these courses were selected by the instructors so that preservice teachers could systematically observe children while they read and write. These technologies were used because their focus is on the child instead of the teacher or the classroom.<\/p>\n Another cognitive reason these technologies were used was to provide anchored instruction (Cognition and Technology Group at Vanderbilt, 1991) regarding systematic observation. The preservice teachers could help one another think about systematic observation from each of their perspectives and, thereby, foster reflective thinking (Sch\u00f6n, 1983; Shulman, 1992) about systematic observation. Hughes et al. (2000a) asked preservice teachers to describe their experience with video vignettes. The participants reported that when classmates viewed the same vignettes, their class discussions were enriched because they shared a common anchored experience. Baker and Wedman (2000) found similar reports with a group of preservice teachers who used multimedia cases. 
Specifically, the participants stated that the use of multimedia cases in their course (which served as a shared anchor) helped them share and discuss their field experiences (which were different for each participant).<\/p>\n Finally, the two technologies were used for pedagogical reasons. The multimedia cases and video vignettes allowed instructors to shift from lectures about systematic observation to problem-based generative learning (Cammack & Holmes, 2002; Christenson, Gervin, & Sweet, 1991; Risko, McAllister, Peter, & Bigenho, 1994). In other words, the preservice teachers could observe the same child (via video or multimedia) and generate what they thought was salient information. Problem solving was required because the preservice teachers had to determine when and how children demonstrated reading abilities.<\/p>\n Hughes et al. (2000b) conducted a study in which teachers enrolled in a graduate literacy course had the option of using video vignettes to support their course work. They found that teachers who relied on the video to solve problems posed by the instructor were better able to support their claims about teaching reading. In other words, the video vignettes fostered problem-based generative learning. Baker and Wedman (2000) found that preservice teachers enrolled in a course using multimedia cases went from generating 42% of the discussion to generating 100% of the discussion within five class meetings. That is, the multimedia cases fostered generative learning experiences. In a phenomenological study, Baker (2005) found that 100% of a group of preservice teachers using multimedia cases perceived that they had grown as literacy teachers. Moreover, they attributed their growth more to the use of multimedia cases than to their field experiences. 
The problem-solving opportunities are one aspect of why the multimedia cases were valued.<\/p>\n Although the two technologies were used for sociocultural, cognitive, and pedagogical reasons, we wanted to know whether the preservice teachers developed systematic observation skills. The next section describes the specific technologies and how each was used, followed by the results of pretest and posttest measures of systematic observation.<\/p>\n The following questions guided this study:<\/p>\n Story 1: Using ChALK to Develop Systematic Observation Skills<\/p>\n Description of ChALK<\/p>\n ChALK (see http:\/\/web.missouri.edu\/~umccoechalk\/<\/a>) is a multimedia software package that, at the time of this study, consisted of eight CD-ROMs containing reading and writing samples of three first-grade classmates: Helen, Kenneth, and Zane. These samples were collected in the children\u2019s classroom from September through May of their first-grade year and capture reading and writing during literature, math, science, and social studies instruction.<\/p>\n The interface offers the following features: (a) a list of the child\u2019s reading and writing samples, (b) a video window, (c) a scanned artifact window, (d) a scenario window that contains an explanation of the video setting, (e) the ability to sort by date and content area, (f) the ability to create random access to portions of video clips we call \u201cBookmarks,\u201d and (g) icons indicating whether the sample features reading or writing (see Figure 1). More specifically, the list of the child\u2019s work features titles of what the child read or wrote, the date the child did the reading or writing, and the duration of the video associated with each reading or writing. Clicking on a title allows users to access video, scanned images, and scenarios pertaining to that title. 
Each title includes an icon designating whether it is a reading sample (icon of a book) or a writing sample (icon of a pencil).<\/p>\n The video window allows users to see the edited digital video clips of the child reading or writing. The scanned artifact window allows users to see what the child was reading or writing. The scenario includes a description of the video setting and text from books the child is reading in the video. In addition, users can print out artifacts and scenarios and thereby create running records and anecdotal notes that can be systematically analyzed. The sorting feature lets users access the samples by date and content area. This feature allows users to pull up the child\u2019s work from any given month.<\/p>\n Because reading and writing occur throughout the elementary curriculum (i.e., literature, math, science, and social studies), users can review the child\u2019s reading and writing samples in a particular content area. The sort feature allows users to see a list of, for example, only math or only science items. Users can also combine the sort features. For example, users could sort for December social studies and see only those items. The Bookmark feature allows users to create a list of video segments to which they want to return without having to sort through all of the video again. For example, users may identify clips they want to discuss further with classmates or the instructor. They can create their own list of Bookmarks, click on any Bookmark, and randomly access the clips they want to review.<\/p>\n <\/p>\n Figure 1<\/strong>. ChALK interface<\/em>.<\/p>\n How ChALK Was Used<\/p>\n The ChALK-based section of preservice teachers used a computer-lab classroom throughout the semester. After the pretest was administered (class sessions 2 and 3), the instructor told the preservice teachers they would be using ChALK throughout the semester to develop systematic observation skills. 
The instructor modeled how to use ChALK by accessing Zane\u2019s first reading sample, orally reading the related scenario, showing the video clip, and showing the related artifacts. A discussion ensued about the need for systematic observation, and the instructor recommended various books and chapters the preservice teachers could read in order to learn what to look for when observing children reading. During the next class, the preservice teachers shared what they had read and started using the Zane CDs at their desks.<\/p>\n Throughout the semester, the ChALK users were asked to analyze and keep track of Helen\u2019s, Kenneth\u2019s, and Zane\u2019s literacy development. At the beginning of the semester, the observations were done in class. After the group was familiar with the interface, they analyzed specified ChALK segments for homework. Some participants chose to do the homework in small groups, and others chose to do it independently.<\/p>\n The instructor scaffolded ChALK observations by providing study guides (Barnes, Christensen, & Hansen, 1994) that specified observation tasks. For example, the first study guide asked, \u201cWhile watching Zane read and write in November, how would you describe his literacy abilities?\u201d Another study guide asked, \u201cIt is January and you are Zane\u2019s teacher. What would you plan for him tomorrow? Explain.\u201d Later in the semester the study guide asked, \u201cBased on your observations of Zane throughout the school year, how has Zane grown in his literacy abilities? Come to class ready to have an end-of-year conference with Zane\u2019s parents.\u201d The students responded to the study guides using analysis techniques such as anecdotal notes, checklists, and running records. 
During class, the ChALK users discussed their answers and cited data from ChALK that supported their conclusions about Helen\u2019s, Kenneth\u2019s, and Zane\u2019s literacy development.<\/p>\n Story 2: Using Video to Develop Systematic Observation Skills<\/p>\n Description of Videos<\/p>\n The videos were noncommercial, unedited segments of first-grade children writing and reading in small-group and individual settings during classroom literacy activities. The segments, which focused on individual children, were selected to provide examples of a range of reading and writing abilities and included the teacher working with the children in a variety of ways.<\/p>\n For example, one segment showed the teacher instructing a small group of emergent readers using a story from a basal reader. The teacher led the children through a picture walk, discussed specific words throughout the picture walk (e.g., compound words, words with endings), and read the story aloud with the children. This particular video segment focused on one child whose sight vocabulary and decoding skills were beginning to develop. The video was an over-the-shoulder view of the child, who could be seen using picture and context clues to figure out unknown words, using his finger to point to each word, and covering parts of a word to decode it. The audio portion of the video also captured the child\u2019s talk as he worked to recognize words and to respond to the teacher\u2019s directions and questions. The child could also be heard reading the story aloud with the group.<\/p>\n A second video showed a child reading aloud to the teacher, who was completing a running record that included oral reading and comprehension. In this segment, the child read a storybook the teacher had identified as being appropriate for the child\u2019s ability. After the child read the story aloud, the teacher engaged him in discussion by asking questions. 
The video recorded the child\u2019s oral reading as well as all of the discussion occurring between the child and teacher.<\/p>\n A third video showed a child writing in the science area. The child was observing a lizard and writing his observations in a science notebook. He used invented spelling to write his words and also asked another student how to spell some words. The video included over-the-shoulder views of the child\u2019s writing and the talk occurring between him and other students.<\/p>\n All video segments had characteristics similar to the three described above. In all instances, the segments were close-up shots focusing on one child, and the child\u2019s reading and talk were clearly audible. The teacher was included in most of the segments, whether during instruction, conferencing, or assessment. The print material from which the child read was visible or available in hard copy to the preservice teachers. The child\u2019s writing was also available in hard copy to the preservice teachers.<\/p>\n How Videos Were Used<\/p>\n The videos were used to provide a classroom context for the course content. The content was organized by broad topics that included literacy theory, word identification, comprehension, writing, and assessment\/evaluation. The instructor implemented each topic over several class sessions using the following format:<\/p>\n Systematic observation of children\u2019s literacy growth was emphasized throughout the course. Observation techniques included anecdotal notes written during the administration of running records and during teacher\/student conferences. Observation included identifying characteristics such as a child\u2019s literacy strengths, weaknesses, processes, attitudes, interests, and work habits.<\/p>\n For example, when the preservice teachers were learning to administer running records, the instructor first explained what a running record is used for, what it measures, and how it is administered and evaluated. 
The preservice teachers then practiced coding while listening to a child orally read a story on audiotape.<\/p>\n Next, the preservice teachers viewed the video segments twice, both times during class. The first viewing focused on the teacher\u2019s technique for administering the assessment. The preservice teachers were asked to observe how the teacher related to the child, how she administered the running record, and how she discussed the story with the child. The preservice teachers wrote their observations while viewing the segment. The preservice teachers and the instructor then discussed their observations.<\/p>\n The second viewing of the same video segment focused on the child. The preservice teachers coded the story text as the child read it and wrote observations of the child\u2019s reading behaviors. During the next class session, the instructor and preservice teachers discussed the scoring of the running record and the implications the evaluation had for instruction. The preservice teachers were assigned to complete a running record during their upcoming field experience and to submit it to the instructor for feedback.<\/p>\n Video segments were used with all of the broad topics included in the course. They were used (a) to help preservice teachers develop systematic observation skills that consisted of discovering a child\u2019s literacy behaviors and instructional needs and (b) to examine a teacher\u2019s literacy practice. The video segments were always viewed during class time, followed by in-depth discussion guided by the instructor.<\/p>\n Research Design<\/p>\n This study used sequential mixed methods (Tashakkori & Teddlie, 1998). Specifically, qualitative data were collected (written answers to open-ended questions). Using qualitative typological analyses (Hatch, 2002; LeCompte & Preissle, 1993), the written answers were broken into discrete data units, which were tallied and computed into frequency counts. 
Finally, to compare pre- and posttest scores, the frequency counts were converted to z<\/em>-scores, and two-tailed t<\/em>-tests were computed.<\/p>\n To ensure validity, participants were randomly enrolled in one of three sections of the same block of courses, each of which adhered to the same course content and objectives. All instructors used the same instructional format, which included a variety of readings, demonstrations, and reflective writings directed toward particular objectives. All participants engaged in field experiences to plan and teach lessons to children in first- or second-grade classrooms. To develop systematic observation skills, one section used ChALK while another section used videos. The instructor of the third section opted not to participate in this study.<\/p>\n Participants and Setting<\/p>\n The participants (N<\/em> = 54) were junior Elementary Education majors, of whom 49 were female and 5 were male. They attended a large midwestern U.S. university and met the criteria for admission into the College of Education, which included a minimum 2.75 grade point average, an ACT score of at least 21, completion of 8 hours of introductory education courses, and a minimum of 20 hours of observation in K-12 classrooms. Twenty-six participants were randomly enrolled in the ChALK-user section, and 28 participants were randomly enrolled in the video-user section.<\/p>\n Each section functioned as an intact cohort randomly enrolled in a 9-hour block of courses: Emergent Literacy (3 credit hours), Emergent Language (2 credit hours), Children\u2019s Literature (2 credit hours), and field experience (2 credit hours). The focus of this block of courses was on literacy skills, assessment processes, and instructional strategies appropriate for teaching first-grade through third-grade children. 
In order to maintain content consistency in the courses across different sections, instructors of the same course in each block used the same course goals and instructional objectives to develop learning experiences.<\/p>\n For example, each instructor of the Emergent Literacy course used the following course topics to ensure that the preservice teachers\u2019 learning experiences were focused on the same literacy content: (a) the theoretical foundations that support literacy acquisition, (b) emergent reading and writing processes, (c) instructional strategies supporting emergent readers, (d) formal and informal assessment strategies, (e) writing for different purposes and audiences, (f) classroom management and organization, and (g) curriculum and teaching strategies based on students\u2019 interests, cultural and ethnic backgrounds, and physical and mental abilities.<\/p>\n Field experience for both sections occurred in first- or second-grade classrooms in public school settings for a 2-hour period on Tuesday and Thursday mornings throughout the academic semester. During the field experience, the participants worked with small groups of children with whom they performed assessments (e.g., observations, running records, individual conferences) and planned and implemented literacy lessons. The lessons incorporated the literacy curriculum and teaching procedures learned in the literacy block of courses. After each field experience session, the participants wrote reflections examining the strengths and weaknesses of their lessons and observations of the children\u2019s literacy abilities. The reflections were submitted to field experience supervisors, who observed the participants in the classroom and provided suggestions and comments for improvement.<\/p>\n The ChALK users developed systematic observation skills by discussing and analyzing the students shown via ChALK, while the video users developed observation skills via video segments. 
Both groups used their observations to identify students\u2019 developing literacy strengths and needs and to discuss appropriate instruction.<\/p>\n Data Sources<\/p>\n Both groups took the same pretest and posttest at the beginning and end of the semester. Both tests involved watching the same 13-minute video of a first-grade child orally reading a story to his teacher. The child looked at all of the illustrations in the book, predicted what the book might be about, and read the book aloud to his teacher. The teacher asked the child about his word-attack strategies when he got stuck on words and occasionally told him words he did not know. After viewing the video, the participants responded in writing to the question, \u201cWhat do you notice about the child\u2019s reading?\u201d The participants had no time limit for their responses.<\/p>\n Data Analysis<\/p>\n To address the stated research questions, four measures of the data were analyzed (see Table 1). The first question focused on whether participants observed a range of the components of reading. To address this question, we used typological analyses (Hatch, 2002), in which the data were \u201cdivided into groups or categories on the basis of some canon for disaggregating the whole phenomenon under study\u201d (LeCompte & Preissle, 1993, p. 257).<\/p>\n <\/a>Table 1<\/strong> <\/p>\n\n
\n
\nResearch Questions, Measures Analyzed, and Examples of Scoring <\/em><\/p>\n