Archambault, L., & Crippen, K. (2009). Examining TPACK among K-12 online distance educators in the United States. Contemporary Issues in Technology and Teacher Education, 9(1). https://citejournal.org/volume-9/issue-1-09/general/examining-tpack-among-k-12-online-distance-educators-in-the-united-states

Examining TPACK Among K-12 Online Distance Educators in the United States

by Leanna Archambault, Arizona State University; & Kent Crippen, University of Nevada Las Vegas

Abstract

With the increasing popularity and accessibility of the Internet and Internet-based technologies, along with the need for a diverse group of students to have alternative means of completing their education, there is a major push for K-12 schools to offer online courses, resulting in a growing number of online teachers. Using the Tailored Design survey methodology (Dillman, 2007), this study examines a national sample of 596 K-12 online teachers and measures their knowledge with respect to the three key domains described by the TPACK framework (technology, pedagogy, and content), as well as the combinations of these areas. Findings indicate that knowledge ratings were highest in the domains of pedagogy, content, and pedagogical content, indicating that responding online teachers felt very good about their knowledge in these areas but were less confident when it came to technology. Correlations among the domains within the TPACK framework revealed small relationships between technology and pedagogy and between technology and content (.289 and .323, respectively). However, there was a large correlation between pedagogy and content (.690), calling into question the distinctiveness of these domains. This study presents a beginning approach to measuring and defining TPACK among an ever-increasing number of K-12 online teachers.

 

Although online distance education has become established in higher education, it is a relatively new area within the K-12 field. Recent survey data show that about one third of K-12 public school districts (36%) had students enrolled in online distance education courses during the 2002-2003 school year. Estimates of student enrollment in K-12 online learning programs have increased from 40,000-50,000 students during the 2001-2002 school year to more than 520,000 in the 2004-2005 school year (McLeod, Hughes, Brown, Choi, & Maeda, 2005) to recent projections of over a million students (Cavanaugh & Blomeyer, 2007). The latest prediction is that within 6 years 10% of all high school classes will be offered online, and that by 2019 this figure will increase to 50% (Christensen & Horn, 2008). The movement toward K-12 online distance education is happening for a variety of social, economic, and political reasons, including offering courses at lower cost, offering high-quality courses beyond a limited geographical area, and individualizing content to meet student needs. With the increasing number of virtual schools at the elementary and secondary levels, the need arises to examine the role and preparation of teachers in K-12 online environments. As teacher preparation is brought into the 21st century, the role of the K-12 online instructor is becoming increasingly important.

Pedagogical Content Knowledge

In his landmark paper, Those Who Understand: Knowledge Growth in Teaching, Lee Shulman (1986) introduced the concept of pedagogical content knowledge (PCK). He raised the issue of the need for a more coherent theoretical framework with regard to what teachers should know and be able to do, asking important questions such as, “What are the domains and categories of content knowledge in the minds of teachers?” and “How are content knowledge and general pedagogical knowledge related?” (p. 9). To describe the relationship between content knowledge (or the amount and organization of knowledge of a particular subject matter) and pedagogical knowledge (knowledge related to how to teach various content), Shulman developed the idea of PCK. He defined PCK as going beyond content or subject matter knowledge to include knowledge about how to teach particular content. Within PCK, he included “the most useful forms of representation of those ideas, the most powerful analogies, illustrations, examples, explanations, and demonstrations—in a word, the ways of representing and formulating the subject that make it comprehensible to others” (p. 9).

Shulman also stated that knowledge of what makes a subject difficult or easy to learn is a part of PCK. This means that in order to be able to teach a particular topic effectively, teachers should know the potential pitfalls to which students frequently fall victim, depending on the preconceptions they have developed based on their ages and backgrounds. According to Shulman,

If those preconceptions are misconceptions, which they so often are, teachers need knowledge of strategies most likely to be fruitful in reorganizing the understanding of learners, because those learners are unlikely to appear before them as blank slates. (pp. 9-10)

Within the context of the virtual learning environment, the concept of PCK is particularly relevant. With the shift to a knowledge-building approach to learning, the focus in online teaching necessarily centers on how the course is structured, with special emphasis on the teaching materials used. Teachers in the virtual classroom need to be overtly aware of the common misconceptions surrounding the particular topics within the content they are teaching, so that these misconceptions can be addressed as part of the curriculum and instruction. Online educators also need to be aware of the importance of encouraging and teaching specific self-regulated behaviors to their students to ensure every possible chance for success.

Many strategies for teaching self-regulated behaviors relate specifically to Shulman’s notion of PCK, in that they involve the use of cognitive strategies such as modeling, analogies, and metaphors to aid in understanding the content-related material. Teachers must be able to translate and contextualize information to improve students’ understanding and motivation for learning. In order to be able to create such materials and implement these types of strategies, online teachers need to have not only an excellent grasp of their given content area but also an appreciation of how technology and the online environment affect the content and the pedagogy of what they are attempting to teach. To address such issues, Koehler and Mishra (2005) built on Shulman’s notion of PCK to articulate the concept of technological pedagogical content knowledge (TPCK; referred to in the paper as technology, pedagogy, and content knowledge or TPACK).

Technological Pedagogical Content Knowledge

TPACK involves an understanding of the complexity of relationships among students, teachers, content, technologies, and practices. According to Koehler and Mishra (2005), “We view technology as a knowledge system that comes with its own biases, and affordances that make some technologies more applicable in some situations than others” (p. 132). Using Shulman’s (1986) PCK framework and combining the relationships between content knowledge (subject matter that is to be taught), technological knowledge (computers, the Internet, digital video, etc.), and pedagogical knowledge (practices, processes, strategies, procedures, and methods of teaching and learning), Koehler and Mishra defined TPACK as the connections and interactions between these three types of knowledge.

Good teaching is not simply adding technology to the existing teaching and content domain. Rather, the introduction of technology causes the representation of new concepts and requires developing a sensitivity to the dynamic, transactional relationship between all three components suggested by the TPCK framework. (p. 134)

In examining how teachers should be prepared to teach in online environments, TPACK addresses each of the three major components needed to ensure high quality instruction. This lens offers a way for teacher education programs to begin looking at how these elements are currently covered and how they would need to be altered to specifically meet the needs of teachers entering online classrooms. As Niess (2005) wrote,

TPCK, however, is the integration of the development of knowledge of subject matter with the development of technology and of knowledge of teaching and learning. And it is this integration of the different domains that supports teachers in teaching their subject matter with technology. (p. 510)

Niess also outlined four components that offer a framework for the development of TPACK in teacher education programs: (a) an overarching understanding of teaching a particular subject using technology to facilitate student learning, (b) knowledge of instructional strategies and representations for teaching a particular topic through the use of technology, (c) knowledge of students’ misconceptions, understandings, thinking, and learning in a particular subject matter and how these might be represented using technology, and (d) knowledge of curriculum materials that implement technology to enhance learning in a given content area.

The implications are important for using the TPACK framework to examine issues related to online teaching. Specifically, it allows the researcher to focus on important aspects defined by the extensive literature on high-quality online teaching. As Mishra and Koehler (2006) wrote,

For instance, consider faculty members developing online courses for the first time. The relative newness of the online technologies forces these faculty members to deal with all three factors, and the relationships between them, often leading them to ask questions of their pedagogy, something that they may not have done in a long time. (p. 1030)

Although creating the concept of TPACK by adding the element of technology to Shulman's notion of PCK makes sense on the surface, it remains to be determined if knowledge in each of these domains truly exists and, if so, how these elements can be accurately measured. One of the issues with PCK, and subsequently with TPACK, is that the domains seem confounded and are difficult to separate and measure (Gess-Newsome & Lederman, 1999; McEwan & Bull, 1991). Qualitative methods, such as an in-depth case study, could probe teachers' conceptualizations and implementation of TPACK, but another way to begin examining and measuring TPACK among a large group of teachers is through quantitative methods, specifically a survey methodology using a carefully developed questionnaire. To begin measuring the TPACK framework, this study sought to examine K-12 online teachers' knowledge levels with respect to each of the domains described by the framework, drawing on a total of 596 survey responses.

The following section discusses the methodology of this study in detail, including descriptions of the surveyed population, development of the instrument, piloting of the instrument, and deployment procedures in order to answer the research questions:

  • What is the perceived knowledge level of those who teach in an online environment specific to technology, pedagogy, and content, including the combinations of these domains?
  • What do teachers’ ratings of their perceived knowledge levels related to TPACK say about the framework itself?

Methodology

Survey Population

A nonrandom purposeful sample was used to gather as many online teacher responses as possible. This technique is described by Patton (1990) as the process of selecting specific information-rich cases from which the investigator can learn significant information central to the research. In this case, criterion sampling was used to select participants based on predetermined characteristics, specifically, educators who currently teach at least one class in a state-sanctioned K-12 virtual school. To yield the most representative sample possible, the survey was sent to as many K-12 online distance educators in the United States as possible from as many states as possible. Email addresses for K-12 online distance educators in the United States available to the public through various virtual school Web sites were gathered and compiled. To find these email addresses, searches were conducted for specific state-sponsored schools identified by the Keeping Pace With K-12 Online Learning report (Watson & Ryan, 2006).

This Web-based survey was deployed in January 2008 to 1,795 online teachers employed at virtual schools from across the nation. A prenotification email was sent out informing potential respondents of the survey, followed by an email containing a link to the instrument. Three subsequent reminders were then sent out to nonrespondents over the course of a month. A total of 596 responses from 25 different states were gathered, which represented an overall response rate of 33%. This response rate was considered acceptable and higher than many Web-based surveys (Manfreda, Bosnjak, Berzelak, Haas, & Vehovar, 2008; Shih & Fan, 2008).

Development and Revision of the Instrument

The survey instrument used in this study was first created by the authors in a prior research project surveying online teachers in Nevada (Archambault & Crippen, 2006). Since that project, the instrument underwent numerous revisions during a 2-year time span, including a formative evaluation to better capture data related to the characteristics of K-12 online distance educators. The instrument employed TPACK as a guiding framework for what online teachers should know and be able to do. It included 24 items designed to measure online teachers' knowledge.

Respondents were asked, "How would you rate your own knowledge in doing the following tasks associated with teaching in a distance education setting?" Responses were given in the form of a 5-point Likert-type scale (1 = Poor, 5 = Excellent). Using the domains of content, pedagogy, and technology, as well as each of the overlapping areas created by the blending of these areas (i.e., technological content, technological pedagogy, content pedagogy, and technological pedagogical content knowledge), three to four items were written in each area to measure online teachers' perceptions of their knowledge (see the appendix). These items were written based on definitions provided by Koehler and Mishra (2005) and Shulman (1986). Respondents were also asked to provide open-ended responses regarding their overall experience with K-12 online teaching.

When dealing with conceptual frameworks such as TPACK, construct validity for elements of the model must be established. According to Gall, Gall, and Borg (2003), construct validity is "the extent to which inferences from a test's scores accurately reflect the construct that the test is claimed to measure" (p. 620). Following Dillman's (2007) methodology, items were created by the first author and then reviewed by two technology education experts with extensive online teaching experience. A number of ongoing discussions took place regarding survey items, both at the inception of the original instrument and throughout the revision of the current instrument. Based on feedback from the experts, several changes were made to the instrument. In particular, the formatting of the instrument underwent several revisions, including breaking the survey into five separate Web pages, adding a percentage bar at the top of the survey that showed respondents how much they had completed and how much remained, and creating a mouse-over feature showing the stem of questions. Having experts review the instrument to ensure that items were complete, relevant, and arranged in an appropriate format was important to establish an adequate level of content validity.

Because validity requires that the items adequately measure the proposed constructs and that respondents correctly interpret what each item is asking, piloting of the survey was essential. Piloting of the survey was conducted in cooperation with K-12 online teachers at a local online virtual school. The following section describes the piloting process.

Phase 1 of Think-Aloud Pilot

Although content validity can be established by having the instrument reviewed by experts, construct validity can begin to be verified by using a think-aloud strategy with interview participants while they read and answer survey items (Dillman, 2007; Fowler, 2002). Participants are asked to explain what they are thinking as they go through each question of the instrument. Responses can then be compared from one person to the next to ensure that the questions are being interpreted in the same way, are easy to understand, and are arranged in a logical sequence.

To begin the piloting process, a think-aloud was conducted in two phases with six teachers from a local online virtual school. Each of the teachers interviewed taught within the secondary department, and one of the teachers also served in an administrative capacity. In the first phase of the think-aloud pilot, the first author met with three of the six teachers at the school’s central office. Interviews with the teachers were audio recorded and transcribed verbatim. The purpose of this first phase was to ensure that survey questions were being understood in the same manner and to gather suggested changes that would make specific items clearer and easier to understand.

Teachers participating in the think-aloud understood the instrument formatting, but had a difficult time understanding what they were being asked to rate when each of the items began with a verb, such as “Use a variety of teaching strategies to relate various concepts to students.” To make the items easier to understand, the phrase “My ability to” was added to each stem for clarity. As one teacher stated, “I really think if you could direct these questions back to the user, it would make more sense….If it said, ‘your ability to’ that would help me out here.” In addition, instead of beginning with an item that covered multiple domains, such as PCK, one think-aloud participant suggested that the instrument start with a simpler item that had initially appeared later in the survey. The consensus among the think-aloud participants was that starting with less complex items to help respondents become familiar with the layout would be beneficial.

In addition to changing the order of items a, b, and c, the wording for items w and x was changed to make them clearer, easier to understand, and more active. For example, Item w initially read, "Use technology to create effective representations of content that depart from textbook knowledge." This was changed to "My ability to create effective technological representations of content that depart from textbook knowledge." Item x was also changed from "Meet the overall demands of my online teaching assignment" to "My ability to meet the overall demands of online teaching." This change clarified the term teaching assignment, which had caused some confusion.

Overall, teachers completing the think-aloud pilot provided excellent feedback for improving the instrument. By implementing their suggested changes, the researchers improved the survey to ensure that questions were easily understood and interpreted in the same manner. The goal of gathering and implementing suggested changes that would make specific items clearer and easier to understand was met in this first phase of the pilot.

Phase 2 of Think-Aloud Pilot

Once changes to the survey from the initial think-aloud pilot were made, the second phase of the think-aloud focused specifically on items of the following question: “How would you rate your own knowledge in doing the following tasks associated with teaching in a distance education setting?” The purpose in doing so was to take a first step in establishing construct validity by ensuring that participants were interpreting the items consistently. In addition, the researcher needed to check to see that interpretations of each subscale were in line with the intent of the items.

For the second phase of the think-aloud pilot, the lead researcher met with three different teachers from the local online school who all taught various classes online. They represented subject areas of mathematics, social studies, and computer applications, with an average of 7 years of experience in teaching online. Think-aloud participants were given a printed description of each of the seven subscales: Pedagogy, Content, Technology, Technological Content, Technological Pedagogy, Content Pedagogy, and Technological Pedagogical Content. After discussing the definitions, think-aloud participants were then asked to read each item aloud and consider under which category they thought the item fit.

Participants consistently identified single-domain items of technology correctly, as well as items that covered all three domains (TPCK). The difficulty they encountered was in trying to decide between issues of pedagogy and content. A common theme emerged among the think-aloud participants: they were challenged with separating out specific issues of content and pedagogy. For example, Item d, "My ability to decide on the scope of concepts taught within my class," was interpreted by two of the participants as belonging to the pedagogical content domain rather than the single content domain, as intended by the researcher. The same misinterpretation happened with Item b, "My ability to create materials that map to specific district/state standards." The same two teachers thought that this item related to pedagogy rather than content.

Along with the confusion between content and pedagogy, the other issue was the occasional identification of technology within an item that did not specifically deal with any technology-related issues. For example, one teacher identified Item f, "My ability to distinguish between correct and incorrect problem solving attempts by students," as dealing with elements of all three domains, instead of simply PCK. This participant made the same error for Item j, which may be related to the fact that he teaches computer applications and programming classes, so his content is inextricably linked to technology.

Despite the confusion between content and pedagogy, one of the teachers participating in the think-aloud correctly identified all of the items, with the exception of four items intended as either technological pedagogy or technological content (which he interpreted as having elements of all three, TPCK). Overall, think-aloud participants correctly identified at least one of the domains for all of the items. Specifically, items a, i, k, l, n, q, u, w, and x had 100% agreement among all three online teachers, and their ratings matched the intended domain of the item.

The important consideration from this phase of the pilot was that items were being interpreted consistently from one participant to the next. Even though the researcher had clear notions of the specific domains and the distinctions among them, the online teachers tended to view pedagogy and content as a single, linked domain. This should be noted, especially when interpreting the results. Despite this finding, the three think-aloud participants demonstrated a common understanding and interpretation from item to item.

Reliability

According to Czaja and Blair (2005), "The reliability of data obtained through survey research rests, in large part, on the uniform administration of questions and their uniform interpretation by respondents" (p. 73). Using a Web-based self-administration of the survey instrument ensured a consistent delivery of the survey, and pilot testing assisted in establishing content and construct validity. In addition, subscales used in the original survey developed by Archambault and Crippen (2006) to measure areas related to pedagogy, content, and technology were found to demonstrate a sufficient level of reliability (alpha = .738, .911, and .928, respectively). For the current study, reliability testing in the form of Cronbach's alpha coefficient was conducted for each of the subscales to determine the level of internal consistency. These levels were acceptable (Gall, Gall, & Borg, 2003), ranging from alpha = .699 for the technological content domain to alpha = .888 for the domain of technology (Table 2).
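For readers who wish to reproduce this kind of internal-consistency check, the sketch below shows one way Cronbach's alpha could be computed for each subscale. It is a minimal illustration, not the authors' actual analysis script: the pandas DataFrame layout, the item column names (letters a through x), and the file name are assumptions, and the item-to-subscale mapping simply mirrors the appendix.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents, columns = items).

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
    """
    items = items.dropna()                           # listwise deletion within the subscale
    k = items.shape[1]                               # number of items in the subscale
    item_variances = items.var(axis=0, ddof=1)       # per-item sample variances
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical item-to-subscale mapping, mirroring the appendix (items a-x).
SUBSCALES = {
    "Pedagogy": ["c", "j", "r"],
    "Technology": ["a", "g", "q"],
    "Content": ["b", "d", "m"],
    "Pedagogical Content": ["f", "i", "s", "u"],
    "Technological Content": ["o", "t", "v"],
    "Technological Pedagogy": ["h", "l", "n", "p"],
    "Technological Pedagogical Content": ["e", "k", "w", "x"],
}

if __name__ == "__main__":
    # Assumed file: one row per respondent, one column per item, values 1-5.
    responses = pd.read_csv("tpack_survey_responses.csv")
    for name, columns in SUBSCALES.items():
        print(f"{name}: alpha = {cronbach_alpha(responses[columns]):.3f}")
```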

Data Analysis

Analyses of the resulting data were performed using both descriptive and inferential statistics. Descriptive measures including mean and standard deviation for items a through x were calculated to answer the question, “How would you rate your own knowledge in doing the following tasks associated with teaching in a distance education setting?” These descriptive statistical measures were also tabulated and reported for each subscale, which include the following categories: Pedagogy, Content, Technology, Technological Content, Technological Pedagogy, Content Pedagogy, and Technological Pedagogical Content. Inferential statistics including Pearson’s product-moment correlation were used to determine the relationship among teacher ratings of their knowledge levels along the TPACK framework.
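The following sketch illustrates, under the same assumed DataFrame layout as above, how the per-item descriptive statistics and the Pearson correlations among domain scores could be generated. It is only an illustration of the analysis described here, not the statistical package or scoring procedure the authors actually used.

```python
import pandas as pd
from scipy import stats

def item_descriptives(responses: pd.DataFrame) -> pd.DataFrame:
    """Per-item number of responses, mean, and standard deviation (cf. Table 1)."""
    return pd.DataFrame({
        "responses": responses.count(),
        "mean": responses.mean(),
        "sd": responses.std(ddof=1),
    })

def subscale_scores(responses: pd.DataFrame, subscales: dict) -> pd.DataFrame:
    """Average each respondent's items within a domain to obtain one score per subscale."""
    return pd.DataFrame(
        {name: responses[cols].mean(axis=1) for name, cols in subscales.items()}
    )

def domain_correlations(scores: pd.DataFrame) -> list:
    """Pearson product-moment correlations among domain scores, with two-tailed p-values
    (cf. Table 3 and its 0.01 significance flag)."""
    domains = list(scores.columns)
    results = []
    for i, first in enumerate(domains):
        for second in domains[i + 1:]:
            pair = scores[[first, second]].dropna()
            r, p = stats.pearsonr(pair[first], pair[second])
            results.append((first, second, round(r, 3), p))
    return results
```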

Results

Online teachers responding to the survey represented 25 different states, including Alaska, Arkansas, Arizona, California, Colorado, Connecticut, Florida, Georgia, Idaho, Illinois, Kansas, Minnesota, Nevada, North Carolina, North Dakota, Oklahoma, Oregon, Pennsylvania, South Carolina, South Dakota, Texas, Utah, Virginia, Washington, and Wisconsin. Of these states, the majority of responses came from Pennsylvania (14.4%), Idaho (13.6%), Arizona (10.2%), Nevada (9.1%), Colorado (7.2%), and Florida (7.2%).

To address the question of perceived knowledge level of those who teach in an online environment specific to technical expertise, online pedagogy, and content area, respondents were asked, “How would you rate your own knowledge in doing the following tasks associated with teaching in a distance education setting?” Twenty-four items along the areas of technology, pedagogy, content, and the combination of these areas were asked, and the scale for answering consisted of 1 (Poor), 2 (Fair), 3 (Good), 4 (Very Good), and 5 (Excellent).

The mean across all items was 3.81. The range of responses was 4, with a minimum response of 1, a maximum response of 5, and a standard deviation of .939. The number of respondents, mean, and standard deviation are reported for each item in Table 1 and for each domain in Table 2.

Table 1
Summary of Descriptive Statistics Results for the Question, "How Would You Rate Your Own Knowledge in Doing the Following Tasks Associated With a Distance Education Setting?"

Subscale                             Item   Responses   Mean   Standard Deviation
Pedagogy                             c      556         4.18   .765
Pedagogy                             j      547         4.01   .769
Pedagogy                             r      542         3.92   .802
Technology                           a      559         3.20   1.12
Technology                           g      555         3.44   1.12
Technology                           q      545         3.04   1.14
Content                              b      558         3.98   .929
Content                              d      554         4.05   .888
Content                              m      542         4.03   .840
Pedagogical Content                  f      555         3.98   .834
Pedagogical Content                  i      553         3.91   .772
Pedagogical Content                  s      542         4.23   .810
Pedagogical Content                  u      541         4.04   .781
Technological Content                o      541         3.81   1.04
Technological Content                t      533         4.01   .937
Technological Content                v      537         3.79   1.11
Technological Pedagogy               h      554         3.87   .955
Technological Pedagogy               l      542         3.76   .934
Technological Pedagogy               n      538         3.57   1.12
Technological Pedagogy               p      541         3.40   1.10
Technological Pedagogical Content    e      555         3.79   .999
Technological Pedagogical Content    k      545         3.53   .931
Technological Pedagogical Content    w      541         3.76   .983
Technological Pedagogical Content    x      548         4.07   .874

Table 2
Summary of Descriptive Statistics for Subscales for the Question, "How Would You Rate Your Own Knowledge in Doing the Following Tasks Associated With a Distance Education Setting?"

Domain                               Number of Items   Number of Responses   Mean   Standard Deviation   Cronbach's Alpha
Pedagogy                             3                 1,645                 4.04   .779                 .772
Technology                           3                 1,659                 3.23   1.12                 .888
Content                              3                 1,654                 4.02   .886                 .761
Pedagogical Content                  4                 2,191                 4.04   .805                 .799
Technological Content                3                 1,611                 3.87   1.03                 .699
Technological Pedagogy               4                 2,175                 3.65   1.03                 .772
Technological Pedagogical Content    4                 2,189                 3.79   .947                 .785

In addition to descriptive statistics measuring online teachers’ perceptions of their knowledge with relationship to TPACK, correlations among each of the domains described by the framework were examined. These correlations are reported in Table 3.

Table 3
Correlations Among Subscale Variables for the Question, "How Would You Rate Your Own Knowledge in Doing the Following Tasks Associated With a Distance Education Setting?"

                                        1.      2.      3.      4.      5.      6.      7.
1. Pedagogy
2. Content                              .690**
3. Technology                           .289**  .323**
4. Pedagogical Content                  .782**  .713**  .278**
5. Technological Pedagogy               .544**  .540**  .488**  .561**
6. Technological Content                .488**  .557**  .555**  .526**  .743**
7. Technological Pedagogical Content    .595**  .544**  .570**  .609**  .787**  .773**

**Correlation is significant at the 0.01 level (2-tailed).

Discussion

K-12 online teachers responding to the current survey rated their knowledge at the highest levels for the scales of pedagogy (4.04), content (4.02), and pedagogical content (4.04). These mean scores indicate that teachers rated as very good their abilities to use a variety of teaching strategies, to create materials that map to district standards, and to plan the scope and sequence of topics within their courses, as well as skills requiring aspects of both pedagogy and content, such as the ability to recognize student misconceptions about a particular topic and the ability to distinguish between correct and incorrect problem solving techniques on the part of students.

The highest rated individual item also fell within the category of pedagogical content: the ability to comfortably produce lesson plans with an appreciation for the topic (Item s), with an average response of 4.23. This result suggests that these online teachers are most comfortable with aspects of traditional teaching and that they have the most experience with skills associated with face-to-face teaching.

Knowledge levels dropped by almost an entire point (.81) from the domains of pedagogy and content to technology. Online teachers responding to this survey felt that their knowledge associated with troubleshooting computer hardware or software problems was not as strong as their knowledge related to pedagogy and content. The lowest rated individual item fell within the area of technology: respondents rated their ability to assist students with troubleshooting technical problems with their personal computers (Item q) at 3.04, which translates to a rating of Good. When technology was combined with content or pedagogy, scores rose to 3.87 and 3.65, respectively. These ratings are not as high as those associated with pedagogy and content alone, but not as low as the domain of technology by itself. In examining all three domains together, online teachers rated their skills at 3.79.

In examining the perceived knowledge levels of K-12 online teachers within the TPACK framework, it becomes evident that these teachers felt strongly about their abilities to perform as traditional teachers. They were less sure of themselves when it came to their skills associated with technology and with using technology to convey content to students, but they still felt proficient at what they do. The theme of struggling with and learning new technology is also evident throughout teachers' open-ended responses on the survey. As one teacher described it,

My experience with online teaching can be described as better than I thought. I always believed I would be much better in person than through the computer, but I have found that I can still have relationships with students in this manner. I am not very competent with the computer but I am very strong in my subject matter. My students tend to be very good with the computer and not as competent in the Latin, so we make a good pair!

This sentiment seems to encapsulate how surveyed online teachers felt with regard to their knowledge within the TPACK framework. Their ratings suggest that their skills are strong within their content area and their ability to teach. The challenge comes when trying to apply what they know to the best way to communicate content to students through the use of technology. Despite this, they continue to find what works best, and they are determined to keep trying different methods and strategies in order to do so.

Six respondents specifically mentioned the ever-changing nature of online teaching, and the fact that they never taught their courses exactly the same way. They viewed their classes as works in progress. This finding is consistent with Lowes’ (2005) findings that K-12 online teachers continually made changes to improve their courses, especially the courses that they had previously taught face to face.

Within the current study, online teachers’ self-reported knowledge levels were highest specific to items related to pedagogy, content, and pedagogical content. This result could be for a variety of reasons, including their previous teaching experience within the traditional classroom. It could also suggest that teachers may have been best prepared by their teacher preparation program with regard to pedagogy and content and this, together with their experience in the classroom, led to the highest ratings of knowledge along these same domains. It could also be related to the activities of traditional teachers on a daily basis and that they are, therefore, most experienced in planning lessons, using teaching strategies to convey content, mapping content to district standards, and assessing students’ understanding of various topics. These are the foci of teacher education programs and make up a significant part of the instructional day. It is not surprising, then, that these areas had the highest ratings.

In addition to examining knowledge levels of responding K-12 online teachers, this study also looked at the correlations among each of the domains of the TPACK framework, including technology, pedagogy, content, pedagogical content, technological content, technological pedagogy, and technological pedagogical content knowledge. While the TPACK framework is a relatively new conceptual model (Koehler & Mishra, 2005) based on an older, more developed construct of PCK (Shulman, 1986), there is a lack of research to measure how these domains interact with one another. With the extensive literature base on PCK, this seems a logical place from which to begin examining TPACK. However, this literature is fraught with confusion regarding whether or not PCK is an actual domain. According to Gess-Newsome and Lederman (1999), while PCK has the makings of a good model, including providing a useful organizational structure for examining teacher knowledge, it has problematic issues with its ability to discriminate between its componential parts (precision) and its ability to provide a useful explanation of data (heuristic power). As the authors explained,

Precision can be judged by the discriminating value of the constructs included in the model, the relationship among constructs, and the match of this organization to the research data. Although PCK creates a home for the “unique” knowledge held by teachers (Shulman, 1987, p. 8), identifying instances of PCK is not an easy task. Within this volume, most authors agree that the PCK construct has fuzzy boundaries, demanding unusual and ephemeral clarity on the part of the researcher to assign knowledge to PCK or one of its related constructs (p. 10).

This model becomes even more complicated when adding technology to PCK and its inherent “fuzziness.” This complexity is evident from the data gathered from the current study. Correlations between pedagogy and content knowledge responses were high (.690) as were those between pedagogical content and content (.713) and pedagogical content and pedagogy (.782). These strong correlations confirm the questions raised by McEwan and Bull (1991) concerning whether or not pedagogy and content are separate fields. As they put it, “We are concerned, however, that this distinction between content knowledge and pedagogic content knowledge introduces an unnecessary and untenable complication into the conceptual framework on which the research is based…” (p. 318).

However, it should be noted that the high correlations between pedagogy and content fields may be a result of the survey items being confounded to begin with. This issue was found during the piloting of the instrument itself. Despite the efforts of the researchers to ensure that items related to pedagogy dealt specifically with teaching strategies and methods, while content domain items covered curriculum issues, online teachers who were interviewed saw them as linked. In particular, this thinking was evident with items b – “My ability to create materials that map to specific district/state standards” and d – “My ability to decide on the scope of concepts taught within my class.” Both items were challenging for think-aloud participants to correctly identify. They viewed Item b as dealing with pedagogy, and Item d as covering aspects of PCK.

Interestingly, think-aloud participants showed difficulty separating the domains of pedagogy and content, but they did so consistently. It may be that teachers, especially those with a high level of teaching experience, view their content as being inextricably linked to the pedagogy they use to teach a particular topic. Because these areas are the most familiar aspects of teaching, encompassing the day-to-day instructional activities of educators, it would stand to reason that online teachers would rate their knowledge high on items related to both pedagogy and content.

High correlations were also found between technological content and technological pedagogy (.743), and between technological pedagogical content and both technological pedagogy (.787) and technological content (.773). These correlations call into question whether or not technological content, technological pedagogy, and technological pedagogical content knowledge are distinct domains as well. In contrast, the low correlations between technology and pedagogy and between technology and content (.289 and .323, respectively) are more in line with what would be expected from separate domains.

Although the framework of TPACK is helpful from an organizational standpoint, especially because it brings the important area of content to the discussion, the data from this study confirm that it faces the same problems as PCK. The TPACK framework has practical appeal, providing an analytical structure for researching what teachers should know and be able to do and highlighting the importance of content knowledge when incorporating the use of technology. These are important elements, as a greater emphasis is currently needed on the use of technology as it pertains to specific subject matter. As Koehler and Mishra (2008) elaborated, "Instead of applying technological tools to every content area uniformly, teachers should come to understand that the various affordances and constraints of technology differ by curricular subject-matter content or pedagogical approach" (p. 22).

However, this appeal is tempered by the difficulty in measuring each of the constructs described by the framework. The inability to differentiate between and among these constructs is significant, as it calls into question its precision, or whether or not the domains exist independently. It also diminishes the heuristic value of the model, specifically, the extent to which the framework helps researchers predict outcomes or reveal new knowledge (Gess-Newsome & Lederman, 1999).

From the current data, it seems that from the outset, measuring each of these domains is complicated, muddled, and messy. The correlation data emerging from the current study do not support the distinction between and among each of the domains described by the TPACK framework. Again, this result did not come as a total surprise, as the three online teachers who participated in the think-aloud pilot of the survey instrument experienced difficulty in trying to decide between issues of pedagogy and content. Despite efforts on the part of the researchers to ensure that all pedagogy items dealt specifically with teaching strategies and methods, while content items covered materials, including their scope and sequence and mapping to state/district standards, these domains were seen as part and parcel of the basic activities of teaching rather than as distinct fields.

Although TPACK makes practical sense and does offer a useful organizational structure, adding the element of technology to Shulman’s (1986) notion of pedagogical content knowledge befuddles an already complex model. This study is not able to empirically validate the framework, but TPACK does present a way to organize key areas of high quality instruction incorporating the use of technology, along with offering important implications for examining issues related to online teaching. Specifically, it assisted the researchers in focusing on important aspects of effective teaching in an online distance education environment. However, further study is necessary to determine if and how the TPACK model can be validated or reconceptualized.

Limitations

Although a tremendous amount of data can be gained via a national quantitative study, a survey is inherently limited by its items and scales. As with all methods of data collection, Internet surveys have their own disadvantages (Fowler, 2002). One of these is the lack of personal contact associated with administering the survey, along with the absence of an incentive to encourage participation. This limitation potentially resulted in a lower response rate (33%) than would occur with other types of surveys (Shih & Fan, 2008). The response rate significantly limits the ability of the researcher to generalize to the overall population of K-12 online teachers. This limited ability to make generalizations is a primary limitation of the current study. Accordingly, it should be noted that the results of the current study reflect a sample of K-12 online teachers and do not necessarily reflect the population as a whole.

Another limitation of this study is the fact that survey research consists of self-report rather than the measurement of observable behavior. Self-report is susceptible to a certain degree of bias. Despite the use of methods suggested by Fowler (2002) and Gall et al. (2003) to reduce the potential for social desirability bias, such as wording survey items with neutral language, self-administration of the instrument, and ensuring the anonymity of responses, it is possible that such bias occurred.

Finally, additional construct validation of the items used to measure the TPACK framework would be beneficial. These constructs are still in need of more extensive and thorough validation. This validation could be achieved through a factor analysis of the items, followed by a hierarchical multiple regression using the resulting factors to inform the TPACK model. This approach was beyond the scope of the current study and is an area for future research. The model remains to be validated, and data from the current study suggest that perhaps a different structure is needed to describe the domains of technology, pedagogy, content, and their possible interactions. Although a difficult pursuit, it is an important area of research to test, validate, and modify the models that influence the way knowledge is conceptualized.
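As a rough illustration of the follow-up analysis suggested above, the sketch below runs an exploratory factor analysis on the 24 items and then a blockwise (hierarchical) regression on the resulting factor scores. The tooling (scikit-learn and statsmodels), the choice of Item x as an illustrative outcome variable, and the block ordering are all assumptions made for demonstration, not a prescribed procedure.

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

def factor_regression(responses: pd.DataFrame, n_factors: int = 7) -> None:
    """Exploratory factor analysis of the TPACK items, followed by blockwise OLS regression."""
    items = responses.dropna()
    predictors = items.drop(columns=["x"])   # Item x serves here as an illustrative outcome
    outcome = items["x"]

    # Step 1: do the items recover distinct factors resembling the seven TPACK domains?
    fa = FactorAnalysis(n_components=n_factors, random_state=0)
    scores = pd.DataFrame(
        fa.fit_transform(predictors),
        index=items.index,
        columns=[f"factor_{i + 1}" for i in range(n_factors)],
    )
    loadings = pd.DataFrame(
        fa.components_.T, index=predictors.columns, columns=scores.columns
    )
    print(loadings.round(2))  # inspect which items load on which factor

    # Step 2: hierarchical regression, entering factor blocks and comparing R-squared.
    blocks = [
        ["factor_1"],
        ["factor_1", "factor_2", "factor_3"],
        list(scores.columns),
    ]
    for block in blocks:
        model = sm.OLS(outcome, sm.add_constant(scores[block])).fit()
        print(block, "R^2 =", round(model.rsquared, 3))
```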

Conclusion

The field of K-12 online distance education continues to expand, particularly through the proliferation of virtual schools throughout the United States, and a growing number of educators find themselves teaching in virtual classrooms. The purpose of this study was to gather data related to K-12 online teachers' views of their knowledge in relationship to the TPACK conceptual framework. Respondents' ratings of their own knowledge relative to the TPACK framework were highest in the domains of pedagogy, content, and pedagogical content, indicating that, overall, they felt very good about their knowledge related to these domains. Correlations among the domains within the TPACK framework revealed small correlations between the domains of technology and pedagogy, as well as technology and content (.289 and .323, respectively). In contrast, there was a large correlation between pedagogy and content (.690).

This study attempted to use the TPACK model as a framework for measuring the perceptions of a group of teachers who theoretically had knowledge related to each of the represented domains. However, this proved to be a somewhat difficult and complex process. What is evident from the results of this study is that teachers feel strongly about their ability to deal with issues related to pedagogy and content and more hesitant when it comes to issues dealing with technology. This result is likely related to the activities that traditional teachers perform on a daily basis, such as planning lessons, using teaching strategies to convey content, mapping content to district standards, and assessing students' understanding of various topics, which are the emphasis of teacher education programs.

These findings have important implications, especially for the field of teacher preparation, which will need to adapt to prepare future teachers for settings other than the traditional classroom. Such preparation includes integrating technology throughout content courses, as well as providing field experiences in which the use of technology can be contextualized. Through this study, a better understanding of K-12 online teachers' views of knowledge in relationship to TPACK now exists, in addition to a beginning effort to measure aspects of the TPACK framework itself. Although a vast amount of future research remains to be conducted in this area, the current study represents a first step in examining a useful organizational structure describing the complex relationship between and among the essential areas of technology, pedagogy, and content.

 

References

Archambault, L. M., & Crippen, K. J. (2006). The preparation and perspective of online K-12 teachers in Nevada. In T. Reeves & S. Yamashita (Eds.), Proceedings of the World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 1836-1841). Chesapeake, VA: Association for the Advancement of Computing in Education.

Cavanaugh, C., & Blomeyer, B. (2007). What works in K-12 online learning. Eugene, OR: International Society for Technology in Education.

Christensen, C. M., & Horn, M. B. (2008). How do we transform our schools? [Electronic version]. Education Next, 13-19. Retrieved from http://www.hoover.org/publications/ednext/18606339.html

Czaja, R., & Blair, J. (2005). Designing surveys:  A guide to decisions and procedures (2nd ed.). Thousand Oaks, CA: Sage Publications.

Dillman, D. A. (2007). Mail and Internet surveys: The tailored design method (2nd ed.). New York: Wiley.

Fowler, J. (2002). Survey research methods (3rd ed.). Newbury Park, CA: SAGE.

Gall, M. D., Gall, J. P., & Borg, W. R. (2003). Educational research: An introduction (7th ed.). Boston: Pearson Education.

Gess-Newsome, J., & Lederman, N. G. (Eds.). (1999). Examining pedagogical content knowledge. Dordrecht, The Netherlands: Kluwer.

Koehler, M., & Mishra, P. (2005). What happens when teachers design educational technology? The development of technological pedagogical content knowledge. Journal of Educational Computing Research, 32(2), 131-152.

Koehler, M., & Mishra, P. (2008). Introducing TPCK. In AACTE Committee on Innovation and Technology (Ed.), Handbook of technological pedagogical content knowledge (TPCK). New York: Routledge.

Manfreda, K. L., Bosnjak, M., Berzelak, J., Haas, I., & Vehovar, V. (2008). Web surveys versus other survey modes: A meta-analysis comparing response rates. International Journal of Market Research, 50(1), 79-104.

McEwan, H., & Bull, B. (1991). The pedagogic nature of subject matter knowledge. American Educational Research Journal, 28(2), 316-334.

McLeod, S., Hughes, J. E., Brown, R., Choi, J., & Maeda, Y. (2005). Algebra achievement in virtual and traditional schools. Naperville, IL: North Central Regional Educational Laboratory, Learning Point Associates.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017-1054.

Niess, M. L. (2005). Preparing teachers to teach science and mathematics with technology: Developing a technology pedagogical content knowledge. Teaching and Teacher Education, 21(5), 509-523.

Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage Publications, Inc.

Shih, T., & Fan, X. (2008). Comparing response rates from Web and mail surveys: A meta-analysis. Field Methods, 20(3), 249-271.

Shulman, L. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4-14.

Watson, J., & Ryan, J. (2006). Keeping pace with K-12 online learning: A review of state-level policy and practice. Naperville, IL: Learning Point Associates.

Author Note:

Leanna Archambault
Arizona State University
Email: [email protected]

Kent Crippen
University of Nevada Las Vegas
Email: [email protected]

 

 


 

Appendix A

Survey Items by Domain

Pedagogical Knowledge

(j) My ability to determine a particular strategy best suited to teach a specific concept.

(c) My ability to use a variety of teaching strategies to relate various concepts to students.

(r) My ability to adjust teaching methodology based on student performance/feedback.

Technological Knowledge

(a) My ability to troubleshoot technical problems associated with hardware (e.g., network connections).

(g) My ability to address various computer issues related to software (e.g., downloading appropriate plug-ins, installing programs).

(q) My ability to assist students with troubleshooting technical problems with their personal computers.

Content Knowledge

(b) My ability to create materials that map to specific district/state standards.

(d) My ability to decide on the scope of concepts taught within my class.

(m) My ability to plan the sequence of concepts taught within my class.

Technological Content Knowledge

(o) My ability to use technological representations (i.e., multimedia, visual demonstrations, etc.) to demonstrate specific concepts in my content area.

(t) My ability to implement district curriculum in an online environment.

(v) My ability to use various courseware programs to deliver instruction (e.g., Blackboard, Centra).

Pedagogical Content Knowledge

(f) My ability to distinguish between correct and incorrect problem solving attempts by students.

(i) My ability to anticipate likely student misconceptions within a particular topic.

(s) My ability to comfortably produce lesson plans with an appreciation for the topic.

(u) My ability to assist students in noticing connections between various concepts in a curriculum.

Technological Pedagogical Knowledge

(h) My ability to create an online environment which allows students to build new knowledge and skills.

(l) My ability to implement different methods of teaching online.

(n) My ability to moderate online interactivity among students.

(p) My ability to encourage online interactivity among students.

Technological Pedagogical Content Knowledge

(e) My ability to use online student assessment to modify instruction.

(k) My ability to use technology to predict students' skill/understanding of a particular topic.

(w) My ability to use technology to create effective representations of content that depart from textbook knowledge.

(x) My ability to meet the overall demands of online teaching.

 
