Abebe, F., & Trainin, G. (2024). Predicting technological, pedagogical, and content knowledge (TPACK) formation in elementary math education. Contemporary Issues in Technology and Teacher Education, 24(2). https://citejournal.org/volume-24/issue-2-24/mathematics/predicting-technology-pedagogy-and-content-knowledge-formation-in-elementary-math-education

Predicting Technological, Pedagogical, and Content Knowledge (TPACK) Formation in Elementary Math Education

by Fitsum Abebe, The Grainger College of Engineering, University of Illinois Urbana-Champaign; & Guy Trainin, University of Nebraska–Lincoln


This study validated measures for elementary preservice teachers’ technological, pedagogical, and content knowledge (TPACK) for elementary mathematics and evaluated the extent to which technology knowledge, pedagogy knowledge, and content knowledge were related to the formation of TPACK. The study was guided by the TPACK framework and adopted a widely used survey instrument. Participants were elementary preservice teachers at the end of a mathematics method class at a midwestern US teacher preparation program. The study used confirmatory factor analysis and structural equation modeling to analyze measurement and predictive models. The confirmatory factor analysis validated a four-factor correlated measure of technological knowledge, pedagogical knowledge, content knowledge, and TPACK. The structural equation model indicated technological knowledge and pedagogical knowledge significantly predicted TPACK in elementary mathematics, but content knowledge did not. Preservice elementary school teachers indicated that their technological expertise was lower than their pedagogical knowledge, content knowledge, and TPACK. The results underscore the importance of strengthening TPACK in elementary teacher preparation programs with a focus on mathematics, enhancing the proficiency of preservice teachers in utilizing technology for effective mathematics teaching. This is particularly critical due to rapid technological change and shifts in students’ needs and competencies.

The importance of developing technology integration in K-12 is a growing focus for teacher educators and teacher education programs (Clausen, 2020; Trainin et al., 2018). Teacher education programs are being challenged to promote technology integration in an era of multiple demands while facing budgetary constraints (Borthwick & Hansen, 2017; Voithofer & Nelson, 2021). Voithofer and Nelson observed that teacher educators are progressively incorporating technology into various aspects of the curriculum. They also highlighted a relatively modest level of adoption of technological, pedagogical, and content knowledge (TPACK) and significant variability in conceptualizations of TPACK.

Further evidence that teacher education programs are struggling to innovate is that fewer than 100 of the more than 2,000 teacher education programs in the US have signed the educator preparation programs for digital equity and transformation pledge (International Society for Technology in Education [ISTE], 2023). To understand how teacher education programs prepare K-12 teachers with effective technology integration skills, we explored the impacts of an award-winning elementary teacher education program (Chirichella, 2022; Trainin et al., 2018). We examined the program’s impact on preservice TPACK (Koehler & Mishra, 2009) in elementary mathematics, a significantly less studied area.

Theoretical Framework

Mishra and Koehler (2006) explained that TPACK represents knowledge of effective technology integration to teach content with meaningful pedagogical strategies. The TPACK framework has become widely accepted among researchers and practitioners because it explains the complex interrelations among technology, pedagogy, and content knowledge that guide educators in integrating technology in classrooms (Zou et al., 2022). Koehler and Mishra (2009) argued that teachers should have knowledge and competencies in three main components, namely content knowledge (CK), pedagogical knowledge (PK), and technological knowledge (TK), in order to integrate technology and facilitate learning activities.

Following the original papers by Mishra and Koehler (2006), a wave of studies has used the TPACK framework to examine preservice and in-service teachers’ ability and efficacy in integrating (or infusing) technology into instruction (Malik et al., 2019; Wang et al., 2018). While TPACK has become a popular construct, relatively few quantitative studies have examined preservice TPACK in teaching elementary mathematics.

This study employed a quantitative approach to provide precise measurements of the different knowledge components, allowing for comparisons and statistical inferences about integrating technology effectively in elementary mathematics teaching. Validating and establishing the psychometric properties of a preservice TPACK measure creates a set of tools that can easily be scaled up to evaluate and highlight areas where preservice teachers might need additional support. This can guide teacher education programs in tailoring their curricula to address these specific needs in technology integration.

The quantitative results could be used in comparative and longitudinal studies in technology integration practices at a low cost. Further, the quantitative findings can inform policy decisions in teacher education programs, and clear quantitative evidence regarding areas of strength and weakness in TPACK can guide curriculum development and instructional strategies.

Review of Literature

Measuring TPACK

A significant number of studies used TPACK as a theoretical framework to understand preservice teachers’ technology integration knowledge; however, some studies have raised concerns about the ability to measure the different constructs in TPACK (e.g., Archambault & Barnett, 2010; Scherer, 2018; Schmidt et al., 2009). Koehler et al. (2012) reviewed a wide range of approaches to measuring TPACK and identified 141 different instruments. They concluded that “many of the TPACK instruments did a poor job of addressing the issues of reliability and validity” (p. 25). They found that 69% of studies did not establish reliability, and 90% did not establish validity.

Part of the challenge in examining the validity and reliability of the TPACK framework stems from the difficulty in identifying all seven knowledge domains that the framework proposed. Schmidt et al. (2009) were the first to measure TPACK with preservice teachers. They validated their instrument with 124 preservice teachers, showing acceptable internal consistency and factor loading. However, this initial paper did not include much about convergent or discriminant validity. To accept the constructs as originally theorized, replication studies were needed with considerably larger sample sizes and special attention to creating a consistent factor structure. Subsequent studies have not been able to consistently identify all seven theorized knowledge domains (e.g., Scherer et al., 2017; Zelkowski et al., 2013).

Zelkowski et al. (2013) developed a TPACK instrument for secondary mathematics preservice teachers. They analyzed 300 self-reports from 15 institutions using exploratory and confirmatory factor analyses; the confirmatory model identified only four knowledge domains: technological knowledge (TK), pedagogical knowledge (PK), content knowledge (CK), and TPACK.

Scherer et al. (2017) investigated preservice teachers’ (n = 665) self-efficacy in technology related to TPACK dimensions of TK, technological content knowledge (TCK), technological pedagogical knowledge (TPK), and TPACK. After comparing different confirmatory measurement models, they applied a bi-factor model identifying a general TPACK factor and only TK as a “group factor,” while the other four subdomains were not identified.

In a study conducted with preservice special education elementary teachers on integrating technology in mathematics and science instruction, Kaplon-Schilis and Lyublinskaya (2020) conducted exploratory and confirmatory factor analyses. The results showed that the TPACK construct was independent of TK, PK, and CK in mathematics and science.

In summary, TPACK studies have not established the validity and reliability of the full factor structure; different studies identified different TPACK domains, with little consistency between them. As a result, we focused this study on the most often validated constructs: TK, PK, CK, and TPACK. We employed confirmatory factor analysis (CFA) to confirm the factor structure of a TPACK measure hypothesized on TPACK theoretical grounds and previous research. The analysis provided evidence for the validity of the four-factor structure of the TPACK measure, thereby enhancing the credibility and robustness of the predictive model’s findings.

Elementary Mathematics TPACK

As the concept of TPACK emerged, researchers started using the framework to explore TPACK in different school knowledge domains, including elementary mathematics. Niess et al. (2010) used a qualitative interpretive case study with 12 K-8 mathematics and science teachers. The authors used the data to describe the trajectory of in-service teachers’ TPACK growth during a semester, starting with accepting TPACK, adapting TPACK, and finally exploring TPACK.

Polly (2014) explored the development of TPACK among in-service teachers engaged in technology and mathematics workshops within their district. Over the course of a year, the author studied in-service teachers instructing lower elementary mathematics, employing inductive qualitative analyses of observational data on the technologies utilized, mathematical tasks assigned, and challenges the teachers encountered during technology integration. Findings revealed a desire and necessity among in-service teachers to enhance their understanding and utilization of technologies for mathematics instruction.

In another study, Urbina and Polly (2017) examined how elementary school teachers integrated technology into their mathematics teaching in one-to-one device integration. Classroom observations indicated that students rarely used their devices. Observations and interviews with the teachers revealed that when teachers used devices, it was for low-level computation activities only. The researchers stressed the importance of supporting teachers who integrate technology by expanding their TPACK in teaching elementary mathematics.

Musgrove et al. (2021) examined the role of in-service elementary teachers’ TPACK (n = 242) in one-to-one technology use in different subjects, including mathematics. The results indicated that TPACK significantly moderated the ease of use and perceived usefulness of one-to-one technology integration. They, therefore, concluded that elementary teachers’ TPACK could strengthen one-to-one technology integration for mathematics.

In summary, these studies demonstrate the significance of cultivating TPACK among in-service teachers as a crucial element in effectively integrating technology when teaching elementary mathematics. TPACK is not a fixed skill set but rather a dynamic and evolving proficiency that demands continuous development and enhancement. It is crucial to cultivate TPACK among preservice teachers and build a mindset of lifelong learning and professional adaptability. This approach is essential for enabling preservice teachers to remain updated on emerging technologies and pedagogical strategies for the content they teach throughout their teaching careers.

This study focused on measures of preservice TPACK, which educator preparation programs can employ to assess TPACK development. It also delved into the interconnectedness among the four TPACK knowledge domains and examined factors that may impact preservice TPACK. Researchers could replicate this study with in-service elementary teachers, potentially enhancing professional development in elementary mathematics TPACK.

Mathematics TPACK in Preservice Teachers

Agyei and Voogt (2015) examined the development of TPACK among preservice teachers enrolled in an instructional technology semester course with a focus on mathematics content. The cohort of preservice teachers (n = 104) was instructed to integrate theory and practice while collaborating on lesson preparation. Through analysis of data gathered from lesson plans, observations, and self-reports regarding TPACK and attitudes toward technology, the authors found evidence of enhanced TPACK among the preservice teachers. Notably, the preservice teachers attributed the most significant contribution to their TPACK development to feedback received from their teacher educators and peers.

Bulut and Işıksal (2019) examined preservice elementary mathematics teachers’ perceptions of TPACK in geometry using quantitative research methods. The results showed that preservice elementary mathematics teachers perceived higher competency in CK, PK, and pedagogical content knowledge (PCK) and felt less competent in their TCK and TPACK. Kaplon-Schilis and Lyublinskaya (2020) conducted a multiple linear regression analysis in their study of special education teachers; results showed that TK, PK, and CK were not significant predictors of TPACK.

Trainin et al. (2018) reported on a multicohort study spanning 5 years, which investigated the development of TPACK among elementary preservice teachers within the framework of a redesigned teacher preparation program. The study analyzed mixed-method data from 891 participants, revealing a consistent increase in TPACK, TK, and the frequency of technology integration in mathematics across different cohorts. Additionally, the results suggested that the demonstration of TPACK practices by teacher educators and cooperating teachers had a beneficial effect on the development of TPACK among preservice teachers.

In summary, relatively few quantitative investigations in TPACK have addressed elementary mathematics content. The few studies that employed quantitative or mixed methods to assess preservice teachers’ TPACK development in the mathematics content area (e.g., Bulut & Işıksal, 2019; Kaplon-Schilis & Lyublinskaya, 2020; Lee & Hollebrands, 2008; Smith et al., 2016) had somewhat consistent results that call for further study.

Integrating technology into instruction is a critical teaching skill. The TPACK framework illustrates that the intersection of technology, pedagogy, and content knowledge must be a key focus. However, measurement challenges persist due to the complex interrelation of these domains and a lack of reliable and valid assessment of all dimensions. Despite this challenge, researchers have used TPACK successfully to study technology integration in various subjects, including elementary mathematics, with both preservice and in-service teachers. TPACK can enhance technology integration, but more quantitative research is needed to better understand and effectively apply TPACK in teacher education. As a result, this study sought to answer the following questions:

  1. What is the degree to which elementary preservice teachers’ TK, PK, CK in mathematics, and TPACK are identified as four interrelated constructs?
  2. To what extent do TK, PK, and CK in mathematics relate to elementary preservice teachers’ TPACK in the context of a science, technology, engineering, and mathematics (STEM) oriented elementary teacher education program?

Key Concepts and Terminologies 

Technological Knowledge

Technological knowledge refers to elementary preservice teachers’ knowledge of using technology in their daily teaching activities.

Pedagogical Knowledge

Pedagogical knowledge refers to instructional strategy knowledge that preservice teachers use in elementary school classrooms.

Content Knowledge

Content knowledge refers to elementary mathematics content knowledge.

Technological, Pedagogical, and Content Knowledge

Technological, Pedagogical, and Content Knowledge refers to preservice teachers’ technology integration knowledge in elementary school classrooms.

Method

Participants



The participants in this study (n = 239) were from four cohorts of undergraduate preservice elementary teachers in a teacher education program at a large Midwestern university. Ninety-five percent were 19 to 22 years old, and 90% were female; 91% were White non-Hispanic, 6% identified as Hispanic, 2% as Black, and 1% as another ethnicity/race. These demographics are common in traditional elementary teacher education programs in the US (National Center for Education Statistics, 2023).

STEM Block and Technology Experience

The participants were preservice teachers who had completed four semesters of general education requirements in a 4-year certification program. They were in the third year of their undergraduate degree and their first semester in the professional phase, referred to as the “STEM block.” The STEM block included mathematics, mathematics pedagogy, science pedagogy, an instructional technology course, and a corresponding practicum experience (Thomas et al., 2019). The STEM elements of the program were developed over more than 10 years and have won recognition from the National Science Foundation for their mathematics content and pedagogy and from AACTE for their innovative practices. Trainin et al. (2018) described the program’s development.

The instructional technology course introduced students to TPACK by blending practical, hands-on learning with pedagogical and social justice perspectives. It established a foundational understanding of innovative learning technologies, providing a basis for pedagogy instructors to develop TPACK within their content areas. The course consistently evolved to align with changing devices, software, and the affordances of educational technologies (Thomas & Trainin, 2019).

The integrated STEM semester incorporated established models of STEM integration by linking STEM disciplines within communities of practice. It defined STEM integration as a cohesive effort across all STEM areas, emphasizing purposeful connections to enhance learning. Constructionist theory underpinned the learning experiences, emphasizing play, creativity, and minimalistic teaching to foster exploratory learning among preservice elementary teachers, thus equipping them to apply similar methods with elementary students.

The STEM block featured two main themes woven throughout the courses: robotics/coding and making. Robotics was primarily highlighted in the instructional learning technology course and supported in the mathematics and science methods courses, encouraging elementary preservice teachers to consider diverse applications in standards and classroom settings. The making theme promoted collaboration and integration across disciplines, culminating in an engineering design challenge that, unlike other assignments, was integrated into scheduled class activities rather than assigned as external homework.

The mathematics course cluster focused on teaching and learning mathematics in a pedagogy course, a content course about mathematical thinking, and a field experience course. The mathematics pedagogy method course exposed students to a diverse array of technologies aimed at enhancing mathematics education. Instructors demonstrated and integrated tools like screencasting, virtual manipulatives, and math-oriented games into the curriculum. Throughout the semester, technology integration was methodically scaffolded within lesson planning, teaching sessions, and reflective assignments. This approach aimed to guide students in selecting technologies that align with effective mathematics teaching practices and curriculum objectives (Thomas et al., 2019).

Data Collection

The survey data were collected using online survey instruments during the mathematics pedagogy course at the end of the semester. We adopted the Schmidt et al. (2009) self-report instrument. The original survey was designed for a seven-dimensional TPACK scale, covering four content areas (mathematics, science, social studies, and literacy) with 45 items. Each knowledge domain had between four and 14 items.

In this study, we focused on four main knowledge domains (TK, CK, PK, and TPACK) as they relate to elementary mathematics. We chose these because numerous studies in different contexts could not identify all seven constructs (e.g., Voogt et al., 2013; Zelkowski et al., 2013), and some questioned the existence of the subdomains (e.g., Zou et al., 2022). The items used a 5-point Likert scale from strongly disagree to strongly agree.

Technological Knowledge

The TK scale has six items, for example, “I know how to solve my own technical problems,” and “I can learn technology easily.”

Pedagogical Knowledge

The PK scale has seven items, for example, “I know how to assess student performance in a classroom,” and “I can adapt my teaching based upon what students currently understand or do not understand.”

Content Knowledge

The CK scale has three items, for example, “I have sufficient knowledge about mathematics,” and “I can use a mathematical way of thinking.”

Technological, Pedagogical, and Content Knowledge

The TPACK scale has five items, for example, “I can select technologies to use in my classroom that enhance what I teach, how I teach, and what students learn,” and “I can use strategies that combine content, technologies, and teaching approaches that I learned about in my coursework in my classroom.”

Data Analysis

We conducted descriptive statistics, scale reliability, and correlation analyses to ensure data quality and used the data to validate the TPACK measurement and predictive models. We used CFA to validate the TPACK measures and structural equation modeling (SEM) to predict TPACK. We used SPSS 26 for descriptive statistics, scale reliability, and correlations; Mplus 7.4 (Muthén & Muthén, 1998-2015) was used for the CFA and SEM.

When using CFA and SEM, we required a minimum of three items for each knowledge domain (TK, PK, CK, and TPACK). In CFA and SEM, fewer than three indicators may not adequately capture the complexity and nuances of a latent construct, leading to less reliable estimates. Having at least three indicators per latent variable allows the model to be identified and enhances the reliability and validity of the latent constructs, so that the model’s parameters (including factor loadings, variances, and covariances) can be estimated without ambiguity.
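The identification argument can be illustrated with a simple degrees-of-freedom count. The sketch below (function name illustrative, not from the study) counts observed variances and covariances against free parameters for a single-factor model with the factor variance fixed to 1 for scaling:

```python
def one_factor_df(k):
    """Degrees of freedom for a single-factor CFA with k indicators:
    observed moments, k*(k+1)/2, minus free parameters, 2*k
    (k loadings + k residual variances; factor variance fixed to 1)."""
    observed_moments = k * (k + 1) // 2
    free_parameters = 2 * k
    return observed_moments - free_parameters

# Two indicators: df = -1 (under-identified);
# three indicators: df = 0 (just-identified);
# four or more: df > 0 (over-identified).
for k in (2, 3, 4, 6, 7):
    print(k, one_factor_df(k))
```

With three indicators the model is just identified (zero degrees of freedom), which is why a three-item scale such as CK yields a saturated single-factor model.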

For these and other reasons (e.g., increased power and precision of parameter estimates; cf. Marsh, Hau, Balla, & Grayson, 1998), methodologists recommend that latent variables be defined by a minimum of three indicators to avoid this possible source of under-identification. (Brown, 2015, pp. 60-61)

Results


Descriptive Statistics and Scale Reliabilities

We conducted item-level descriptive statistics and correlational analyses as a preliminary step before testing the measurement models. The item analysis showed that skewness was within the acceptable range, between -1.293 and -0.368 (SE = 0.157). The kurtosis values were also within the acceptable range, between -0.560 and 4.378 (SE = 0.314). The skewness indicates a slight left (negative) skew, common in self-report data, and the kurtosis values suggest a moderately peaked distribution (as was found in Trainin et al., 2018).

Kline (2016) reviewed relevant studies and proposed that, while there are no precise benchmarks for the extent of moderate nonnormality, empirical results can provide a guideline: absolute skewness values below 3 and kurtosis values below 10 do not significantly impact the analyses. Further, this study used robust maximum likelihood estimation to correct for the slight nonnormality of the data in the SEM analysis (as recommended in Brown, 2015; Kline, 2016).
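As a hedged illustration of this screening rule, the sketch below computes moment-based skewness and excess kurtosis (without the small-sample corrections SPSS applies, so values may differ slightly from the reported estimates) and checks them against Kline’s guidelines:

```python
def skewness(x):
    """Moment-based skewness: m3 / m2**1.5."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    return m3 / m2 ** 1.5

def excess_kurtosis(x):
    """Moment-based excess kurtosis: m4 / m2**2 - 3."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / m2 ** 2 - 3

def within_kline_guidelines(x):
    """Screen an item against |skewness| < 3 and |excess kurtosis| < 10
    (rough guidelines attributed to Kline, 2016)."""
    return abs(skewness(x)) < 3 and abs(excess_kurtosis(x)) < 10

# A symmetric 5-point Likert response pattern passes the screen.
print(within_kline_guidelines([1, 2, 3, 4, 5]))  # True
```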

The Pearson correlations revealed significant correlations between most of the items, ranging between 0.131 and 0.714 (see Table 1). Tabachnick and Fidell (2019) noted that correlation values higher than .7 raise multicollinearity concerns and that values close to 1 are a sign of singularity. In this study, the correlations between items were in the accepted range, indicating neither multicollinearity nor singularity.

Table 1
Correlation Matrix for All Items in the Study (n =239)


The reliability of the four constructs (latent variables) was calculated using Cronbach’s alpha; all values were above .8, well within the accepted range for research. The latent variable correlations were in the acceptable range, between .401 and .815. The highest correlation was observed between the PK and TPACK latent variables (r = .815), and the lowest between TK and PK (r = .401; see Table 2). The standardized correlations (all < .85) suggested that the latent variables were sufficiently distinct, with no multicollinearity concerns, to examine them as separate constructs (Clark & Watson, 1995; Henseler et al., 2015; Kline, 2016). TK had the lowest observed mean, M = 3.82, SD = .85. The highest observed mean was for CK, M = 4.19, SD = .59, followed by PK, M = 4.18, SD = .59, and TPACK, M = 4.15, SD = .64 (see Table 2).

Table 2
TPACK Construct Descriptives, Correlations, and Reliabilities

Construct | M (SD) | 1 | 2 | 3 | 4
1. TK (Technological Knowledge) | 3.82 (.85) | (.880) | .422 | .401 | .495
2. CK (Content Knowledge) | 4.19 (.59) | | (.883) | .776 | .695
3. PK (Pedagogical Knowledge) | 4.18 (.59) | | | (.892) | .815
4. TPACK | 4.15 (.64) | | | | (.812)
Note. Reliability is expressed on the diagonals. All correlations are statistically significant at p < .001. (n = 239)
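Cronbach’s alpha, reported on the diagonal above, can be computed directly from item scores. This is a minimal sketch with made-up data, not the study’s items:

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum(Var_i) / Var_total),
    where items is a list of per-item score lists aligned by respondent."""
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((v - m) ** 2 for v in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(sample_var(it) for it in items)
    return k / (k - 1) * (1 - item_var_sum / sample_var(totals))

# Three perfectly parallel items give the maximum alpha of 1.0; scales in
# this study were screened against the .80 benchmark.
print(cronbach_alpha([[5, 4, 3, 2], [5, 4, 3, 2], [5, 4, 3, 2]]))  # 1.0
```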

Confirmatory Factor Analysis

We used accepted indices to examine the relative fit of the data to the framework, including the Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), Root Mean Square Error of Approximation (RMSEA), and Standardized Root Mean Square Residual (SRMR). Models were considered to have an adequate to good fit with CFI and TLI equal to or above .95, RMSEA equal to or lower than .05, and SRMR equal to or lower than .05 (Brown, 2015; Hu & Bentler, 1999; Keith, 2015). We report 90% confidence intervals to make the estimated values relatively stable (Brown, 2015; Teigen & Jørgensen, 2005). Brown stated, “Additional support for the fit of the solution would be evidenced by a 90% confidence interval of the RMSEA whose upper limit is below these cutoff values (e.g., 0.08)” (p. 74).

We used the robust maximum likelihood estimation for CFA analyses to address the small observed violations of multivariate normality assumptions. Indicators of the hypothesized latent factor in the measurement model were checked for significant values.
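The cutoff logic above can be expressed as a small screening function. The sketch below is illustrative; the two calls use the global fit statistics reported later for the unidimensional and four-factor models:

```python
def adequate_global_fit(cfi, tli, rmsea, srmr):
    """Screen fit statistics against the cutoffs used in this study:
    CFI and TLI >= .95, RMSEA <= .05, SRMR <= .05 (Hu & Bentler, 1999)."""
    return cfi >= .95 and tli >= .95 and rmsea <= .05 and srmr <= .05

# Unidimensional model: fails every index.
print(adequate_global_fit(cfi=.704, tli=.671, rmsea=.127, srmr=.113))  # False
# Four-factor model: meets all four cutoffs.
print(adequate_global_fit(cfi=.962, tli=.956, rmsea=.044, srmr=.048))  # True
```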

Measurement Model

To address the first research question, we compared a unidimensional measurement model and the four-dimensional measurement model previously reported in the literature (e.g., Zelkowski et al., 2013). 

Unidimensional Model

The first CFA model was unidimensional, with all 21 indicators loading on one factor. The model indices were RMSEA = .127, 90% CI [.119, .136], CFI = .704, TLI = .671, and SRMR = .113 (see Table 3). The factor loadings ranged from .427 to .778; all standardized factor loadings were significant at p < .001. The results revealed that the unidimensional measurement model did not have an adequate global fit; all indices were outside the accepted range.

Four-Dimensional Model

Following the procedure suggested by Brown (2015), we started by testing each of the four dimensions (TK, CK, PK, and TPACK) separately. CK was a saturated (just-identified) model, since it had only three indicators. TK, with six indicators, had adequate global fit, with three out of four criteria meeting the fit threshold (RMSEA = .102, 90% CI [.064, .142], CFI = .956, TLI = .926, SRMR = .034). PK, with seven indicators, had good model fit indices (RMSEA < .01, 90% CI [.000, .061], CFI = 1.000, TLI = 1.000, SRMR = .008), and TPACK, with five indicators, had good model fit indices (RMSEA < .01, 90% CI [.000, .073], CFI = 1.000, TLI = 1.000, SRMR = .020).

Next, we examined the standardized factor loadings for each construct. The factor loading for TK was between .717 and .793, the factor loading for PK ranged from .645 to .828, and the factor loading for TPACK ranged from .558 to .825. All factor loadings were significant based on p < .001.

Finally, we tested the overall four-factor measurement model. The results revealed a good global fit: RMSEA = .044, 90% CI [.033, .055], CFI = .962, TLI = .956, and SRMR = .048. Indicator fit for the overall model was examined; standardized factor loadings ranged from .558 to .875. The factor loadings were between .717 and .793 for TK, .825 and .875 for CK, .645 and .828 for PK, and .558 and .825 for TPACK. All factor loadings were significant at p < .001 (see Figure 1).

Figure 1
Four-Dimensional TPACK Measurement Model

Note. TK = Technological Knowledge, CK = Content Knowledge, PK = Pedagogical Knowledge, and TPACK = Technological, Pedagogical, and Content Knowledge (see appendix for survey items). The straight arrows indicate factor loadings on each survey item, the curved arrows indicate covariances between knowledge domains, and the short arrows indicate the residuals.

The latent variable correlations ranged between .401 and .815. The highest correlation was observed between PK and TPACK (.815), and the lowest between TK and PK (.401). These correlations suggest that the latent variables were sufficiently distinct, with no multicollinearity concerns, to examine them as separate constructs (Henseler et al., 2015; Kline, 2016).

Model Comparison

The four-dimensional model had a significantly better fit than the unidimensional technology integration model (Brown, 2015; Keith, 2015; see Table 3). Comparing the two nested models yielded ∆χ2 = 597.777, ∆df = 6, and Cohen’s effect size w = 0.65, another indication that the four correlated factors had a better fit (Dziak et al., 2014).

Table 3
Confirmatory Factor Analysis for Overall Model Fits (n =239)

Model | χ2 | df | CFI | TLI | RMSEA | 90% CI | SRMR
Unidimensional | 866.880 | 189 | .704 | .671 | .127 | [.119, .136] | .113
Four-dimensional | 269.103 | 183 | .962 | .956 | .044 | [.033, .055] | .048
Note. CFI = Comparative Fit Index; RMSEA = Root Mean Square Error of Approximation; CI = Confidence Interval; TLI = Tucker-Lewis Index; SRMR = Standardized Root Mean Square Residual.
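The reported effect size for this nested-model comparison can be reproduced under the formulation w = sqrt(∆χ² / (n · ∆df)); whether Dziak et al. (2014) used exactly this form is an assumption here:

```python
import math

def nested_comparison_w(chisq_diff, df_diff, n):
    """Effect size for a chi-square difference test between nested models,
    assumed here to be w = sqrt(delta_chisq / (n * delta_df))."""
    return math.sqrt(chisq_diff / (n * df_diff))

# Values from the unidimensional vs. four-dimensional comparison.
w = nested_comparison_w(chisq_diff=597.777, df_diff=6, n=239)
print(round(w, 2))  # 0.65
```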

The analysis included testing the fit of each dimension separately, examining factor loadings and latent variable correlations, and comparing the unidimensional model to the four-dimensional model to establish the latter’s superior statistical fit. The analysis validated the four-dimensional TPACK model as a reliable and stable description of preservice teachers’ mathematics TPACK. It also allows for a more dynamic assessment of the relationships between the key constructs in SEM.

Structural Equation Model

We used SEM to answer the second research question and report the global fit statistics CFI, TLI, RMSEA, and SRMR. We tested the conceptual model based on extant literature (Figure 2). In the model, TK, PK, and CK were independent latent variables, and TPACK was the dependent latent variable.
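The measurement and structural parts of this model can be written in lavaan-style syntax, which SEM packages such as lavaan (R) or semopy (Python) accept; the indicator names below are illustrative placeholders following the item counts of each scale, not the study’s Mplus code:

```python
# Lavaan-style specification (illustrative; the study itself used Mplus 7.4).
MODEL_SPEC = """
# Measurement model: four latent factors
TK    =~ TK1 + TK2 + TK3 + TK4 + TK5 + TK6
CK    =~ CK1 + CK2 + CK3
PK    =~ PK1 + PK2 + PK3 + PK4 + PK5 + PK6 + PK7
TPACK =~ TP1 + TP2 + TP3 + TP4 + TP5

# Structural model: TK, PK, and CK predict TPACK
TPACK ~ TK + PK + CK
"""
print("TPACK ~ TK + PK + CK" in MODEL_SPEC)  # True
```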

Figure 2
TPACK Conceptual Model

Note. TK = Technological Knowledge, CK = Content Knowledge, PK = Pedagogical Knowledge, and TPACK = Technological, Pedagogical, and Content Knowledge (see appendix for survey items).

Estimating the model yielded good global fit: RMSEA = .048, 90% CI [.033, .055], CFI = .962, TLI = .956, and SRMR = .048. The factor loadings were between .721 and .792 for TK, .823 and .878 for CK, .642 and .815 for PK, and .560 and .801 for TPACK. All factor loadings were significant at p < .001 and within the acceptable range (see Figure 3).

Figure 3
Results of Hypothesized TPACK Predictive Model Based on Post Measures

Note. Standardized coefficients are reported for each path. Solid lines indicate significant paths (p < .001). The dashed line indicates a nonsignificant path. TK and PK are predictors of TPACK; CK is not. TK = Technological Knowledge, CK = Content Knowledge, PK = Pedagogical Knowledge, and TPACK = Technological, Pedagogical, and Content Knowledge (see appendix for survey items).

Standardized path coefficients are shown in full in Figure 3, and unstandardized coefficients are reported in Table 4. Results revealed that TK and PK contributed significantly to TPACK (β = .187, p < .01; β = .659, p < .001, respectively). However, CK did not significantly predict TPACK (β = .105, p > .05). The model explained 67.3% of the variance in TPACK.

Table 4
Unstandardized Parameter Estimates for the Tested Structural Model From Mplus Output (n = 239)

Latent Variable   Relationship (BY, WITH)     Estimate    SE
TK                Factor loading (BY) TK1     1           0
                                      TK4     1.005**     0.093
CK                Factor loading (BY) CK1     1           0
PK                Factor loading (BY) PK1     1           0
                                      PK7     1.015**     0.092
TPACK             Factor loading (BY) TP1     1           0
TPACK             Correlation (WITH) TK       0.124*      0.041
CK                Correlation (WITH) TK       0.144**     0.041
PK                Correlation (WITH) TK       0.114*      0.037

* p < .05. ** p < .001.

Note: The first item for each latent variable was scaled to 1. TK = Technological Knowledge (six items, TK1-TK6), CK = Content Knowledge (three items, CK1-CK3), PK = Pedagogical Knowledge (seven items, PK1-PK7), and TPACK = Technological, Pedagogical, and Content Knowledge (five items, TP1-TP5). (See appendix for survey items.)
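Table 4 uses Mplus's BY (measurement) and WITH (covariance) syntax. For readers unfamiliar with Mplus, a minimal sketch of the model specification implied by the item counts above (a reconstruction for illustration, not the authors' actual input file) would be:

```
MODEL:
  ! Measurement model; Mplus fixes the first loading of each factor to 1
  TK    BY TK1-TK6;
  CK    BY CK1-CK3;
  PK    BY PK1-PK7;
  TPACK BY TP1-TP5;
  ! Structural part: TPACK regressed on the three knowledge domains
  TPACK ON TK PK CK;
```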

The results indicate that the empirical data strongly supported the conceptual model of TPACK, suggesting that the latent variables TK, PK, and CK have substantial explanatory power for TPACK.

The TK and PK latent variables had a statistically significant impact on TPACK: changes in TK and PK corresponded with changes in TPACK. Conversely, CK did not have a statistically significant impact on TPACK, suggesting that changes in CK did not correspond with changes in TPACK scores.


Discussion

This study aimed to contribute to the limited body of quantitative evidence examining elementary preservice teachers' TPACK in mathematics instruction. We found that preservice elementary teachers reported lower TK than PK, CK, and TPACK. Our findings confirm the results of previous studies of elementary mathematics preservice and in-service teachers (e.g., Bulut & Işıksal, 2019; Hill & Uribe-Florez, 2020; Jang & Tsai, 2012; Marbán & Sintema, 2021). The lower TK may reflect that preservice and in-service elementary teachers had insufficient opportunities to develop technology integration in mathematics and to experiment with technology with their students. In addition, preservice teachers may have had less access to technology in teaching mathematics (Agyei & Voogt, 2015; Chai et al., 2010).

Preservice teachers rated their mathematics CK and PK the highest. This result aligns with Hill and Uribe-Florez (2020), who found a higher mean for PK, followed by CK, in their study of middle and high school mathematics teachers. These similar results may reflect the high expectations and effective professional development opportunities for enhancing the PK and CK of both preservice and in-service teachers in elementary and secondary mathematics teaching.

By analyzing the data collected through a validated TPACK questionnaire, we confirmed a four-factor model, shedding light on the complex nature of TPACK in the context of elementary mathematics. The findings provide empirical evidence about the relationships between the four TPACK factors: TK, PK, CK, and the integration of these domains into TPACK. These results partially align with the theoretical underpinnings of the TPACK framework (Koehler & Mishra, 2009), which posits that effective technology integration in teaching requires a synergistic combination of technological understanding, pedagogical strategies, and content expertise. However, when we examined the prediction model, content knowledge in mathematics was not a significant predictor of mathematics TPACK.

Our findings align with previous research on mathematics TPACK in preservice teachers, specifically the work of Zelkowski et al. (2013), who validated a similar factor structure for the TPACK framework. There was a notable difference between our study and the Zelkowski et al. findings for preservice secondary mathematics teachers: we observed a moderate correlation between PK and TK in preservice elementary teachers, whereas Zelkowski et al. (2013) did not find a significant correlation between these two factors. One explanation for this difference could be the participants' backgrounds (elementary vs. secondary preservice teachers) and levels of PK. Because our study focused on preservice elementary teachers, their PK may have been more generalized, encompassing a broader range of teaching practices across various subjects. This broader PK base may have facilitated a stronger connection with their TK, resulting in a moderate correlation.

Similarly, Kaplon-Schilis and Lyublinskaya (2020) investigated the relationships among five domains of the TPACK framework (TK, PK, CK in mathematics, CK in science, and TPACK) in a study of preservice special education teachers. Their findings revealed significant relationships among the five domains. The similarity between our results and theirs may be because their sample (unlike that of Zelkowski et al., 2013) was similar to ours: both elementary and special education preservice teachers have limited mathematics and science education opportunities.

One important implication of this study is the recognition that elementary generalist preservice teachers, who are responsible for teaching multiple subjects, including mathematics, may face unique challenges in developing their TPACK. The findings reinforce the notion that elementary preservice teachers require targeted support and professional development opportunities to enhance their instructional practices in mathematics. Recognizing and addressing the specific needs of elementary preservice teachers can contribute to improving mathematics education at the foundational level through the affordances of innovative technologies, ultimately benefiting students’ long-term mathematical achievement.

The correlated four-factor model provides a nuanced understanding of how TK, PK, CK, and TPACK intersect and influence each other in the context of elementary mathematics teaching. This understanding can guide the design of comprehensive teacher education programs that equip future teachers with the necessary skills and knowledge to integrate technology effectively in mathematics instruction. Additionally, professional development initiatives can be tailored to address the specific TPACK components that elementary preservice teachers find challenging, such as TK and pedagogical strategies in mathematics.

After establishing a measurement model with adequate fit for the four-factor TPACK, the SEM results revealed that TK and PK impacted TPACK, while knowledge of elementary mathematics content did not predict TPACK. TPACK was also more strongly influenced by PK than by TK.

This result indicates that elementary preservice teachers' perception of their CK was unrelated to their TPACK. Celik et al. (2014) studied the relationships among TPACK factors using SEM with a mix of elementary and secondary preservice teachers in Turkey. Unlike our results, theirs suggested that PK and CK significantly predicted TPACK, while TK did not. The difference may be related to the fact that Celik et al. did not address mathematics content knowledge specifically.

As noted earlier, Kaplon-Schilis and Lyublinskaya (2020) found significant relationships among the domains of the TPACK framework, as did the current study. However, their multiple linear regression showed that TK, PK, and CK did not predict TPACK. Their study confirms the lack of prediction from CK but conflicts with our finding that PK and TK were predictors of TPACK. This difference may be due to their sample of preservice teachers, who focused on serving students with disabilities and were less focused on general classroom pedagogy and content. Alternatively, it may be a result of their methodology; their data may have yielded a different result under an SEM approach.

In the TPACK framework, these four dimensions intersect and overlap, forming the basis for effective teaching with technology. Preservice teachers’ abilities to integrate these dimensions and navigate their intersections are crucial for successful technology integration in educational settings. The TPACK framework emphasizes the synergy between technological, pedagogical, and content knowledge to enhance teaching and learning experiences.

The findings of this study underscore the imperative for educator preparation programs to address the challenges posed by rapid technological advancement, with TK emerging as the area with the lowest mean. It is therefore essential for these programs to design professional development activities that equip preservice teachers with the skills to navigate and effectively integrate emerging technologies into their teaching practice. Simultaneously, there is a pressing need to prioritize pedagogical knowledge, particularly innovative approaches to teaching elementary mathematics. This involves fostering creativity and adaptability in instructional strategies, ensuring that preservice teachers are equipped to engage students effectively with mathematical concepts.

Furthermore, a holistic approach to teacher training should include opportunities for preservice teachers to develop their TPACK. Integrating emerging technologies into elementary mathematics lesson planning and classroom instruction not only enhances instructional delivery but also cultivates a deeper understanding of how technology can augment pedagogical practices in elementary mathematics. In essence, by addressing these key areas (TK, PK, and TPACK), educator preparation programs can better prepare preservice teachers to meet the demands of modern education and foster meaningful learning experiences for their future students.


This study validated TPACK measures specific to elementary mathematics in a preservice teacher education program. The results indicated a stable, theory-driven, four-factor solution. Further, the relationships between TPACK and the three main knowledge domains (TK, PK, and CK) were evaluated with SEM; the results revealed that TK and PK significantly impacted TPACK, while CK did not significantly predict TPACK. PK was the strongest predictor for these preservice teachers.

The results of this study imply that teacher education programs should connect elementary mathematics CK and TPACK by adopting concrete strategies that help preservice teachers see the links between elementary mathematics content and meaningful pedagogical and technological uses. For example, following Trainin et al. (2018), modeling appropriate technology tools in preservice elementary mathematics content coursework can help preservice teachers develop technology integration in elementary mathematics and experiment with those tools in their classes, field experiences, practica, and student teaching (Abebe et al., 2022; Agyei & Voogt, 2015; Chai et al., 2010).

The findings highlight the importance of policy decisions supporting technology integration into elementary mathematics classrooms. Policymakers should consider allocating resources and establishing guidelines that encourage the development of preservice teachers’ TPACK in elementary mathematics instruction. By creating an environment conducive to technology integration, policymakers can positively impact students’ engagement, motivation, and learning outcomes in mathematics.

This study adds to our understanding of TPACK as it relates to mathematics instruction, a relatively understudied domain that has generated somewhat conflicting results and therefore needs further investigation of specific subgroups (e.g., elementary vs. secondary). The study strengthens the TPACK literature, specifically the limited literature on elementary mathematics technology integration. Because it focused on quantitative results, it provides large-scale insight into preservice elementary teachers' mathematics TPACK that will be useful for teacher educators and researchers. It also helps teacher education programs look at the bigger picture in teaching and measuring TPACK during preservice teacher education.

While this study contributes valuable insights into elementary preservice teachers’ TPACK in mathematics instruction, it is essential to acknowledge its limitations. The research was conducted within a specific context, and the generalizability of the findings to other educational settings and subject areas should be approached with caution. Future research should continue to explore TPACK in different teacher education contexts with different participants to gain a more comprehensive understanding of effective technology integration across diverse teaching domains. In addition, the rapid rate of technological change limits the generalizability of results across time.

Implications for Educator Preparation Programs

Teacher preparation programs should measure TPACK continuously to track growth in preservice teachers and changes in program effectiveness. Improving preservice teachers' abilities to use technology for effective mathematics teaching is critical because of the challenges elementary preservice teachers face, such as fast-changing technological advancement and changes in students' skills with new technologies. Educator preparation programs should recognize that TK and PK are critical for the effective use of TPACK. Fostering TK and using a strong technology integration model will lead to more effective teaching and better student learning outcomes in mathematics.

Future Research and Recommendations

Future studies should explore interventions that teach preservice teachers to integrate specific tools into elementary mathematics content (e.g., geometry), aligned with pedagogical approaches, to see to what extent preservice teachers improve their TPACK and their motivation to use technology in teaching elementary mathematics. Researchers should also replicate this study with in-service elementary school teachers, which could lead to improvements in professional development for elementary mathematics TPACK.

Given the evolving nature of technology and education, ongoing preservice teacher TPACK research should aim to further explore and reimagine the integration of emerging technologies (such as artificial intelligence, augmented reality/virtual reality, and adaptive learning systems) in elementary mathematics content and their implications for TPACK (Mishra et al., 2023). Future research should examine the role of contextual factors (such as school culture, resources, and support systems) in influencing preservice teachers’ elementary mathematics TPACK development and implementation.


References

Abebe, F. F., Gaskill, M., Hansen, T., & Liu, X. (2022). Investigating K-12 pre-service teacher TPACK in instructional technology learning. International Journal of Teacher Education and Professional Development, 5(1), 1-16. https://doi.org/10.4018/IJTEPD.2022010104

Agyei, D. D., & Voogt, J. M. (2015). Preservice teachers’ TPACK competencies for spreadsheet integration: insights from a mathematics-specific instructional technology course. Technology, Pedagogy and Education, 24(5), 605–625. https://doi.org/10.1080/1475939X.2015.1096822

Archambault, L. M., & Barnett, J. H. (2010). Revisiting technological pedagogical content knowledge: Exploring the TPACK framework. Computers & Education, 55(4), 1656–1662. https://doi.org/10.1016/j.compedu.2010.07.009

Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107, 238–246.

Borthwick, A. C., & Hansen, R. (2017). Digital literacy in teacher education: Are teacher educators competent? Journal of Digital Learning in Teacher Education, 33(2), 46-48. https://doi.org/10.1080/21532974.2017.1291249

Brown, T. A. (2015). Confirmatory factor analysis for applied research. Guilford Publications.

Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 136–162). Sage. https://doi.org/10.1177/0049124192021002

Bulut, A., & Işıksal, M. (2019). Perceptions of preservice elementary mathematics teachers on their technological pedagogical content knowledge (TPACK) regarding geometry. Journal of Computers in Mathematics and Science Teaching, 38(2), 153-176. https://www.learntechlib.org/primary/p/173761/

Celik, I., Sahin, I., & Akturk, A. O. (2014). Analysis of the relations among the components of technological pedagogical and content knowledge (TPACK): A structural equation model. Journal of Educational Computing Research, 51(1), 1-22. https://doi.org/10.2190/EC.51.1.a

Chai, C. S., Koh, J. H. L., & Tsai, C. C. (2010). Facilitating preservice teachers’ development of technological, pedagogical, and content knowledge (TPACK). Journal of Educational Technology & Society, 13(4), 63-73. https://www.jstor.org/stable/10.2307/jeductechsoci.13.4.63

Chirichella, C. (2022, May 10). AACTE Awards to honor educator preparation programs, scholars. American Association of Colleges for Teacher Education. https://aacte.org/2016/02/aacte-awards-to-honor-educator-

Clark, L. A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7(3), 309–319. https://doi.org/10.1037/14805-012

Clausen, J. M. (2020). Leadership for technology infusion: Guiding change and sustaining progress in teacher preparation. In A. Borthwick, T. Foulger, & K. Graziano (Eds.), Championing technology infusion in teacher preparation: A framework for supporting future educators (pp. 171-189). International Society for Technology in Education.

Dziak, J. J., Lanza, S. T., & Tan, X. (2014). Effect size, statistical power, and sample size requirements for the bootstrap likelihood ratio test in latent class analysis. Structural Equation Modeling, 21(4), 534-552.

Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115-135. https://doi.org/10.1007/s11747-014-0403-8

Hill, J. E., & Uribe-Florez, L. (2020). Understanding secondary school teachers’ TPACK and technology implementation in mathematics classrooms. International Journal of Technology in Education, 3(1), 1-13.

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55. https://doi.org/10.1080/10705519909540118

International Society for Technology in Education. (2023). EPPS for digital equity and transformation. https://iste.org/pledge-for-digital-equity-transformation

Jang, S. J., & Tsai, M.-F. (2012). Exploring the TPACK of Taiwanese elementary mathematics and science teachers with respect to use of interactive whiteboards. Computers & Education, 59(2), 327–338. https://doi.org/10.1016/j.compedu.2012.02.003

Kaplon-Schilis, A., & Lyublinskaya, I. (2020). Analysis of relationship between five domains of TPACK framework: TK, PK, CK Math, CK science, and TPACK of preservice special education teachers. Technology, Knowledge and Learning, 25(1), 25-43.

Keith, T. (2015). Multiple regression and beyond: An introduction to multiple regression and structural equation modeling. Routledge.

Kline, R. B. (2016). Principles and practice of structural equation modeling. The Guilford Press.

Koehler, M. J., & Mishra, P. (2009). What is technological pedagogical content knowledge? Contemporary Issues in Technology and Teacher Education, 9(1), 60-70. https://citejournal.org/volume-9/issue-1-09/general/what-is-technological-pedagogicalcontent-knowledge

Koehler, M. J., Shin, T. S., & Mishra, P. (2012). How do we measure TPACK? Let me count the ways. In R. N. Ronau, C. R. Rakes, & M. L. Niess (Eds.), Educational technology, teacher knowledge, and classroom impact: A research handbook on frameworks and approaches (pp. 16-31). IGI Global. https://doi.org/10.4018/978-1-60960-750-0.ch002

Lee, H., & Hollebrands, K. (2008). Preparing to teach mathematics with technology: An integrated approach to developing technological pedagogical content knowledge. Contemporary Issues in Technology and Teacher Education, 8(4), 326-341. https://citejournal.org/volume-8/issue-4-08/mathematics/preparing-to-teach-mathematics-with-technology-an-integrated-approach-to-developing-technological-pedagogical-content-knowledge

Marbán, J. M., & Sintema, E. J. (2021). Preservice teachers’ TPACK and attitudes toward integration of ICT in mathematics teaching. International Journal for Technology in Mathematics Education, 28(1), 37-46. doi: 10.1564/tme_v28.4.03

Malik, S., Rohendi, D., & Widiaty, I. (2019, February). Technological pedagogical content knowledge (TPACK) with information and communication technology (ICT) integration: A literature review. In S. Malik, D. Rohendi, & I. Widiaty (Eds.), Proceedings of the 5th UPI International Conference on Technical and Vocational Education and Training (ICTVET 2018) (pp. 498-503). Atlantis Press. doi: 10.2991/ictvet-18.2019.114

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054. https://doi.org/10.1111/j.1467-9620.2006.00684.x

Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and Generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235-251. https://doi.org/10.1080/21532974.2023.2247480

Musgrove, A., Powers, J., Nichols, B. H., & Lapp, S. (2021). Exploring the role of elementary teachers’ TPACK in the adoption of 1:1 computing across subject areas. International Journal of Technology in Teaching and Learning, 17(1), 1-17.

Muthén, L. K., & Muthén, B. O. (2007). Mplus user’s guide (6th ed.). Muthén & Muthén.

National Center for Education Statistics. (2023). Characteristics of public school teachers. https://nces.ed.gov/programs/coe/indicator/clr/public-school-teachers

Niess, M.L., van Zee, E.H., & Gillow-Wiles, H. (2010). Knowledge growth in teaching mathematics/science with spreadsheets. Journal of Digital Learning in Teacher Education, 27(2), 42-52. doi: 10.1080/21532974.2010.10784657

Polly, D. (2014). Elementary school teachers’ use of technology during mathematics teaching. Computers in the Schools, 31(4), 271-292. doi: 10.1080/07380569.2014.969079

Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technological pedagogical content knowledge (TPACK): The development and validation of an assessment instrument for preservice teachers. Journal of Research on Technology in Education, 42(2), 123-149. https://doi.org/10.1080/15391523.2009.10782544

Scherer, R., Tondeur, J., & Siddiq, F. (2017). On the quest for validity: Testing the factor structure and measurement invariance of the technology-dimensions in the technological, pedagogical, and content knowledge (TPACK) model. Computers & Education, 112, 1-17. doi:10.1016/j.compedu.2017.04.012

Scherer, R., Tondeur, J., Siddiq, F., & Baran, E. (2018). The importance of attitudes toward technology for preservice teachers’ technological, pedagogical, and content knowledge: Comparing structural equation modeling approaches. Computers in Human Behavior, 80, 67–80. https://doi.org/10.1016/j.chb.2017.11.003

Smith, R. C., Kim, S., & McIntyre, L. (2016). Relationships between prospective middle grades mathematics teachers’ beliefs and TPACK. Canadian Journal of Science, Mathematics and Technology Education, 16(4), 359-373.

Tabachnick, B. G., & Fidell, L. S. (2019). Using multivariate statistics (7th ed.). Pearson.

Teigen, K. H., & Jørgensen, M. (2005). When 90% confidence intervals are 50% certain: On the credibility of credible intervals. Applied Cognitive Psychology, 19(4), 455-475. https://doi.org/10.1002/acp.1085

Thomas, A., Peterson, D., & Abebe, F. (2019). Adopting TETCs in integrated elementary mathematics and technology coursework: A collaborative self-study of two teacher educators. Journal of Technology and Teacher Education, 27(4), 499-525. https://www.learntechlib.org/primary/p/208222/

Thomas, A., & Trainin, G. (2019). Creating laboratories of practice for developing preservice elementary teachers’ TPACK: A programmatic approach. In M. L. Niess, H. Gillow-Wiles, & C. Angeli (Eds.), Handbook of research on TPACK in the digital age (pp. 155-172). IGI Global. https://doi.org/10.4018/978-1-5225-7001-1

Trainin, G., Friedrich, L., & Deng, Q. (2018). The impact of a teacher education program re-design on technology integration in elementary preservice teachers: A five year multi-cohort study. Contemporary Issues in Technology and Teacher Education, 18(4), 692-721. https://citejournal.org/volume-18/issue-4-18/general/the-impact-of-a-teacher-education-program-redesign-on-technology-integration-in-elementary-preservice-teachers

Urbina, A., & Polly, D. (2017). Examining elementary school teachers’ integration of technology and enactment of TPACK in mathematics. The International Journal of Information and Learning Technology, 34(5), 439-451. https://doi.org/10.1108/IJILT-06-2017-0054

Voithofer, R., & Nelson, M. J. (2021). Teacher educator technology integration preparation practices around TPACK in the United States. Journal of Teacher Education, 72(3), 314-328. https://doi.org/10.1177/0022487120949842

Voogt, J., Fisser, P., Pareja Roblin, N., Tondeur, J., & van Braak, J. (2013). Technological pedagogical content knowledge–a review of the literature. Journal of Computer Assisted Learning, 29(2), 109-121. https://doi.org/10.1111/j.1365-2729.2012.00487.x

Wang, W., Schmidt-Crawford, D., & Jin, Y. (2018). Preservice teachers’ TPACK development: A review of literature. Journal of Digital Learning in Teacher Education, 34(4), 234-258. https://doi.org/10.1080/21532974.2018.1498039

Zelkowski, J., Gleason, J., Cox, D. C., & Bismarck, S. (2013). Developing and validating a reliable TPACK instrument for secondary mathematics preservice teachers. Journal of Research on Technology in Education, 46(2), 173–206. https://doi.org/10.1080/15391523.2013.10782618

Zou, D., Huang, X., Kohnke, L., Chen, X., Cheng, G., & Xie, H. (2022). A bibliometric analysis of the trends and research topics of empirical research on TPACK. Education and Information Technologies, 27(8), 10585-10609.

Appendix

Survey Instrument

All survey items unrelated to mathematics were removed from the original Schmidt et al. (2009) survey. The survey used a 5-point Likert scale: 1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree, and 5 = Strongly agree.

Technological Knowledge (TK)

  1. TK1: I know how to solve my technical problems.
  2. TK2: I can learn technology easily.
  3. TK3: I keep up with important new technologies.
  4. TK4: I frequently play around with technology.
  5. TK5: I know about a lot of different technologies.
  6. TK6: I have the technical skills I need to use technology.

Content Knowledge (CK)

  1. CK1: I have sufficient knowledge of mathematics.
  2. CK2: I can use a mathematical way of thinking.
  3. CK3: I have various ways and strategies for developing my understanding of mathematics.

Pedagogical Knowledge (PK)

  1. PK1: I know how to assess student performance in a classroom.
  2. PK2: I can adapt my teaching based on what students currently understand or do not understand.
  3. PK3: I can adapt my teaching style to different learners.
  4. PK4: I can assess student learning in multiple ways.
  5. PK5: I can use a wide range of teaching approaches in a classroom setting.
  6. PK6: I am familiar with common student understandings and misconceptions.
  7. PK7: I know how to organize and maintain classroom management.

Technological Pedagogical Content Knowledge (TPACK)

  1. TP1: I can select technologies to use in my classroom that enhance what I teach, how I teach, and what students learn.
  2. TP2: I can use strategies that combine content, technologies, and teaching approaches that I learned about in my coursework in my classroom.
  3. TP3: I can provide leadership in helping others coordinate the use of content, technologies, and teaching approaches at my school and/or district.
  4. TP4: I can choose technologies that enhance the content of a lesson.
  5. TP5: I can teach lessons that appropriately combine mathematics, technologies, and teaching approaches.