Anderson, E., & Calandra, B. (2026). Mixed-reality simulations in teacher education: A scoping review. Contemporary Issues in Technology and Teacher Education, 26(2). https://citejournal.org//proofing/mixed-reality-simulations-in-teacher-education-a-scoping-review

Mixed-Reality Simulations in Teacher Education: A Scoping Review

by Erin Anderson, Georgia State University; & Brendan Calandra, Georgia State University

Abstract

TeachLivE/Mursion mixed-reality simulations (MRS) are the most popular simulation platforms currently used in teacher preparation programs, and the field is rapidly moving toward automating these simulations using AI. However, this rapid series of developments leaves many questions regarding privacy and equity unanswered. The current paper presents a scoping literature review analyzing how TeachLivE and Mursion MRS teacher training designers address equity and criticality in these simulations, highlighting critical design principles to inform future MRS implementations.

Prior research indicates that technologies carry inherent biases and assumptions that shape the environments in which they are introduced (Heath & Moore, 2024). This includes educational technology, a field that has historically overlooked the systemic power and bias that can be embedded in technologies (Heath et al., 2024). TeachLivE and Mursion mixed-reality simulations (MRS) are two of the most prominent simulation educational technologies used in university teacher preparation programs (Ireland, 2021). However, educators currently do not know how designers of TeachLivE and Mursion are situating the design and implementation of MRS learning experiences amidst power, equity, and sociocultural forces.

Developing an MRS learning experience goes beyond creating the virtual scenario in which participants develop a targeted pedagogy. It involves selecting pre/post measures to evaluate pedagogical growth, employing instructional design principles (e.g., backward design, universal design for learning, etc.), training the human interactor who performs the various child avatars, and even developing reflective tools to help contextualize participant performance (Anderson & Calandra, 2025). There are many points in the development process where implicit bias or unchecked assumptions can become embedded in the design. Therefore, this article presents a scoping review utilizing a systematic search (Arksey & O'Malley, 2005) that examines how TeachLivE/Mursion learning experience designers attend to issues of equity and criticality to situate these learning experiences within larger intersectional forces.

Findings show that while the use of MRS for teacher training is pedagogically beneficial, the field should move beyond mere MRS use to critically design MRS learning experiences that broaden access, challenge inequities, and address unintended harms. This review adds two design principles to the literature: broadening diversity and cultivating criticality in both designers and end users.

Background

Mursion and TeachLivE mixed-reality simulations (MRS) are the most popular simulation platforms in university teacher preparation programs (Ireland, 2021). During an MRS simulation, preservice teachers practice teaching virtual avatar students through the Zoom videoconferencing platform or while standing in front of a large computer display. A single hidden human interactor animates and voices avatars representing various genders, ages, ethnicities, and neurotypes. Preservice teachers and teacher trainers can pause the simulation at any time to ask questions and provide immediate feedback, thereby facilitating the rapid mastery of pedagogical skills. Ten minutes of practice with MRS has been estimated to be equivalent to 45-60 minutes of real-world practice (Dieker, Rodriguez et al.). Research has shown that preservice teachers prefer practicing with MRS over role-play (Dittrich et al., 2022; McKown et al., 2022) or using actors on Zoom (Randolph et al., 2024).

Researchers at the University of Central Florida developed TeachLivE to enable preservice and in-service teachers to practice pedagogy virtually (Dieker et al., 2008). This development occurred during a time of increasing political pressure for stronger accountability in teacher preparation programs (Dieker et al., 2023). Thus, the TeachLivE technology, developed with a “social justice lens,” was designed to help PSTs enhance their skills before school placements (Dieker et al., 2023, p. 2). The technology was later commercialized as Mursion to provide training for various industries.

When the COVID-19 pandemic struck and schools closed, forcing teacher preparation programs to scramble to adapt teacher practicums, the American Association of Colleges for Teacher Education hosted weekly webinars on mixed-reality simulations (Bondie et al., 2021). Since then, these MRS trainings have expanded globally and are now used to train teachers in Pakistan (Cao, 2022), Taiwan (Liaw & Wu, 2021), and South Africa (Gravett et al., 2023).

Rationale

Two significant barriers hinder the widespread adoption of TeachLivE/Mursion MRS technology: (a) the scheduling challenges and cost of working with a human interactor, and (b) the time-consuming process of providing feedback on teacher performance. Enter generative artificial intelligence (GenAI). With GenAI, researchers are moving closer to overcoming these obstacles. Regarding the first barrier, the cost of the interactor, researchers are already training teachers using LLM-infused virtual avatars (Hwang et al., 2024). For teacher feedback, researchers with Educational Testing Service (ETS), which develops the Praxis teacher certification exam, have created MRS+LLMs to assess teacher talk (Ilagan et al., 2024). The researchers built their work upon earlier ETS research that established the National Observational Teaching Examination (NOTE) assessment, the first significant effort to employ Mursion MRS for high-stakes scoring purposes (Mikeska et al., 2019). In other words, ETS researchers are gradually moving toward using MRS + AI to evaluate teachers.

The creators of these MRS trainings acknowledged that the future of MRS relies on GenAI (Dieker et al., 2023). While the current state of TeachLivE/Mursion MRS utilizes Artificial Narrow Intelligence (ANI) to automate avatar response features (e.g., talking during think-pair-share or canned laughter), the creators envision great potential in leveraging sensor data (e.g., visual, auditory, and physiological) with GenAI to deliver more comprehensive feedback on teacher performance. They have even considered the possibility of examining teachers’ emotional states with MRI and GenAI. However, the creators also underscored the necessity of placing the human element at the center of these advancements, arguing that teachers’ perspectives should steer this evolution to ensure the technology is trustworthy and to reduce bias.

The propensity for these simulations to perpetuate bias is what Baker and Hawn (2022) called a known bias, as it has been mentioned in the MRS literature (Bondie et al., 2021; McGarr, 2021), warned about by the creators (Dieker et al., 2023), and established by research (Bondurant & Reinholz, 2023). Although Bondurant and Reinholz did not specifically identify the MRS simulation platform featured in their study, they found that the MRS perpetuated racist and sexist stereotypes while preservice teachers provided reduced-quality instruction to student avatars with negative stereotypes. The researchers noted the possibility of causing actual harm to preservice teachers who must interact with caricatures of their identity, writing, “Imagine being forced to participate in a virtual learning environment that portrays students whose racial/gender identities mirror yours as rude, abrasive, disruptive, and sexist” (p. 3). Once researchers start integrating GenAI into the MRS learning experience, these problems are compounded (Dieker et al., 2023).

Without proper guardrails, TeachLivE/Mursion researchers risk inadvertently replicating many of the inequitable harms associated with artificial intelligence. Baker and Hawn (2022) reviewed literature on algorithmic bias in educational contexts where model performance favored one group over others and found numerous examples, ranging from at-risk predictions for students dropping out of their studies (Anderson et al., 2019) and classes (Hu & Rangwala, 2020; Lee & Kizilcec, 2020) to essay evaluations (Bridgeman et al., 2009, 2012), language proficiency (Wang et al., 2018), and detection of student emotions (Ocumpaugh et al., 2014).

Baker and Hawn (2022) described the research community’s current understanding of all the harms caused by algorithmic bias as “sparse” and “insufficient,” emphasizing the need for further research to explore the unknown impacts of algorithmic bias on education (p. 32). The authors also noted that problems arise beyond the narrow focus on algorithmic models, as bias can be introduced before the design process even starts, affecting task definition and data collection, and extending through to task implementation and the ongoing effects on users.

For MRS learning experiences, this indicates that bias can emerge as soon as designers initiate the design process because, as Macgilchrist et al. (2024) explained, designing technology is not just about problem-solving but also about creating spaces for political and sociotechnical relations that can redefine the future. Thus, this review examines how these political and sociotechnical futures are addressed by viewing MRS design through the lens of equity and criticality. Doing so could help the field better grapple with who exactly is designing these costly MRS learning experiences and who is privileged to experience them.

The goal is to illuminate the assumptions driving the work, the teaching practices being developed, and the ways designers attend to issues of power and accountability, including the ways teachers’ voices shape the work. Therefore, the following research question was posed: How do TeachLivE and Mursion MRS learning experience designers address issues of equity and criticality that might influence these learning experiences?

Key Concepts

Equity

Equity in education can have a variety of definitions, ranging from equal distribution of outcomes across different groups of people to equal levels of growth among students (Levinson et al., 2022). For this study, equity was operationalized using the framework proposed by Lee et al. (2023), who identified six distinct domains in which equity intersects with MRS: access to MRS (i.e., cost, program variability, and disenfranchisement); the design of the MRS (i.e., the authenticity of representations, scenarios, students, cultural contextualization, and relevance); the affective aspects of MRS learning experiences (i.e., biases, beliefs, identity, and positionality); the targeted teaching practices (i.e., high expectations for all learners, learners as experts, culturally responsive and culturally sustaining curricula, early conceptions/prior knowledge, and equitable participation); the types of assessments used (i.e., equitable assessments, asset-based feedback, blinded grading, formative/summative assessments, and justifying assessments); and the nature of critical conversations enacted (i.e., stakeholder meetings, critical race theory, socio-emotional learning, book banning, bullying, othering, tracking, and social justice).

Criticality

Criticality was operationalized using the common design mistakes of the product/human-centered design framework, which include status quo design, misrepresentation of users, uneven power dynamics, technosolutionism, and accountability deficits (Intentional Futures, 2018). Status quo design mistakes occur when designers build for the majority, take a one-size-fits-all approach, or design for the average user, often ignoring qualitative data, resulting in a product that best suits “white, cisgender, able-bodied, neurotypical, heterosexual, med-high income students” (p. 11). This also includes situations where designers fail to acknowledge the need for differentiation in the learning experience.

Misrepresenting users occurs when homogenous design teams with unchecked personal biases and power differentials inadvertently envision incorrect or stereotypical end users (Intentional Futures, 2018). Design teams can avoid this mistake by including positionality statements and identifying the diversity of the design team. The mistake of not acknowledging uneven power dynamics arises when designers fail to account for the power they hold in the design process.

Even when minoritized students participate in the process, the result can be an extractive rather than a mutually beneficial experience (Intentional Futures, 2018). The mistake of technosolutionism occurs when designers treat technology as a neutral panacea for complex social problems, ignoring technology’s tendency to codify inequity within digital code. Additionally, while coding for technosolutionism, we also extracted data on instances in which designers recognized problems with the technology. Finally, accountability deficits occur when designers fail to recognize a developed solution’s intentional and unintentional outcomes, neglecting to address how a solution could perpetuate oppressive forces.

Methods

To answer the research question, we conducted a scoping review using a systematic search (Arksey & O’Malley, 2005), as further refined by Levac et al. (2010). Scoping reviews, as opposed to literature reviews, are more appropriate when addressing broad topics where many study designs are applicable, and there is no emphasis on evaluating the quality of included studies (Arksey & O’Malley, 2005).

Search Strategies

The review process consisted of five key stages (Arksey & O’Malley, 2005; Levac et al., 2010):

  1. Identifying the research question.
  2. Identifying relevant studies.
  3. Study selection.
  4. Charting the data.
  5. Collating, summarizing, and reporting the results.

A comprehensive and systematic search was conducted across the ProQuest, EBSCO, and Google Scholar databases on March 26, 2024. The search strategy consisted of two terms: “TeachLivE” and “Mursion.”

While Bondie et al. (2021) acknowledged that searching on the specific terms “TeachLivE” and “Mursion” returned only a small number of articles meeting their criteria, the present search turned up a large number, a testament to the expansion of MRS training since the pandemic. Following Haddaway et al. (2015), only the first 200 search results from Google Scholar were included for review. The search yielded 887 articles from ProQuest, 741 from EBSCO, and 94 from Google Scholar, totaling 1,722 articles. The search results were then exported to Rayyan, an AI-powered systematic review management application (Ouzzani et al., 2016), to manage citations and screen studies.

The study selection process consisted of four stages. First, Rayyan scanned the uploaded search results to detect duplicates at 90% similarity, and the first author manually resolved the remaining duplicates, leaving 1,084 articles for screening. Next, the first author screened the titles and abstracts of the identified studies, consulting a second reviewer to resolve discrepancies in applying the inclusion/exclusion criteria. Then, the first author read the full text of potentially relevant studies to screen them further against the inclusion/exclusion criteria (Table 1). Finally, a citation search of the included studies led to the addition of eight articles, resulting in 82 articles for inclusion.

Table 1
Inclusion and Exclusion Criteria

Inclusion Criteria
- Population: TeachLivE/Mursion research involving preservice or in-service teachers
- Study written in English
- Published in a peer-reviewed journal

Exclusion Criteria
- Not published in a peer-reviewed journal
- Dissertations, conference proceedings, books and book chapters, and grey literature (e.g., university promotional materials and PRNewswire)
- TeachLivE/Mursion never mentioned
- TeachLivE/Mursion mentioned only in citations
- TeachLivE/Mursion mentioned only once, without an elaborated description of the technology being used with teachers. (For example, McGarr (2021) was included: although TeachLivE is mentioned only once, the article devotes many paragraphs to describing the use of the technology. Turner et al. (2023) was excluded because it mentions Mursion only while listing various technologies employed during emergency remote teaching, e.g., Padlet, Instagram, Mursion.)
- MRS used to train populations in education other than teachers (e.g., principals, speech-language pathologists, children)

Data Extraction Method

Data from the included studies were extracted into a Google Sheet that operationalized equity and criticality (see Table 1 for inclusion/exclusion criteria). The extracted data were thematically coded using Braun and Clarke’s (2006) method for thematic analysis, which involved (a) familiarizing oneself with the data, (b) generating initial codes by systematically working through the extracted data, (c) reorganizing the codes into potential themes, (d) reviewing and refining the codes, and (e) producing a coherent narrative that integrates the extracted themes. The first author served as the primary coder, while the second author provided peer feedback on identified and revised codes and themes, offering alternative interpretations throughout the analytic process. Three rounds of coding were completed before saturation was reached. Simple descriptive statistics were used to compile specific codes.

Results

This scoping review does not reflect the full extent of MRS training in teacher preparation programs. In fact, in 2021, one MRS platform provider acknowledged that its platform was being used to train over 20,000 teachers and was projected to train over 40,000 by 2022 (Bondurant & Reinholz, 2023). Furthermore, broadening this review to include practitioners’ contributions brought into the analysis many valuable voices that had yet to publish formal research studies using these MRS platforms (e.g., Budin, 2024; Eriksson, 2022; Hartle & Kaczorowski, 2019). Finally, expanding the review to include literature reviews provided a richer understanding of the thinking surrounding design.

The articles included in this scoping review are organized by article type. Forty-four articles detailed research studies on developing high-leverage or evidence-based core practices in preservice teachers (i.e., 33 studies) and in-service teachers (i.e., eight studies). Nine articles documented the development of teacherly dispositions, such as the affective, physiological, or reflective components of teaching in preservice teachers (i.e., seven) and in-service teachers (i.e., two). Eight articles researched teachers’ perceptions of MRS, including three that compared teachers’ perceptions of MRS to simSchool (Dove et al., 2023) and peer rehearsal (Lee et al., 2021; Lee et al., 2024). Two articles documented the development of tools using MRS (i.e., an automatic posture feedback application and a teacher feedback application). Eleven articles were practitioner pieces or generic overviews of using MRS learning experiences in teacher preparation programs. Four articles were conceptual analyses. Seven were literature reviews focused on different aspects of MRS in teacher preparation. Additionally, a citation search of the included literature reviews was conducted and compared to the inclusion criteria, revealing a few articles that likely used TeachLivE/Mursion MRS but did not specifically identify TeachLivE or Mursion and, therefore, were not included.

The creators of TeachLivE noted that the technology underwent almost a decade of vetting before “widespread use, adoption, and research began to emerge” (Dieker et al., 2023). This is reflected in the included articles: 28 were published before 2020 and 54 from 2020 onward. Thirty-seven articles focused on TeachLivE, while 38 focused on Mursion. The remaining pieces were literature reviews covering both TeachLivE and Mursion. (See the appendix for a Summary of Included Studies.)

Findings

Equity Considerations

One way to examine how authors address equity is to explicitly search for the word “equity” or a derivative thereof (e.g., “inequities”) within the included literature. Nine articles explicitly contained “equity” or one of its derivatives. The authors discussed equity in relation to systemic educational inequities for culturally and linguistically diverse students (Eriksson, 2022; Finn et al., 2020; Wernick et al., 2021), participants’ interactions (Berg et al., 2023; Horn et al., 2023; Peterson-Ahmad et al., 2023), and the inequities within technology (Bondie et al., 2021; Dieker et al., 2023; Ledger & Fischetti, 2020). However, applying Lee et al.’s (2023) domains of MRS alignment with equity (i.e., access, design, affective domains, teaching practices, assessment, and critical conversations) provided a more comprehensive understanding of how designers and thinkers attend to equity.

Access. Authors thematically addressed the intersection of equity and MRS access by justifying the use of MRS to mitigate teacher attrition, provide teachers with equal opportunities in technology-rich environments, and tackle disparities in universities’ ability to afford MRS. Many scholars justified the use of TeachLivE or Mursion MRS by linking these learning experiences to reduced teacher attrition (e.g., Aguilar & Kang, 2023; Budin, 2024; Cohen et al., 2020; Dittrich et al., 2022; Henry et al., 2022; Rosati-Peterson et al., 2021). They connected new teachers’ abandonment of education to their unpreparedness to manage classrooms, among other pedagogical skills, suggesting that practice in MRS might better prepare them.

Another way access and equity intersected was in the justification for using MRS to prepare everyone within teacher preparation programs to be digitally literate. Landon-Hays et al. (2020) contrasted the number of teacher educators who are digital nomads with the number of their preservice teachers who are digital natives, suggesting that MRS could help bridge this divide. Last, several authors addressed the equity implications of accessing costly MRS technology, noting that MRS might only be available to resource-rich institutions or those with small numbers of preservice teachers (e.g., Dove et al., 2023; Eriksson, 2022; Horn et al., 2023; Kelley & Wenzel, 2019; Peterson-Ahmad et al., 2023; Qualls et al., 2024; Vasquez et al., 2017). In their literature review of MRS in special education, Qualls et al. highlighted that the majority of the studies took place in the southeastern United States with predominantly White/Caucasian participants, noting that the Branch Alliance for Educator Diversity is working to bring MRS to universities serving culturally diverse students.

Design. Many authors have noted how MRS can prepare teachers to provide high-quality instruction to diverse students. Wernick et al. (2021) said,

Disparities faced by CLDS [culturally and linguistically diverse students] could be assuaged by EPP [education preparation programs] through enhancing teacher readiness to advocate for and meet the needs of diverse student populations. Being culturally sensitive and learning how to leverage CLDS strengths would empower teachers to provide a more just and equitable education. (p. 230)

Finn et al. (2020) found that participating in MRS made teachers more aware of diverse students’ cultural nuances and encouraged them to move beyond focusing on pedagogy to celebrating students’ differences. Ely et al. (2018) called for researchers to do even more work using MRS to prepare teachers for culturally and linguistically diverse students.

Affective Domains. Equity also intersects with MRS in the literature’s attention to bias in MRS design and the potential perpetuation of stereotypes (e.g., Berg et al., 2023; Bondie et al., 2021; Dieker et al., 2023; Dittrich et al., 2022; Eriksson, 2022; Lindberg & Jönsson, 2023; McGarr, 2021). As noted previously, the creators of TeachLivE are well aware of the potential bias embedded in MRS and have discussed the need for critical reflection since the technology’s early days. Participants are also conscious of this; during the reflection process, one MRS participant referred to the avatars as “five extreme stereotypes” (Dittrich et al., 2022).

Bondie et al. (2021) cautioned against stereotype threat and urged increased design transparency. The researchers developed five design principles to help mitigate the risk of bias and enhance design transparency (i.e., describing the learning design, documenting the avatar/interactor learning, documenting the interactor training, time and distance, and situating the design practice amid sociocultural complexities). Eriksson (2022) discussed two additional steps to reduce biases (i.e., having designed scenarios reviewed by individuals from the cultures represented in the scenario and conducting a content analysis of participants’ discussion board posts and reflections to check for bias or stereotypes). McGarr (2021), whose conceptual piece serves as a sobering warning for future MRS designers, delineated numerous potential problems with teaching classroom management using MRS, whether or not critical reflection is involved. McGarr noted the problematic nature of creating avatars’ behavior from a typology of behaviors, which is an abstraction by its very nature.

Teaching Practices. A fourth area where equity intersects with MRS is found in a common justification for MRS that allows teachers to engage in practices they would not otherwise experience. Many authors referenced the issue of teachers having inequitable access to high-quality practicums as a justification for MRS usage (e.g., Accardo & Xin, 2017; Dieker et al., 2019; Ersozlu et al., 2021; Finn et al., 2020; Garland et al., 2012; Gravett et al., 2023; Hayes et al., 2013; Hudson et al., 2018; Judge et al., 2013; Kaufman & Ireland, 2016; Landon-Hays et al., 2020; Ledger & Fischetti, 2020; McGarr, 2021; Lindberg & Jönsson, 2023).

Teacher preparation programs struggle to secure placements for their candidates that are consistently high quality and provide opportunities to practice their skills (Garland et al., 2012; Judge et al., 2013). Even when candidates are placed in practicums, observing all of them requires a significant amount of time and resources from teacher trainers (Dove et al., 2023). Providing corrective feedback to candidates, whether from observers or the participating primary teacher, can disrupt classrooms, as students must wait (Dieker et al., 2019). Furthermore, regardless of whether one believes it is better to place teachers in well-functioning schools versus struggling schools, practicum experience “often serves primarily to socialize student teachers into the existing status quo, potentially perpetuating traditional habits and norms, some of which may be undesirable” (Gravett et al., 2023).

Critical Conversations. Relatedly, some in the TeachLivE/Mursion literature justify using MRS to give preservice teachers the opportunity to practice conversing with critical stakeholders, an opportunity they would not have within their practicum. For example, Accardo and Xin (2017) noted that teacher candidates in practicums have virtually no opportunity to speak with students’ parents, which can make them hesitant to communicate with parents once they enter practice; accordingly, practicing communication skills within parent-teacher conferences is a popular simulation (e.g., Accardo & Xin, 2017; Budin, 2024; Dalinger et al., 2020; Henry et al., 2022; Kelley & Wenzel, 2019; Kilbourn & Piro, 2022; Scarparolo & Mayne, 2022). TeachLivE/Mursion MRS has been used not only to practice conversations with key stakeholders but also to practice discussing sensitive topics. For example, Hansson et al. (2023) used MRS to enable preservice teachers to discuss conspiracy theories with students.

Assessment. A final area where Lee et al. (2023) identified equity intersecting with MRS is in the assessment component. Should preservice teachers be graded on their performance in a simulator? Instead of assigning a punitive grade for performance in MRS, some researchers had participants self-assess their performance. For example, Driver and Zimmer (2022) had participants identify personal goals using a developed rubric and then use it to self-assess their video-recorded performance while generating peer feedback. Grant and Ferguson (2021) evaluated the quality of participants’ MRS reflections on discussion board posts, rather than their MRS performance, noting that a 10-minute MRS interaction provided scant evaluative benefit, whereas assessing their “reflective professional discourse about learning and teaching” was more meaningful (p. 145).

In some cases, no grade was assigned for MRS participation (Gundel et al., 2019). Still, there have been attempts to use these MRS learning experiences to grade teacher performance at scale. For example, the National Observational Teaching Examination (NOTE) assessment was the first significant effort to use Mursion MRS for high-stakes scoring purposes to evaluate teacher performance (Mikeska et al., 2019). In the spirit of the NOTE assessment, Mikeska et al. then drafted performance tasks to measure teachers’ “coordinated, accumulated, and dynamic (CAD) competencies” (p. 128). They argued that existing teacher tests suffer from construct underrepresentation in capturing the dynamic complexities of teaching. Furthermore, they noted the difficulty of standardizing traditional teacher observation, since teacher behavior depends heavily on the students and environment, leading to construct-irrelevant variance.

Criticality Considerations

A critical view of educational technology looks beyond technology’s learning outcomes to examine “the social conflicts and politics that underpin the use of technology in educational settings” (Selwyn, 2010). To operationalize criticality, we used Intentional Futures’ (2018) five design mistakes of human-centered design (i.e., status quo design, misrepresenting users, uneven power dynamics, techno-solutionism, and accountability deficits) to seek evidence of these mistakes being addressed or mitigated in the TeachLivE/Mursion literature.

Status Quo Design. According to Intentional Futures (2018), status quo design mistakes occur when designers focus on the quantitative majority and attempt to design a single solution for all students or design for the “average” student, which ultimately benefits “white, cisgender, able-bodied, neurotypical, heterosexual, med-high income students at the expense of minoritized students” (p. 11).

To see whether TeachLivE/Mursion MRS designers replicated this mistake, we first examined MRS participants. In the included research studies (n = 60), all researchers identified their participants by associating them with some aspect of education, which we broadly coded as “school,” since it took a variety of forms (e.g., participants’ education level, educational certification status, curriculum specialization, teaching level, classroom characteristics, and amount of teaching experience). Other researchers captured participants’ race, gender, and/or age. Some studies involved participants in countries outside the USA (i.e., Australia, five; South Africa, two; Sweden, one; Norway, one; Taiwan, one).

Who are these TeachLivE/Mursion MRS learning experiences designed for? The vast majority of studies that identified participants’ school, gender, race, and age included predominantly Caucasian, female participants between the ages of 20 and 40. However, there were exceptions. Aguilar and Flores (2022, 2024) worked with predominantly Hispanic-identifying participants, while Luke (2023) included predominantly participants identifying as people of color. Mikeska and Howell (2021), Mikeska and Lottero-Perdue (2022), and Mikeska et al. (2024) noted that although their participant demographics skewed toward White/Caucasian, preventing their findings from generalizing to other populations, these demographics represented the teacher workforce (Banilower et al., 2018). None of the studies specified whether their participants had disabilities.

Misrepresentation of Users. This design mistake of misrepresenting users occurs when the design team lacks diversity or designers fail to consider their positionalities during the research process, potentially perpetuating stereotypes through the design (Intentional Futures, 2018). Even when diverse voices are present, designers can still misrepresent the end user if those voices lack the power to express their concerns.

To examine this common design mistake, consider who has written about their experience designing MRS learning experiences (research studies and practitioner pieces combined; n = 72 articles). Of this subset, 86% (n = 62) were authored by USA-based writers, followed by 8% (n = 6) by Australia-based authors, 3% (n = 2) by South Africa-based authors, and 1% (n = 1) each by authors based in Canada, Sweden, Taiwan, and Norway. Two articles featured researchers from different countries (i.e., DeSantis et al., 2023; Liaw & Wu, 2021). Thus, these MRS learning experiences predominantly occur in resource-rich countries in North America, Australia, and Europe, a concern raised earlier when researchers discussed the equity of MRS access.

One way to mitigate the misrepresentation-of-users error is for designers to check their positionalities for implicit bias. In a positionality statement, or reflexivity statement, researchers explicitly identify themselves within their research (Savolainen et al., 2023). Out of the 82 included articles, only one contained an explicit positionality statement, titled “Author Subjectivity” (Keese et al., 2023). The authors noted that the participants were all their colleagues, making objectivity impossible, and they therefore included a positionality statement to increase the trustworthiness of the study.

Uneven Power Dynamics. The mistake of uneven power dynamics occurs when designers do not acknowledge the power they possess within the design process to ideate solutions for end users. Even when diverse voices and end users enter the design process, unchecked power imbalances can lead to extractive, rather than beneficial relationships (Intentional Futures, 2018). Within the TeachLivE/Mursion literature, some authors generally discussed power as a form of authority, recognizing the imbalance between teachers and technology in education. Some researchers acknowledged educators’ power in eliciting student thinking (Lee et al., 2024) and using teacher talk to control the classroom (Gillespie Rouse et al., 2023). Others reflected on their power as both instructors and researchers. They wondered if this affected participants’ evaluations of their MRS performance (Hudson et al., 2018), colored their written perceptions of the MRS experience (Lottero-Perdue et al., 2022), or influenced participants’ desire to be included in the research (Grant & Ferguson, 2021).

Beyond the teachers, authors noted the power imbalance that arises from creating tools in a politicized world. Donehower Paul et al. (2020) stated that, while creating tools to evaluate teachers can be helpful, it is often embedded in a one-size-fits-all approach to teacher preparation, which risks privileging a single set of standards and tools at the expense of appreciating the plethora of tools and forms of knowledge. Politicians promoting agendas cloaked in educational reform and neoliberal market influences, which prioritize saleable, scalable tools to measure behavior, exacerbate this problem. Furthermore, because Mursion, unlike TeachLivE, is a commercial product, Dalinger et al. (2020) proposed treating it as a research strand separate from TeachLivE.

Technosolutionism. The mistake of technosolutionism occurs when designers uncritically believe that technology alone can best fix complex social problems, often resulting in solutions that are merely Band-Aids for more significant systemic issues (Intentional Futures, 2018). Technosolutionism is “rooted in the idea that tech is the ‘great equalizer’ and is inherently ‘neutral,’ which ignores how technology often digitizes inequities and scales them to an alarming degree” (p. 11). Evidence of authors praising TeachLivE/Mursion and justifying its use was the most common code found in the literature. This makes sense, as most researchers utilizing these technologies must explain and justify their application in their studies. However, McGarr (2021) stated, “There remains a high level of positivity in relation to the benefits of this type of technology into the future — largely reflecting the broader positivity surrounding educational technology” (p. 279).

While justifying the use of this technology was expected, many authors also noted the problems with TeachLivE/Mursion MRS. One of the most common problems cited in the literature was participants’ inability to suspend their disbelief or the perception that the avatar responses felt fake (e.g., Dalinger et al., 2020; Dittrich et al., 2022; Ferguson & Sutphin, 2022; Fischetti et al., 2022; Kamhi-Stein et al., 2020; Ledger & Fischetti, 2020; Mikeska & Howell, 2021).

Interestingly, in one study, participants felt the avatars were fake because their language and behavior were too preprogrammed (Judge et al., 2013), which does not necessarily reflect the capabilities of human-in-the-loop technology. Some authors noted that their participants struggled with the technology, such as when an interactor’s broken foot pedestal made the avatars jumpy (Garland et al., 2012) or when low internet connectivity caused issues with the Zoom application (Luke et al., 2023). Others bemoaned the technology’s limitations: only seeing participants from the shoulders up (Anton et al., 2023); avatars being unable to move out of their chairs, preventing teachers from using the common classroom management technique of proximity (Hansson et al., 2023; Lindberg & Jönsson, 2023); the inability to see student work (Dove et al., 2023; Grant & Ferguson, 2021); the inability to use electronic slideshows or visual reinforcements (Ledger & Fischetti, 2020); the inability to use class attention-grabbing signals such as clapping (Ferguson & Sutphin, 2022; Hudson et al., 2018); and the inability to rearrange students into small discussion groups, a common practice in language-learning classrooms (Kamhi-Stein et al., 2020).

Some participants struggled with their desire to understand how the technology operated, which sometimes resulted in this struggle dominating their reflections (Kamhi-Stein et al., 2020). However, when participants learned about human-in-the-loop technology, they also struggled to suspend their disbelief (Ledger & Fischetti, 2020). Other authors wrestled with the decontextualized nature of these MRS learning experiences (e.g., Dove et al., 2023; Driggers, 2023; McGarr, 2021; Mikeska et al., 2019).

Participating in MRS can make people nervous (e.g., Ferguson & Sutphin, 2022; Garland et al., 2012; Henry et al., 2022). As mentioned earlier, some authors noted the scheduling headaches, problems with short sessions, and the cost of the technology. Although authors justified using MRS to observe teachers during their practicums when teacher trainers otherwise could not, some still found this MRS feedback process burdensome (Dove et al., 2023). Finally, some participants stated that this type of teaching simply was not their preferred mode of instruction (Luke et al., 2023).

Accountability Deficits. The final design mistake is creating accountability deficits, which occur when designers fail to consider the “intentional and unintentional outcomes a solution may pose” that can often exacerbate systemic inequities, particularly for marginalized populations (Intentional Futures, 2018, p. 11). Authors of TeachLivE/Mursion MRS articles frequently alluded to these deficits in the limitations sections of their papers. A central theme that emerged from these limitations involved issues with generalizing results. Some researchers noted that their work had limited generalizability because the study took place in a single setting (e.g., Accardo & Xin, 2017; Dittrich et al., 2022; Keese et al., 2023; Landon-Hays et al., 2020). Others highlighted problems related to their small sample size (e.g., Anton et al., 2023; Budin, 2024; Dove et al., 2023; Hansson et al., 2023; Kilbourn & Piro, 2022; Luke et al., 2023) or the diversity of their samples (e.g., Dalinger et al., 2020; Ely et al., 2018; Larson et al., 2020; Mikeska & Lottero‐Perdue, 2022).

Some indicated that their sample was skewed due to participants residing closer to the lab (Mikeska et al., 2019) or because their participants were early adopters of technology (Donehower Paul et al., 2020; Keese et al., 2023). Others suggested that extenuating variables might have influenced generalization, such as participants’ experiences in their teacher preparation program (Dawson & Lignugaris/Kraft, 2017).

Many researchers called for improved measures to monitor generalizability, such as identifying and replicating time and distance variables (Bondie et al., 2021) and, more specifically, replicating results across at least three different MRS sessions (Ledger & Fischetti, 2020); moving beyond reliance on teacher self-reports (McGarr, 2021); conducting longitudinal studies (Peterson-Ahmad, 2018); using Moore’s (2009) Evaluation Framework (Qualls et al., 2024); and implementing randomized controlled trial designs (Larson et al., 2020).

Although small sample sizes limit generalizability, one could utilize a more diverse small sample (Judge et al., 2013). Due to the burdensome nature of individualized feedback, DeSantis et al. (2023) recommended 15 participants as the ideal sample size for a single research team. McGarr (2021) noted that many of these studies were “pilot-type studies typified by short periods of engagement with simulations,” which demonstrates the “peripheral nature” of these technologies in teacher preparation programs, a field still in its “pioneering phase” (p. 279). Mikeska et al. (2019) also pointed out that one cannot claim that skills developed in MRS generalize to real classrooms, due to the lack of empirical evidence.

Another way authors addressed accountability deficits was by contextualizing the decontextualized nature of these simulation experiences. For example, Driggers (2023) noted the irony in TeachLivE promoting the technology as context-free when, in reality, “no such thing exists” and argued that context should be the starting point when developing teacherly dispositions (p. 2). Liaw and Wu (2021) employed Darvin and Norton’s (2015) Model of Investment to discuss this contextualization and its impact on teacherly dispositions, noting that identity, ideology, and capital all influence one’s disposition and that this shifting process is subject to the dominant ideologies of certain groups.

Bondie et al. (2021) also cautioned against practicing limited behaviors without considering these behaviors’ cultural relevance or recognizing how students’ varying abilities could negatively affect teachers’ perceptions of their own teaching ability. McGarr (2021) highlighted the danger of focusing on the decontextualized actions of teachers, stating that doing so contributes to the “performative, technical-rationalist perspective” of teaching, where good teaching is viewed as efficient classroom performance and where reflection ignores larger systemic forces (p. 282). McGarr recommended that teachers critically reflect not only on their actions and student behavior but also on the MRS by asking questions such as the following:

  • What assumptions about behavior are embedded within the simulations?
  • What does this reveal about how pupil behavior is viewed and categorized?
  • From the perspective of gender, social class, and ethnicity, how well are students served by these categorizations? (p. 282).

Donehower Paul et al. (2020) also emphasized the need for this criticality, as these MRS tools, and the ones the future brings, exist amid shifting neoliberal political forces. They called for teachers to lead the development of these tools to “prevent a CEO-driven political schema taking over the collective knowledge of the field” (p. 22). In response to the questions in the previous paragraph, Donehower Paul et al. (2020) added the following:

  • What are the current external and internal forces that we want to resist and, conversely, what is it that we want to become?
  • What do I have to do to achieve that vision? (p. 22)

Discussion

This literature review began by suggesting that TeachLivE and Mursion designers might be failing to situate these MRS learning experiences within larger intersectional forces. However, it identified many examples of authors addressing equity and criticality to better inform future MRS designers.

Although these TeachLivE/Mursion MRS learning experiences have rapidly expanded since the pandemic, as evidenced by more articles in this review published after 2020 than before, we agree with McGarr (2021) that these MRS learning experiences are still on the periphery of teacher preparation programs. A better, holistic understanding of the design process, which begins by embedding these experiences within teacher preparation programs and continues after participants enter their practice, could provide more teacher educators with opportunities to leverage these experiences to train preservice and in-service teachers.

As the literature review shows, this process predominantly occurs in North American, Australian, and European institutions with Caucasian female participants aged 20-40. While these MRS learning experiences have developed various pedagogies and dispositions, which is a testament to the technology’s broad applicability, we also agree with McGarr’s (2021) assertion that the field needs to move beyond the “pioneering” phase, focused on people’s perception of the technology, to explore more dimensions of teacher education.

Finally, we concur with Bondie et al. (2021), who emphasized the need to document the learning design. Researchers should also report the type of MRS used, the theory of change, the instructional design models, the reflection activities, and the participant/design team demographics. To Bondie et al.’s (2021) five design principles (i.e., describing the learning design, documenting the avatar/interactor learning, documenting the interactor training, accounting for time and distance, and situating the design practice amid sociocultural complexities), we add broadening diversity and cultivating criticality. The following sections present ways these two additional design principles can manifest in the design of MRS learning experiences.

Broadening Diversity and Cultivating Criticality

The MRS teacher training field urgently needs to broaden diversity, from the interactors playing the avatars to the participants accessing the MRS training and the universities facilitating these experiences.

Broadening Diversity

There are many ways that TeachLivE and Mursion can help designers mitigate bias during the backend portion of development. One way is to have employees engage in professional development on bias mitigation and the criteria used to assess it and then publish this training for end users to reference. Another method to reduce the perpetuation of racial caricatures is to utilize racially ambiguous avatars, although Bondurant and Reinholz (2023) found this approach to be insufficient. Artificial intelligence in education (AIED) researchers Baker and Hawn (2022) identified several statistical fairness measures for evaluating algorithmic bias in AIED, including the criteria of independence, separation, and sufficiency. This is especially important as MRS integrates with generative AI. However, Baker and Hawn recognized that the AIED field still lags behind other disciplines in identifying and measuring algorithmic bias and called for developing education-specific packages to audit bias and for creating reference datasets for testing new approaches.
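For readers less familiar with these three criteria, the following sketch is our own illustration, not drawn from the reviewed studies or from Baker and Hawn’s materials; the group labels, outcomes, and model predictions are invented toy data. It compares, across groups, the selection rate (independence), the true positive rate (one common operationalization of separation), and the precision of positive predictions (one common operationalization of sufficiency):

```python
# Illustrative fairness audit sketch. Group labels, outcomes, and
# predictions below are invented; real audits would use a model's
# actual outputs and richer criteria than these three summary rates.
from collections import defaultdict

def rate(pairs):
    """Proportion of positive predictions among (y_true, y_pred) pairs."""
    return sum(pred for _, pred in pairs) / len(pairs)

def fairness_report(groups, y_true, y_pred):
    by_group = defaultdict(list)
    for g, y, p in zip(groups, y_true, y_pred):
        by_group[g].append((y, p))
    report = {}
    for g, pairs in by_group.items():
        actual_pos = [yp for yp in pairs if yp[0] == 1]
        pred_pos = [yp for yp in pairs if yp[1] == 1]
        report[g] = {
            # Independence: P(prediction = 1) should match across groups.
            "selection_rate": rate(pairs),
            # Separation (via true positive rate): among truly positive
            # cases, the positive-prediction rate should match across groups.
            "tpr": rate(actual_pos) if actual_pos else None,
            # Sufficiency (via precision): among predicted positives, the
            # share of true positives should match across groups.
            "precision": (sum(y for y, _ in pred_pos) / len(pred_pos))
                         if pred_pos else None,
        }
    return report

groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
print(fairness_report(groups, y_true, y_pred))
```

In an MRS audit, the groups might correspond to participant or avatar demographics and the predictions to an automated scoring component; large gaps between groups on any of the three rates would flag the corresponding fairness criterion for closer review.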

Another way teacher training platforms could help mitigate bias is by diversifying the employees who perform the avatars. The problem of White actors portraying avatars of varying ethnicities is not unique to MRS. The game design world also struggles to diversify the pool of actors who perform avatars (Miletic, 2020). However, Miletic noted that the industry’s hesitancy to address digital blackface exacerbates the challenges that Black, Indigenous, and people of color actors face in retaining work while perpetuating the false notion of the superiority and universality of White actors’ vocal abilities. As these MRS experiences become automated, perhaps people who share demographics with the developed avatars could perform and develop them.

On the front end of the MRS learning experience design process, another way to guide the automation of these MRS learning experiences is to broaden the diversity of participants and universities facilitating these experiences. Designers leveraged MRS to bring together participants in different university pathways (i.e., Peterson-Ahmad et al., 2023) and countries (i.e., DeSantis et al., 2023; Liaw & Wu, 2021). Resource-rich universities could use their funding to partner with other institutions, enabling their teachers to collaborate across cultural and geographic boundaries. Baker and Hawn (2022), who also called for broadening participation in AI audits, suggested creating structures to incentivize openness (e.g., creating journal special issues and publication opportunities, encouraging funding for research, etc.). Similar incentives could encourage MRS design teams to seek diverse collaborations.

Cultivating Criticality

Using more critical theories to guide the design and infusing criticality throughout the design process, from ideation to implementation, could help mitigate bias in the MRS learning experience. The founders of TeachLivE have been calling for critical reflection to reduce bias in MRS design for over a decade (Dieker et al., 2014; Dieker et al., 2023). While critical reflection is integral to the MRS learning experience, bias can infiltrate the design process before it even begins (Baker & Hawn, 2022). McGarr (2021) also emphasized this, urging that critical reflection extend to the simulation experience itself.

However, some might hesitate to use this technology due to the potential perpetuation of stereotypes as one interactor portrays various avatars. Nevertheless, technology has historically been designed by privileged groups, embedding their biases and blind spots. Even when technology developers take deliberate steps to mitigate bias, they can still get it badly wrong (e.g., Google’s rollout of the Gemini image generator). Therefore, in addition to auditing MRS, broadening diversity, and using critical reflection, teacher trainers could also cultivate technoskeptical dispositions to help everyone involved in the design process continually check for issues related to themselves, the design process, the participants, and the participants’ generalization into their practice. Heath and Moore (2024) defined a technoskeptical disposition as follows:

an orientation towards technology which invites a consideration of the collateral and unintended effects of technology on individuals and society. Drawing from media ecology and critical theory, it is anchored in an assumption that technologies are not neutral, and neither are the societies into which technologies are introduced. (p. 11)

To cultivate a technoskeptical disposition, Heath and Moore (2024) recommended conducting technology audits and embedding ethics into technology design. The researchers noted that technology audits would involve everyone in the design process — from the designer to the end user — who would ask critical questions to examine the intersection of technology and ethical design and its positive and negative impacts on society and education. They also recommended embedding ethics throughout the design by identifying shared ethics-oriented philosophies to guide the process, inserting ethical consideration language into developed problem statements, charting the solution’s potential benefits and harms, involving stakeholder input throughout the design process, and centering critical questions during the design process (see Table 2).

Table 2
Technoskeptical Questions

Heath & Moore (2024): Was this technology designed ethically, and is it used ethically?
Question for MRS designers: What can we do to ensure we ethically designed this MRS learning experience?
Impact on the design:
  • Use justice-centered design methods.
  • Use participatory design.
  • Use Universal Design for Learning.
  • Document all components of the design process.

Heath & Moore (2024): Are laws that apply to our use of this technology just?
Question for MRS designers: What laws and regulations on AI-enhanced educational technology products apply to MRS learning experiences?
Impact on the design:
  • Adhere to the best AI data privacy laws.
  • Add clauses in IRB protocols to protect participants’ data for future automation.

Heath & Moore (2024): Does this technology afford or constrain democracy and justice for all people and groups?
Question for MRS designers: How does our design afford or constrain democracy and justice for all people and groups?
Impact on the design:
  • Use people from represented avatar demographics to develop, perform, and audit avatar actions.
  • Develop critical consciousness in teachers to identify and address systemic inequities in education.

Heath & Moore (2024): Are the ways the developers profit from this technology ethical?
Question for MRS designers: Are the ways the developers profit from this technology ethical?
Impact on the design:
  • Attend to best practices in AI avatar development to compensate participants.
  • Compensate people whose likeness was used to develop avatars.
  • Bring end users and community members into the design process and compensate them through equitable incentives.

Heath & Moore (2024): What are unintended and unobvious problems to which this technology might contribute? (Postman, 1997)
Question for MRS designers: What are unintended and unobvious problems to which this technology might contribute? (Postman, 1997)
Impact on the design:
  • Center this question in each stage of the design process.
  • Include this question in participants’ scaffolded reflections.

Heath & Moore (2024): In what ways does this technology afford and constrain learning opportunities about technologies?
Question for MRS designers: In what ways does our designed MRS learning experience afford and constrain learning opportunities about technologies?
Impact on the design:
  • Turn design sessions into tech literacy sessions.
  • Include McGarr’s (2021) and Donehower Paul et al.’s (2020) critical questions (i.e., What assumptions about behavior are embedded within the simulations? What does this reveal about how pupil behavior is viewed and categorized? From the perspective of gender, social class, and ethnicity, how well are students served by these categorizations? What are the current external and internal forces that we want to resist and, conversely, what is it that we want to become? What do I have to do to achieve that vision?)

Note: These technoskeptical questions were quoted from Heath, M. K., & Moore, S. (2024). Locating TPACK XK between theory and practice: Reflective practice, applied ethics, and technoskeptical dispositions. Computers and Education Open, 100204.

By integrating technoskepticism into the design process to cultivate technoskeptical dispositions in designers and end users, criticality extends beyond mere reflection and becomes embedded throughout the design.

Conclusion

This scoping review shows that TeachLivE and Mursion MRS-supported teacher training is gaining popularity after a decade of vetting. These MRS learning experiences provide authentic approximations of practice for preservice and in-service teachers, although the extent to which teachers’ practice generalizes remains a topic for further research. More thoughtful and inclusive design of these MRS learning experiences could help mitigate the bias embedded within them. This scoping review is significant because it extends the work of Bondie et al. (2021) by expanding the definition of MRS design and organizing how TeachLivE and Mursion teacher trainers have addressed equity and criticality in the literature. As a result, we propose adding broadening diversity and cultivating criticality to the design principles uncovered in this review. Infusing criticality into an equitable design process with technoskepticism could help ensure these MRS teacher trainings serve diverse participants while preparing them to fight inequities in their practice.

MRS teacher training is not a panacea for the inequality that exists in education. However, MRS designers can avoid exacerbating these harms by designing and implementing tools, strategies, and approaches to educational technology design that include the voices of students, teachers, and the community. If the future of education is to include automated, simulated teacher assessments, now is the time to vigorously interrogate and iterate on these MRS learning experiences to ensure that this direction is, after all, the right one.

References

Accardo, A., & Xin, J. (2017). Using technology-based simulations to promote teacher candidate parental collaboration and reflective instructional decision making. Journal of Technology and Teacher Education, 25(4), 475–494.

Aguilar, J. J., & Flores, Y. (2022). Analyzing the effectiveness of using mixed-reality simulations to develop elementary pre-service teacher’s high-leverage practices in a mathematics methods course. EURASIA Journal of Mathematics, Science and Technology Education, 18(5). https://doi.org/10.29333/ejmste/12006

Aguilar, J. J., & Kang, S. (2023). Innovating with in-service mathematics teachers’ professional development: The intersection among mixed-reality simulations, approximation-of-practice, and technology-acceptance. International Electronic Journal of Mathematics Education, 18(4), em0750.

Anderson, H., Boodhwani, A., & Baker, R. S. (2019). Assessing the fairness of graduation predictions. In Proceedings of the 12th International Conference on Educational Data Mining (pp. 488–491). International Educational Data Mining Society.

Anderson, E., & Calandra, B. (2025). Equitable opportunities to respond: A mixed-reality simulation design case. International Journal of Designs for Learning, 16(1), 94–107. https://doi.org/10.14434/ijdl.v16i1.37067

Anton, S., Piro, J. S., Delcourt, M. A., & Gundel, E. (2023). Pre-service teachers’ coping and anxiety within mixed-reality simulations. Social Sciences, 12(3), 146. https://doi.org/10.3390/socsci12030146

Arksey, H., & O’Malley, L. (2005). Scoping studies: towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19–32. https://doi.org/10.1080/1364557032000119616

Baker, R. S., & Hawn, A. (2022). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 32, 1052–1092. https://doi.org/10.1007/s40593-021-00285-9

Banilower, E. R., Smith, P. S., Malzahn, K. A., Plumley, C. L., Gordon, E. M., & Hayes, M. L. (2018). Report of the 2018 NSSME+. Horizon Research, Inc.

Benedict, T. J. (2022). The computer got it wrong: Facial recognition technology and establishing probable cause to arrest. Washington and Lee Law Review, 79, 849.

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. John Wiley & Sons.

Berg, C., Dieker, L., & Scolavino, R. (2023). Using a virtual avatar teaching simulation and an evidence-based teacher observation tool: A synergistic combination for teacher preparation. Education Sciences, 13(7), 744. https://doi.org/10.3390/educsci13070744

Bondie, R., Mancenido, Z., & Dede, C. (2021). Interaction principles for digital puppeteering to promote teacher learning. Journal of Research on Technology in Education, 53(1), 107–123. https://doi.org/10.1080/15391523.2020.1823284

Bondie, R., & Dede, C. (2020). Redefining and transforming field experiences in teacher preparation through personalized mixed reality simulations. In R. E. Ferdig & K. E. Pytash (Eds.), What teacher educators should have learned from 2020 (pp. 229-242). Association for the Advancement of Computing in Education.

Bondurant, L., & Reinholz, D. (2023). “Rahul is a math nerd” and “Mia can be a drama queen”: How mixed-reality simulations can perpetuate racist and sexist stereotypes. Mathematics Teacher Educator, 11(3), 189–209. https://doi.org/10.5951/MTE.2021-0041

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.

Bridgeman, B., Trapani, C., & Attali, Y. (2009). Considering fairness and validity in evaluating automated scoring [Paper presentation]. Annual meeting of the National Council on Measurement in Education, San Diego, CA, United States.

Bridgeman, B., Trapani, C., & Attali, Y. (2012). Comparison of human and machine scoring of essays: differences by gender, ethnicity, and country. Applied Measurement in Education, 25(1), 27–40. https://doi.org/10.1080/08957347.2012.635502

Budin, S. (2024). Three approaches to using mixed reality simulations for teacher preparation and recruitment of future teachers. Education Sciences, 14(1), 75. https://doi.org/10.3390/educsci14010075

Cao, Y. (2022). Developing teachers’ contingent responsiveness in dialogic science teaching via mixed-reality simulations: a design-based study (Doctoral dissertation, University of Cambridge). https://doi.org/10.17863/CAM.90530

Chen, C. Y. (2020). New approach of teacher training: The virtual reality simulation. Journal of Education Research, 313, 19–34. https://doi.org/10.3966/168063602020050313002

Cohen, J., Wong, V., Krishnamachari, A., & Berlin, R. (2020). Teacher coaching in a simulated environment. Educational Evaluation and Policy Analysis, 42(2), 208–231. https://doi.org/10.3102/0162373720906

Dalinger, T., Thomas, K. B., Stansberry, S., & Xiu, Y. (2020). A mixed reality simulation offers strategic practice for pre-service teachers. Computers & Education, 144, 103696.

Darvin, R., & Norton, B. (2015). Identity and a model of investment in applied linguistics. Annual Review of Applied Linguistics, 35, 36–56. https://doi.org/10.1017/S0267190514000191

Dawson, M. R., & Lignugaris/Kraft, B. (2017). Meaningful practice: Generalizing foundation teaching skills from TLE TeachLivE™ to the classroom. Teacher Education and Special Education, 40(1), 26–50. https://doi.org/10.1177/0888406416664184

DeSantis, W. J., Delcourt, M. A., Shore, B. M., & Greenwood, J. C. (2023). Impact of data-driven feedback and coaching on preservice teachers’ questioning skills for higher-order thinking within a mixed-reality simulation environment. Education Sciences, 13(6), 596. https://doi.org/10.3390/educsci13060596

Dieker, L., Hughes, C., & Hynes, M. (2023). The past, the present, and the future of the evolution of mixed reality in teacher education. Education Sciences, 13(11), 1070. https://doi.org/10.3390/educsci13111070

Dieker, L. A., Hughes, C. E., Hynes, M. C., & Straub, C. (2017). Using simulated virtual environments to improve teacher performance. School-University Partnerships, 10(3), 62–81.

Dieker, L. A., Hynes, M. C., Hughes, C. E., Hardin, S., & Becht, K. (2015). TLE TeachLivE™: Using technology to provide quality professional development in rural schools. Rural Special Education Quarterly, 34(3), 11–16. https://doi.org/10.1177/875687051503400303

Dieker, L., Hynes, M., Hughes, C., & Smith, E. (2008). Implications of mixed reality and simulation technologies on special education and teacher preparation. Focus on Exceptional Children, 40(6), 1.

Dieker, L. A., Rodriguez, J. A., Lignugaris/Kraft, B., Hynes, M. C., & Hughes, C. E. (2014). The potential of simulated environments in teacher education: Current and Future possibilities. Teacher Education and Special Education, 37(1), 21–33. https://doi.org/10.1177/0888406413512683

Dieker, L. A., Straub, C. L., Hughes, C. E., Hynes, M. C., & Hardin, S. (2014). Learning from virtual students. Educational Leadership, 71(8), 54–58. https://www.learntechlib.org/p/153594/

Dieker, L. A., Straub, C., Hynes, M., Hughes, C. E., Bukathy, C., Bousfield, T., & Mrstik, S. (2019). Using virtual rehearsal in a simulator to impact the performance of science teachers. International Journal of Gaming and Computer-Mediated Simulations, 11(4), 1–20. https://doi.org/10.4018/IJGCMS.2019100101

Dittrich, L., Aagaard, T., & Hjukse, H. (2022). The perceived affordances of simulation-based learning: Online student teachers’ perspectives. International Journal of Educational Technology in Higher Education, 19(1), 60. https://doi.org/10.1186/s41239-022-00366-2

Donehower Paul, C., Bukaty, C. A., & Dieker, L. (2020). Teacher professional learning using simulation: A Delphi study. Teacher Development, 24(1), 21–32. https://doi.org/10.1080/13664530.2019.1694574

Dove, A., Borland, J., Wiley, C. R., Moylan, A., Thacker, A., & Dunleavy, M. (2023). The potential of simulation assessments in professional development. Journal of Educational Technology Systems, 51(3), 340–371. https://doi.org/10.1177/00472395221138789

Driggers, K. (2023). Virtual training, virtual teachers: On capacities and being-at-work. Studies in Philosophy and Education, 42(6), 585-597. https://doi.org/10.1007/s11217-023-09898-0

Driver, M., Zimmer, K., & Murphy, K. (2018). Using mixed reality simulations to prepare preservice special educators for collaboration in inclusive settings. Journal of Technology and Teacher Education, 26(1), 57–77.

Driver, M. K., & Zimmer, K. (2022). A guide to integrating mixed-reality simulation in initial and advanced special education programs. Journal of Special Education Preparation, 2(1), 48–57. https://doi.org/10.33043/JOSEP.2.1.48-57

Ely, E., Alves, K. D., Dolenc, N. R., Sebolt, S., & Walton, E. A. (2018). Classroom simulation to prepare teachers to use evidence-based comprehension practices. Journal of Digital Learning in Teacher Education, 34(2), 71–87. https://doi.org/10.1080/21532974.2017.1399487

Eriksson, G. (2022). Pretense or belief: Creating meaningful scenarios and simulations for authentic learning about diverse underserved gifted students. Education Sciences, 12(8), 532. https://doi.org/10.3390/educsci12080532

Ersozlu, Z., Ledger, S., Ersozlu, A., Mayne, F., & Wildy, H. (2021). Mixed-reality learning environments in teacher education: An analysis of TeachLivE™ research. Sage Open, 11(3), 1–10. https://doi.org/10.1177/21582440211032155

Ferguson, S., & Sutphin, L. (2022). Analyzing the impact on teacher preparedness as a result of using Mursion as a risk-free microteaching experience for pre-service teachers. Journal of Educational Technology Systems, 50(4), 432–447. https://doi.org/10.1177/00472395211067731

Finn, M., Phillipson, S., & Goff, W. (2020). Reflecting on diversity through a simulated practicum classroom: A case of international students. Journal of International Students, 10, 71–85.

Fischetti, J., Ledger, S., Lynch, D., & Donnelly, D. (2022). Practice before practicum: Simulation in initial teacher education. The Teacher Educator, 57(2), 155–174. https://doi.org/10.1080/08878730.2021.1973167

Garland, K. V., Vasquez III, E., & Pearl, C. (2012). Efficacy of individualized clinical coaching in a virtual reality classroom for increasing teachers’ fidelity of implementation of discrete trial teaching. Education and Training in Autism and Developmental Disabilities, 47(4), 502-515. https://doi.org/10.1177/215416471204700411

Gillespie Rouse, A., Young, M. K., & Gifford, D. (2023). Exploring relationships between pre-service teachers’ self-efficacy for writing and instruction provided in simulated elementary writing conferences. Frontiers in Psychology, 14, 1214086. https://doi.org/10.3389/fpsyg.2023.1214086

Grant, M., & Ferguson, S. (2021). Virtual microteaching, simulation technology & curricula: A recipe for improving prospective elementary mathematics teachers’ confidence and preparedness. Journal of Technology and Teacher Education, 29(2), 137–164. https://doi.org/10.70725/800773mdpaeo

Gravett, S., Van der Merwe, D., Ramsaroop, S., Tshabalala, P., Bremner, C., & Mello, P. (2023). Mixed-reality simulation to support practice learning of preservice teachers. Education Sciences, 13(10), 1062. https://doi.org/10.3390/educsci13101062

Gundel, E., Piro, J. S., Straub, C., & Smith, K. (2019). Self-efficacy in mixed reality simulations: Implications for preservice teacher education. The Teacher Educator, 54(3), 244-269. https://doi.org/10.1080/08878730.2019.1591560

Haddaway, N. R., Collins, A. M., Coughlin, D., & Kirk, S. (2015). The role of Google Scholar in evidence reviews and its applicability to grey literature searching. PloS One, 10(9), 1–17. https://doi.org/10.1371/journal.pone.0138237

Hansson, P. O., Samuelsson, M., & Höög, M. L. (2023). Teaching avatars on controversial issues: Lessons learned. IAFOR Journal of Education, 11(2), 61-78. https://doi.org/10.22492/ije.11.2.03

Hartle, L., & Kaczorowski, T. (2019). The positive aspects of Mursion when teaching higher education students. Quarterly Review of Distance Education, 20(4), 71-100. https://doi.org/10.1108/QRDE-07-2020-0005

Hayes, A. T., Straub, C. L., Dieker, L. A., Hughes, C. E., & Hynes, M. C. (2013). Ludic learning: Exploration of TLE TeachLivE™ and effective teacher training. International Journal of Gaming and Computer-Mediated Simulations, 5(2), 20–33. https://doi.org/10.4018/jgcms.2013040102

Heath, M. K., Gleason, B., Mehta, R., & Hall, T. (2024). More than knowing: Toward collective, critical, and ecological approaches in educational technology research. Educational Technology Research and Development, 72, 2519-2541. https://doi.org/10.1007/s11423-023-10242-z

Heath, M. K., & Moore, S. (2024). Locating TPACK XK between theory and practice: Reflective practice, applied ethics, and technoskeptical dispositions. Computers and Education Open, 7, 100204. https://doi.org/10.1016/j.caeo.2024.100204

Henry, J. J., Kindzierski, C., Budin, S. E., Tryjankowski, A. M., & Henry, A. R. (2022). Preparing teacher candidates for successful communication with diverse families using simulations. Teacher Educators’ Journal, 15(1), 46-76.

Horn, A. L., Rock, M. L., Chezan, L. C., Bobzien, J. L., Karadimou, O., & Alturki, A. (2023). Effects of e coaching on the occurrence, equity, and variety of behavior specific praise during Mursion™ simulations. Journal of Special Education Technology, 38(4), 501-514. https://doi.org/10.1177/01626434231152893

Hu, Q., & Rangwala, H. (2020). Towards fair educational data mining: A case study on detecting at-risk students. In Proceedings of the 13th international conference on educational data mining (pp. 431–437). International Educational Data Mining Society.

Hudson, M. E., Voytecki, K. S., & Zhang, G. (2018). Mixed-reality teaching experiences improve preservice special education students. Journal for Virtual Worlds Research, 11(2). https://jvwr-ojs-utexas.tdl.org/jvwr/article/view/7308

Hudson, M. E., Voytecki, K. S., Owens, T. L., & Zhang, G. (2019). Preservice teacher experiences implementing classroom management practices through mixed-reality simulations. Rural Special Education Quarterly, 38(2), 79–94. https://doi.org/10.1177/8756870519841421

Hwang, J., Hong, S., Eom, T., & Lim, C. (2024). Enhancing pre-service teachers’ competence with a generative artificial intelligence-enhanced virtual reality simulation. In R. Lindgren, T. I. Asino, E. A. Kyza, C. K. Looi, D. T. Keifert, & E. Suárez (Eds.), Proceedings of the 18th International Conference of the Learning Sciences—ICLS 2024 (pp. 24–27). International Society of the Learning Sciences.

Ilagan, M., Klebanov, B. B., & Mikeska, J. (2024). Automated evaluation of teacher encouragement of student-to-student interactions in a simulated classroom discussion. In Proceedings of the 19th workshop on Innovative Use of NLP for Building Educational Applications (pp. 182–198). Association for Computational Linguistics.

Intentional Futures. (2020). An introduction to equity-centered design. Bill and Melinda Gates Foundation.

Ireland, A. L. (2021). Mixed reality simulation in teacher preparation programs in the United States [Doctoral dissertation, University of California, Los Angeles]. ProQuest Dissertations & Theses Global. https://escholarship.org/uc/item/2862n1vw

Judge, S., Bobzien, J., Maydosz, A., Gear, S., & Katsioloudis, P. (2013). The use of visual-based simulated environments in teacher preparation. Journal of Education and Training Studies, 1(1). https://doi.org/10.11114/jets.v1i1.41

Kamhi-Stein, L. D., Lao, R. S., & Issagholian, N. (2020). The future is now: Implementing mixed-reality learning environments as a tool for language teacher preparation. TESL-EJ, 24(3).

Kaufman, D., & Ireland, A. (2016). Enhancing teacher education with simulations. TechTrends, 60, 260–267. https://doi.org/10.1007/s11528-016-0049-0

Keese, J., Ford, D. J., Luke, S. E., & Vaughn, S. M. (2023). An individualized professional development approach for training university faculty in using a technological tool. Education and Information Technologies, 28(11), 14577–14594. https://doi.org/10.1007/s10639-023-11792-8

Kelley, M. J., & Wenzel, T. (2019). How TeachLivE™ transformed our teaching practices in reading education and pre-service. SRATE Journal, 28(1), 9–22.

Kilbourn, E., & Piro, J. S. (2022). Developing teacher identity in the liminal space of simulations. International Journal of Teacher Education and Professional Development, 5(1), 1–21. https://doi.org/10.4018/IJTEPD.313938

Landon-Hays, M., Peterson-Ahmad, M. B., & Frazier, A. D. (2020). Learning to teach: How a simulated learning environment can connect theory to practice in general and special education educator preparation programs. Education Sciences, 10(7), 184. https://doi.org/10.3390/educsci10070184

Larson, K. E., Hirsch, S. E., McGraw, J. P., & Bradshaw, C. P. (2020). Preparing preservice teachers to manage behavior problems in the classroom: The feasibility and acceptability of using a mixed-reality simulator. Journal of Special Education Technology, 35(2), 63–75. https://doi.org/10.1177/0162643419836415

Ledger, S., & Fischetti, J. (2020). Micro-teaching 2.0: Technology as the classroom. Australasian Journal of Educational Technology, 36(1), 37-54. https://doi.org/10.14742/ajet.4561

Lee, C., Lee, T., Dickerson, D., Castles, R., & Vos, P. (2021). Comparison of peer-to-peer and virtual simulation rehearsals in eliciting student thinking through number talks. Contemporary Issues in Technology and Teacher Education, 21(2), 294–324. https://citejournal.org/volume-21/issue-2-21/mathematics/comparison-of-peer-to-peer-and-virtual-simulation-rehearsals-in-eliciting-student-thinking-through-number-talks

Lee, C. W., Bondurant, L., Sapkota, B., Howell, H., & Lai, Y. (2023). Conceptualizing ethics, authenticity, and efficacy of simulations in teacher education. In Proceedings of the 45th annual meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education, 2 (pp. 602–610). University of Nevada, Reno.

Lee, H., & Kizilcec, R. F. (2020). Evaluation of fairness trade-offs in predicting student success. arXiv. https://arxiv.org/abs/2007.00088

Lee, T. D., Lee, C., Newton, M., Vos, P., Gallagher, J., Dickerson, D., & Regenthal, C. (2024). Peer to peer vs. virtual rehearsal simulation rehearsal contexts: Elementary teacher candidates’ scientific discourse skills explored. Journal of Science Teacher Education, 35(1), 63–84. https://doi.org/10.1080/1046560X.2023.2181505

Levac, D., Colquhoun, H., & O’Brien, K. K. (2010). Scoping studies: Advancing the methodology. Implementation Science, 5, 1–9. https://doi.org/10.1186/1748-5908-5-69

Levinson, M., Geron, T., & Brighouse, H. (2022). Conceptions of educational equity. AERA Open, 8, 23328584221121344. https://doi.org/10.1177/23328584221121344

Liaw, M. L., & Wu, S. (2021). Exploring L2 teacher identities in an intercultural telecollaborative mixed-reality teaching environment. CALICO Journal, 38(3). https://doi.org/10.1558/cj.20169

Lindberg, S., & Jönsson, A. (2023). Preservice teachers training with avatars: A systematic literature review of “human-in-the-loop” simulations in teacher education and special education. Education Sciences, 13(8), 817. https://doi.org/10.3390/educsci13080817

Lottero-Perdue, P. S., Mikeska, J. N., & Nester, M. S. (2022). Using preservice teachers’ transcript coding of simulated argumentation discussions to characterize aspects of their noticing about argument construction and critique. Contemporary Issues in Technology and Teacher Education, 22(1), 105–139. https://citejournal.org/volume-22/issue-1-22/science/using-preservice-teachers-transcript-coding-of-simulated-argumentation-discussions-to-characterize-aspects-of-their-noticing-about-argument-construction-and-critique

Luke, S. E., Ford, D. J., Vaughn, S. M., & Fulchini-Scruggs, A. (2023). An online field experience using mixed reality virtual simulation. Journal of Research on Technology in Education, 55(2), 324–343. https://doi.org/10.1080/15391523.2021.1962452

Macgilchrist, F., Allert, H., Cerratto Pargman, T., & Jarke, J. (2024). Designing postdigital futures: Which designs? Whose futures? Postdigital Science and Education, 6(1), 13–24. https://doi.org/10.1007/s42438-022-00389-y

Macgilchrist, F., Potter, J., & Williamson, B. (2024). Challenging the inequitable impacts of edtech. Learning, Media, and Technology, 49(2), 147–150. https://doi.org/10.1080/17439884.2024.2350117

McGarr, O. (2021). The use of virtual simulations in teacher education to develop pre-service teachers’ behavior and classroom management skills: implications for reflective practice. Journal of Education for Teaching, 47(2), 274–286. https://doi.org/10.1080/02607476.2020.1733398

McKown, G., Hirsch, S. E., Carlson, A., Allen, A. A., & Walters, S. (2022). Preservice special education teachers’ perceptions of mixed-reality simulation experiences. Journal of Digital Learning in Teacher Education, 38(1), 4–19. https://doi.org/10.1080/21532974.2021.1995796

Mikeska, J. N., Howell, H., & Straub, C. (2019). Using performance tasks within simulated environments to assess teachers’ ability to engage in coordinated, accumulated, and dynamic (CAD) competencies. International Journal of Testing, 19(2), 128–147. https://doi.org/10.1080/15305058.2018.1551223

Mikeska, J. N., & Howell, H. (2020). Simulations as practice‐based spaces to support elementary teachers in learning how to facilitate argumentation‐focused science discussions. Journal of Research in Science Teaching, 57(9), 1356–1399. https://doi.org/10.1002/tea.21659

Mikeska, J. N., & Howell, H. (2021). Authenticity perceptions in virtual environments. Information and Learning Sciences, 122(7/8), 480–502. https://doi.org/10.1108/ILS-10-2020-0234

Mikeska, J. N., & Lottero‐Perdue, P. S. (2022). How preservice and in‐service elementary teachers engage student avatars in scientific argumentation within a simulated classroom environment. Science Education, 106(4), 980–1009. https://doi.org/10.1002/sce.21726

Mikeska, J. N., Lottero-Perdue, P. S., & Kinsey, D. (2024). Using videos as a tool for self-reflection: The nature of in-service elementary teachers’ reflections on their ability to facilitate argumentation-focused discussions in a simulated classroom. Journal of Science Education and Technology, 33, 316-332. https://doi.org/10.1007/s10956-023-10085-6

Miletic, P. (2020). Avatar’n’Andy: The colour blind ideology in video game voice acting. The Journal of the Canadian Game Studies Association, 13(21), 34–54. https://doi.org/10.7202/1071450ar

Moore, D. E., Green, J. S., & Gallis, H. A. (2009). Achieving desired results and improved outcomes: Integrating planning and assessment throughout learning activities. Journal of Continuing Education in the Health Professions, 29(1), 1–15. https://doi.org/10.1002/chp.20001

Ocumpaugh, J., Baker, R., Gowda, S., Heffernan, N., & Heffernan, C. (2014). Population validity for educational data mining models: A case study in affect detection. British Journal of Educational Technology, 45(3), 487–501. https://doi.org/10.1111/bjet.12156

Ouzzani, M., Hammady, H., Fedorowicz, Z., & Elmagarmid, A. (2016). Rayyan—a web and mobile app for systematic reviews. Systematic Reviews, 5(1), 210. https://doi.org/10.1186/s13643-016-0384-4

Peterson-Ahmad, M. (2018). Enhancing pre-service special educator preparation through combined use of virtual simulation and instructional coaching. Education Sciences, 8(1), 10. https://doi.org/10.3390/educsci8010010

Peterson-Ahmad, M. B., Pemberton, J., & Hovey, K. A. (2018). Virtual learning environments for teacher preparation. Kappa Delta Pi Record, 54(4), 165–169. https://doi.org/10.1080/00228958.2018.1515544

Peterson-Ahmad, M. B., Keeley, R., & Frazier, A. (2023). Using mixed reality to support inclusive teaching strategies in general and special education preparation programs. Social Sciences, 12(11), 596. https://doi.org/10.3390/socsci12110596

Qualls, L. W., Carlson, A., Scott, S. N., Cunningham, J. E., & Hirsch, S. E. (2024). Special education teachers’ preservice experiences with mixed-reality simulation: A systematic review. Teacher Education and Special Education, 47(2), 124–141. https://doi.org/10.1177/08884064231226255

Randolph, K. M., Billingsley, G. M., & Thomas, C. N. (2024). Using behavior skills training and virtual simulations to train preservice practitioners in behavior management: An exploratory comparison study. The Journal of Special Education Apprenticeship, 13(1), 3. https://doi.org/10.58729/2167-3454.1195

Rosati-Peterson, G. L., Piro, J. S., Straub, C., & O’Callaghan, C. (2021). A nonverbal immediacy treatment with pre-service teachers using mixed reality simulations. Cogent Education, 8(1), 1882114. https://doi.org/10.1080/2331186X.2021.1882114

Savolainen, J., Casey, P. J., McBrayer, J. P., & Schwerdtle, P. N. (2023). Positionality and its problems: Questioning the value of reflexivity statements in research. Perspectives on Psychological Science, 18(6), 1331–1338. https://doi.org/10.1177/17456916221144988

Scarparolo, G., & Mayne, F. (2022). Mixed-reality simulations as a tool to enhance parent-teacher conferencing in initial teacher education. Australasian Journal of Educational Technology, 38(5), 62–76. https://doi.org/10.14742/ajet.7327

Selwyn, N. (2010). Looking beyond learning: Notes towards the critical study of educational technology. Journal of Computer Assisted Learning, 26(1), 65-73. https://doi.org/10.1111/j.1365-2729.2009.00338.x

Vasquez III, E., Marino, M. T., Donehower, C., & Koch, A. (2017). Functional analysis in virtual environments. Rural Special Education Quarterly, 36(1), 17-24. https://doi.org/10.1177/8756870517703405

Wang, Z., Zechner, K., & Sun, Y. (2018). Monitoring the performance of human and automated scores for spoken responses. Language Testing, 35(1), 101–120. https://doi.org/10.1177/0265532216679451

Wernick, A. M., Conry, J. M., & Ware, P. D. (2021). Coaching in the time of coronavirus 2019: How simulations spark reflection. International Journal of Mentoring and Coaching in Education, 10(2), 216–233. https://doi.org/10.1108/IJMCE-01-2021-0007


Appendix
Summary of Included Articles

Authors | Summary of Task | Geographic Region of Research | Collected Participant Demographics
Research Studies Developing High-Leverage/Evidence-Based Practices: 41
Preservice Teachers: 33
Accardo & Xin (2017) | PSTs (1) facilitate an effective parent-teacher conference, (2) present professional communication, and (3) make appropriate instructional decisions while discussing a child’s 504 plan. | USA | Race, Gender, Age, School
Aguilar & Flores (2022) | PSTs use productive mathematical talk moves (PMTMs) during a clinical interview. | USA | Race, Age, Gender, School
Cohen et al. (2020) | PSTs redirect off-task behaviors during a “norm-setting discussion.” | USA | Race, Gender, Age, School
DeSantis et al. (2023) | PSTs use higher-order thinking (HOT) questioning skills while conducting a lesson related to their area of certification. | USA, CANADA (study participants in USA) | Gender, Age, School
Driver et al. (2018) | PSTs use communication skills when working with paraprofessionals, a general education co-teacher, a parent, and an administrator. | USA | Race, Gender, Age, School
Ely et al. (2018) | PSTs conduct collaborative strategic readings. | USA | Race (Uni), Gender, Age, School
Ferguson & Sutphin (2022) | PSTs create and implement an engaging 5E lesson plan. | USA | School
Finn et al. (2020) | PSTs (international) develop lesson plans attending to culture, differences, and diversity. | AUSTRALIA | School
Fischetti et al. (2022) | PSTs developed confidence, planned for diverse learners, understood personalizing pedagogy, and engaged with classroom management while micro-teaching in MRS. | AUSTRALIA | Age, Gender, School
Garland et al. (2012) | PSTs conduct Discrete Trial Teaching. | USA | Age, Gender, School
Gillespie Rouse et al. (2023) | PSTs use instructional moves (describe, expand, affirm, manage, instruct) during writing conferences. | USA | Race, Gender, School
Grant & Ferguson (2021) | PSTs lead student-centered discussions after problem-solving. | USA | School
Gravett et al. (2023) | PSTs use questioning skills to elicit and expand upon students’ prior knowledge during the lesson introduction phase. | SOUTH AFRICA | School
Gundel et al. (2019) | PSTs engage in a variety of teaching tasks (e.g., discussion facilitation, use of a graphic organizer, running a formative assessment). | USA | Race (Uni), Gender (Uni), School (Uni)
Henry et al. (2022) | PSTs’ confidence and preparedness for communicating with diverse families. | USA | School
Horn et al. (2023) | PSTs deliver behavior-specific praise after eCoaching. | USA | Race, Gender, School
Hudson et al. (2018) | PSTs manage a classroom when introducing a student, using a group alert strategy, and introducing a unit of study. | USA | Race, Age, Gender, School
Hudson et al. (2019) | PSTs manage a classroom when introducing a student, using a group alert strategy, and introducing a unit of study. | USA | Race, Age, Gender, School
Judge et al. (2013) | PSTs’ ability to use differential reinforcement of incompatible behavior. | USA | Age, Gender, School
Landon-Hays et al. (2020) | PSTs engage in high-leverage practices (i.e., strategic teaching, collaboration, differentiation, providing feedback) after redesigning special/general education courses. | USA | Age, Gender, School
Larson et al. (2020) | PSTs’ gap between their intended and actual ability to manage a classroom. | USA | Race, Age, Gender, School
Ledger et al. (2019) | PSTs’ preferred instructional strategies when micro-teaching. | AUSTRALIA | School
Ledger & Fischetti (2020) | PSTs engage in various teaching tasks/skills while micro-teaching. | AUSTRALIA | School
Lottero-Perdue et al. (2022) | PSTs’ noticing of argument construction/critique after coding transcripts. | USA | Race, Gender, School
Luke et al. (2023) | PSTs gain students’ attention, engage students during a lesson, and facilitate a discussion. | USA | Race, Age, School
Mikeska & Howell (2020) | PSTs enact scientific argumentation. | USA | Race, Gender, School
Mikeska & Lottero‐Perdue (2022) | PSTs versus ISTs enact scientific argumentation. | USA | Race, Gender, School
Nel & Marais (2023) | PSTs teach core reading skills (e.g., eliciting background information on informational text). | SOUTH AFRICA | School
Pankowski & Walker (2016) | PSTs’ (traditional versus alternative certification) classroom management styles (e.g., control, care, self-regulation, other). | USA | Race, Age, Gender, School
Peterson-Ahmad (2018) | PSTs teach a lesson on self-management while providing opportunities to respond. | USA | School
Peterson-Ahmad et al. (2023) | PSTs (general and special education) explain and model content, coordinate and adjust instruction, and check student understanding. | USA | School
Scarparolo & Mayne (2022) | PSTs’ communication skills while explaining differentiation during a parent-teacher meeting. | AUSTRALIA | Gender, School
Vince Garland et al. (2016) | PSTs use the system of least prompts while conducting a reading comprehension lesson. | USA | Race, Age, Gender, School
In-Service Teachers: 8
Aguilar & Kang (2023) | ISTs use productive mathematical talk moves during a mathematical discussion. | USA | Race, Age, Gender, School
Dawson & Lignugaris/Kraft (2017) | ISTs deliver specific praise, praise around, and error correction while teaching a language arts vocabulary lesson. | USA | Race, Age, Gender, School
Dieker et al. (2017) | ISTs facilitate a whole-class discussion while eliciting and interpreting individual students’ thinking. | USA | Race, Age, Gender, School
Dieker et al. (2019) | ISTs elicit and interpret individual students’ thinking (HLP #3). | USA | Race, Age, Gender, School
Fraser et al. (2020) | ISTs conduct Discrete Trial Teaching. | USA | Age, Gender, School
Pas et al. (2016) | ISTs use evidence-based interventions with students with ASD to address behavior problems. | USA | Race, Gender, School
Pas et al. (2019) | ISTs detect, prevent, and respond to bullying in the classroom. | USA | Race, Gender, School
Vasquez et al. (2017) | ISTs conduct functional analysis assessments. | USA | Gender, School
Research Studies Developing Teacher Identity: 9
Preservice Teachers: 7
Anton et al. (2023) | PSTs practice anxiety-coping skills while using a graphic organizer and building rapport with a parent while discussing the child’s academic progress. | USA | School
Ford et al. (2023) | PSTs’ development of metacognitive awareness while leading a whole-group discussion. | USA | Age, Gender, School
Hansson et al. (2023) | PSTs’ general knowledge, pedagogical content knowledge, and content knowledge to teach controversial issues (i.e., conspiracy theories about the pandemic, vaccinations, fake news, and source criticism). | SWEDEN | School
Kilbourn & Piro (2022) | PSTs’ liminal growth of teacher identity during parent-teacher conferences. | USA | Race, Age, Gender, School
Liaw & Wu (2021) | PSTs’ (2 Taiwanese/1 American) identity and how they aspired to become good teachers when practicing eliciting background knowledge. | TAIWAN, USA | Gender, School
Piro & O’Callaghan (2019) | PSTs’ teacher identities (preprofessional, liminal, and trending toward professional) while (1) managing a classroom, (2) using a graphic organizer, and (3) using higher-order questioning (9 simulations with various tasks). | USA | School
Rosati-Peterson et al. (2021) | PSTs improve nonverbal immediacy behaviors. | USA | Race, Gender, School
In-Service Teachers: 2
Mikeska et al. (2024) | ISTs’ self-reflections after reviewing video recordings of themselves engaging in scientific argumentation. | USA | Race, Gender, School
Wernick et al. (2021) | MRS plus coaching facilitates PST/IST reflection. | USA | Race, Gender, School
Research Studies on People’s Perception of MRS: 8
Dalinger et al. (2020) | Past PSTs’ perceptions of MRS as they led middle school students in discussion, taught middle school students about Cornell Notes, or led a parent-teacher conference with an adult avatar. | USA | Gender, School
Dittrich et al. (2022) | PSTs’ perception of affordances in using TeachLivE to teach lessons on pedagogy or mathematics didactics. | NORWAY | Age, Gender, School
Donehower Paul et al. (2020) | Delphi methodology used to validate a list of teacher behaviors that can be developed with MRS. | USA | School
Dove et al. (2023) | Compares TeachLivE and simSchool in helping avatar students solve word problems. | USA | School
Keese et al. (2023) | Faculty professional development to use TeachLivE in their classes. | USA | Gender, School
Lee et al. (2021) | Compares TeachLivE to peer rehearsal in developing PSTs’ eliciting strategies (i.e., launch, uptake, pushback, connection, literal, uptake-literal, repeat, provides information, think aloud). | USA | School
Lee et al. (2024) | Compares TeachLivE to peer rehearsal in developing PSTs’ teacher talk moves (i.e., revoicing, restating other students’ ideas, applying own reasoning to others’ reasoning, prompting students for further participation, asking students to explain their reasoning, using wait time). | USA | Race, Gender, School
Mikeska & Howell (2021) | PSTs’ perception of authenticity (i.e., task authenticity, student avatar authenticity, and performance authenticity) when facilitating a scientific argumentation discussion. | USA | Race, Gender, School
Research Studies on Developing Tools With MRS: 2
Barmaki & Hughes (2018) | Developed an automated posture feedback application while PSTs practiced nonverbal communication by asking the virtual students about technology. | USA | Gender, School
Berg et al. (2023) | Developed a web-based teacher observation app called SeeMeTeach that provides a platform for evidence-based teacher observations both within the simulator and in real classroom settings. | USA | School
Practitioner Pieces and Generic Descriptions of MRS Use: 11
Budin (2024) | A generic case study describing how TeachLivE was used to develop PSTs’ behavior observation skills when conducting functional behavior assessments, to develop parent-teacher communication skills, and as a recruitment tool. | USA
Dieker et al. (2014) | A generic case study of different applications of TeachLivE by a university to develop a variety of PST skills (e.g., ability to do DDT, gain student attention, provide OTR, develop questioning strategies, and use teacher proximity, praise, and extinction to manage the student avatar’s problem behaviors). | USA
Dieker et al. (2015) | Generic case study of different applications of TeachLivE for rural education. | USA
Driver & Zimmer (2022) | Practitioner’s guide to integrating MRS with special education courses. | USA
Eriksson (2022) | Practitioner’s guide to using MRS for gifted students. | USA
Hartle & Kaczorowski (2019) | Personal reflections of two professors using Mursion MRS. | USA
Kamhi-Stein et al. (2020) | Generic case study of using Mursion to help PSTs manage the classroom, give error feedback, and teach similes/metaphors. | USA
Kelley & Wenzel (2019) | Personal reflections on how two professors used TeachLivE and created assessments for parent-teacher reading conferences. | USA
Mikeska et al. (2019) | Researchers developed coordinated, accumulated, and dynamic (CAD) performance tasks to assess teachers. | USA
O’Brien (2023) | Case report overview of three strategies (Mursion, social media, online Zoom community of practice) to mitigate the COVID-19 crisis. | AUSTRALIA
Peterson-Ahmad et al. (2018) | Practitioner’s guide explaining MRS and providing suggestions on its use. | USA
Conceptual Pieces on MRS for Teacher Training: 4
Driggers (2023) | Critical examination of TeachLivE using ideas from Aristotle and Heidegger. | USA
Hayes et al. (2013) | Analysis of TeachLivE’s ludic elements. | USA
Kaufman & Ireland (2016) | A conceptual overview of how situational simulations (i.e., scenario/role-play, standardized patients, computer-based) can benefit teacher preparation programs. | CANADA
McGarr (2021) | A critical analysis of the use of virtual simulations in teacher preparation programs. | IRELAND
Literature Reviews: 7
Ade‐Ojo et al. (2022) | Review of physical/virtual simulations for PSTs. | ENGLAND
Bondie et al. (2021) | Review of the virtual human in simulation design, exploring simulation design models, interactor training, and the attention researchers gave to language and race in the virtual classroom. | USA
Dieker et al. (2023) | Review of the last five years of MRS research, examining the uses, practices, and outcomes of MR simulation in teacher preparation. | USA
Ersozlu et al. (2021) | Review of the types of TeachLivE™ research carried out since its inception, identifying trends and potential gaps. | AUSTRALIA
Lindberg & Jönsson (2023) | Review of competencies (e.g., rule management, didactic, relational competence) developed in teacher/special education training using human and computer simulations. | SWEDEN
Qualls et al. (2024) | Review of special education MRS studies. | USA
Yilmaz & Hebebci (2022) | Review of virtual simulations (i.e., simSchool, TeachLivE) for teacher training. | TURKEY

Literature Cited in Appendix But Not in Body of Article

Ade‐Ojo, G. O., Markowski, M., Essex, R., Stiell, M., & Jameson, J. (2022). A systematic scoping review and textual narrative synthesis of physical and mixed‐reality simulation in pre‐service teacher training. Journal of Computer Assisted Learning, 38(3), 861–874. https://doi.org/10.1111/jcal.12653

Baird, L., Holland, P., & Deacon, S. (1999). Learning from action: Imbedding more learning into the performance fast enough to make a difference. Organizational Dynamics, 27(4), 19–32. https://doi.org/10.1016/S0090-2616(99)90027-X

Baker-White, E. (2021). “This is blackface”: Inside the virtual reality company trying to scale diversity training. BuzzFeed News.

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215. https://doi.org/10.1037/0033-295X.84.2.191

Bandura, A. (1989). Human agency in social cognitive theory. American Psychologist, 44(9), 1175–1184. https://doi.org/10.1037/0003-066X.44.9.1175

Bandura, A. (1997). Self-efficacy: The exercise of control. W.H. Freeman and Company.

Barmaki, R., & Hughes, C. E. (2018). Embodiment analytics of practicing teachers in a virtual immersive environment. Journal of Computer Assisted Learning, 34(4), 387–396. https://doi.org/10.1111/jcal.12268

Bautista, N. U., & Boone, W. J. (2015). Exploring the impact of TeachME Lab virtual classroom teaching simulation on early childhood education majors’ self-efficacy beliefs. Journal of Science Teacher Education, 26(3), 237–262. https://doi.org/10.1007/s10972-014-9418-8

Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32–42. https://doi.org/10.3102/0013189X018001032

Cai, J. (2003). What research tells us about teaching mathematics through problem solving. In Research and issues in teaching mathematics through problem solving (pp. 241–254).

Cao, Q., Yu, H., Charisse, P., Qiao, S., & Stevens, B. (2023). Is high-fidelity important for human-like virtual avatars in human computer interactions? International Journal of Network Dynamics and Intelligence, 15–23.

Carbonell, K. B., Stalmeijer, R. E., Könings, K. D., Segers, M., & van Merriënboer, J. J. G. (2014). How experts deal with novel situations: A review of adaptive expertise. Educational Research Review, 12, 14–29. https://doi.org/10.1016/j.edurev.2014.03.001

Charney, R. S. (1993). Teaching children to care: Management in the responsive classroom. Northeast Foundation for Children.

Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press. https://doi.org/10.7551/mitpress/12255.001.0001

The Danielson Group. (2022). The framework for teaching: 2022 edition. The Danielson Group. https://www.danielsongroup.org

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003. https://doi.org/10.1287/mnsc.35.8.982

Dewey, J. (1933). How we think: A restatement of the relation of reflective thinking to the educative process. D.C. Heath & Co Publishers.

Doğan, D., Yiğit, M. F., Alır, A., Fidan, A., Özbay, Ö., & Tüzün, H. (2019). Examining the views of prospective teachers on the use of a teacher education simulation. Pamukkale University Journal of Education Faculty, 46(46), 150–174. https://doi.org/10.9779/pauefd.450501

Ford, D. J., Luke, S. E., Vaughn, S. M., & Fulchini-Scruggs, A. (2023). Virtual simulations to practice whole group discussions: Preservice teachers’ metacognitive awareness. Journal of Educational Technology Systems, 52(1), 73–95. https://doi.org/10.1177/00472395231184566

Fraser, D. W., Marder, T. J., deBettencourt, L. U., Myers, L. A., Kalymon, K. M., & Harrell, R. M. (2020). Using a mixed-reality environment to train special educators working with students with autism spectrum disorder to implement discrete trial teaching. Focus on Autism and Other Developmental Disabilities, 35(1), 3–14. https://doi.org/10.1177/1088357619844696

Freire, P. (2000). Pedagogy of the oppressed (30th anniversary ed.). Continuum.

Gee, J. P. (2007). What video games have to teach us about learning and literacy. Palgrave Macmillan.

Graeber, A. (1999). Forms of knowing mathematics: What preservice teachers should learn. Educational Studies in Mathematics, 38(1), 189–208. https://doi.org/10.1023/A:1003624216201

Grossman, P., Compton, C., Igra, D., Ronfeldt, M., Shahan, E., & Williamson, P. W. (2009). Teaching practice: A cross-professional perspective. Teachers College Record, 111(9), 2055–2100. https://doi.org/10.1177/016146810911100905

Heath, M., & Mishra, P. (2023). Generative AI: Possibilities, promises, perils, practices, and policy. National Technology Leadership Summit meeting.

Jeldres, R., & Beach, P. V. (2023). Simulaciones de liderazgo escolar: Una visión panorámica de sus publicaciones y evolución [School leadership simulations: An overview of their publications and evolution]. REICE: Revista Iberoamericana sobre Calidad, Eficacia y Cambio en Educación, 21(4), 45–63. https://doi.org/10.15366/reice2023.21.4.003

Kavanagh, S. S., Metz, M., Hauser, M., Fogo, B., Taylor, M. W., & Carlson, J. (2020). Practicing responsiveness: Using approximations of teaching to develop teachers’ responsiveness to students’ ideas. Journal of Teacher Education, 71(1), 94–107. https://doi.org/10.1177/0022487119841884

Kidman, G., Chang, C.-H., & Wi, A. (2019). Defining education for sustainability (EfS): A theoretical framework. In Issues in teaching and learning of education for sustainability (pp. 1–14). Routledge. https://doi.org/10.4324/9780429450433-1

Kirschner, P. A., Martens, R. L., & Strijbos, J. W. (2004). CSCL in higher education? A framework for designing multiple collaborative environments. In What we know about CSCL: And implementing it in higher education (pp. 3–30). https://doi.org/10.1007/1-4020-7921-4_1

Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice Hall.

Land, R. (2016). Toil and trouble: Threshold concepts as a pedagogy of uncertainty. In Threshold concepts in practice (pp. 11–24). SensePublishers. https://doi.org/10.1007/978-94-6300-512-8_2

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press. https://doi.org/10.1017/CBO9780511815355

Ledger, S., Ersozlu, Z., & Fischetti, J. (2019). Preservice teachers’ confidence and preferred teaching strategies using TeachLivE™ virtual learning environment: A two-step cluster analysis. EURASIA Journal of Mathematics, Science and Technology Education, 15(3), em1674. https://doi.org/10.29333/ejmste/102621

Lemke, J. L. (1990). Talking science: Language, learning, and values. Ablex.

Love, B. L. (2019). We want to do more than survive: Abolitionist teaching and the pursuit of educational freedom. Beacon Press.

Moore, S., Hedayati-Mehdiabadi, A., Law, V., & Kang, S. P. (2024). The change we work: Professional agency and ethics for emerging AI technologies. TechTrends, 68(1), 27–36. https://doi.org/10.1007/s11528-023-00895-1

Nel, C., & Marais, E. (2023). Pre-service teachers’ perceptions on eliciting learners’ knowledge in a mixed-reality simulation environment. Reading & Writing, 14(1), 1–9. https://doi.org/10.4102/rw.v14i1.422

Norman, D. A. (2013). The design of everyday things. Basic Books.

Norton, B. (2000). Identity and language learning: Gender, ethnicity and educational change. Pearson Education.

O’Brien, S. (2023). Enacting remote and flexible learning placements during a global pandemic: A case report. Sustainability, 15(10), 8049. https://doi.org/10.3390/su15108049

Pankowski, J., & Walker, J. T. (2016). Using simulation to support novice teachers’ classroom management skills: Comparing traditional and alternative certification groups. Journal of the National Association for Alternative Certification, 11(1), 3–20.

Pas, E. T., Johnson, S. R., Larson, K. E., Brandenburg, L., Church, R., & Bradshaw, C. P. (2016). Reducing behavior problems among students with autism spectrum disorder: Coaching teachers in a mixed-reality setting. Journal of Autism and Developmental Disorders, 46, 3640–3652. https://doi.org/10.1007/s10803-016-2898-y

Pas, E.T., Waasdorp, T.E. & Bradshaw, C.P. (2019). Coaching teachers to detect, prevent, and respond to bullying using mixed reality simulation: An efficacy study in middle schools. International Journal of Bullying Prevention 1, 58–69. https://doi.org/10.1007/s42380-018-0003-0

Petterson, A., Cheng, K., & Chandra, P. (2023). Playing with power tools: Design toolkits and the framing of equity. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1–24). https://doi.org/10.1145/3544548.3581490

Pickering, A. (Ed.). (1992). Science as practice and culture. University of Chicago Press. https://doi.org/10.7208/chicago/9780226668208.001.0001

Piro, J.S., & O’Callaghan, C. (2019). Journeying towards the profession: Exploring liminal learning within mixed reality simulations. Action in Teacher Education, 41(1), 79–95. https://doi.org/10.1080/01626620.2018.1534221

Rasimah, C. M. Y., Ahmad, A., & Zaman, H. B. (2011). Evaluation of user acceptance of mixed reality technology. Australasian Journal of Educational Technology, 27(8), 1369–1387. https://doi.org/10.14742/ajet.899

Schön, D. (1983). The reflective practitioner: How professionals think in action. Basic Books.

Sebele-Mpofu, F. Y. (2020). Saturation controversy in qualitative research: Complexities and underlying assumptions. A literature review. Cogent Social Sciences, 6(1), 1838706. https://doi.org/10.1080/23311886.2020.1838706

Smith, B., & McGannon, K. R. (2018). Developing rigor in qualitative research: Problems and opportunities within sport and exercise psychology. International Review of Sport and Exercise Psychology, 11(1), 101–121. https://doi.org/10.1080/1750984X.2017.1317357

Thomassen, B. (2009). The uses and meanings of liminality. International Political Anthropology, 2(1), 5–27.

Turner, K., O’Brien, S., Wallström, H., Samuelsson, K., & Uusimäki, S. L. M. (2023). Lessons learnt during COVID-19: Making sense of Australian and Swedish university lecturers’ experience. International Journal of Educational Technology in Higher Education, 20(1), 25. https://doi.org/10.1186/s41239-023-00395-5

Vince Garland, K. M., Holden, K., & Garland, D. P. (2016). Individualized clinical coaching in the TLE TeachLivE lab: Enhancing fidelity of implementation of system of least prompts among novice teachers of students with autism. Teacher Education and Special Education, 39(1), 47–59. https://doi.org/10.1177/0888406415600769

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.

Wilkerson Lee, C., Bondurant, L., Sapkota, B., Howell, H., & Lai, Y. (2023). Conceptualizing ethics, authenticity, and efficacy of simulations in teacher education. In Proceedings of the Forty-Fifth Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education (Vol. 2, pp. 602–610). University of Nevada, Reno.

Yilmaz, O., & Hebebci, M. T. (2022). The use of virtual environments and simulation in teacher training. International Journal on Social and Education Sciences, 4(3), 446–457. https://doi.org/10.46328/ijonses.376
