The impact of generative artificial intelligence (genAI) on teaching and learning has been a central topic of conversation since the public release of ChatGPT in November 2022. GenAI offers numerous potential benefits to education, including personalized instruction, instant access to boundless information, and increased efficiency for task completion (Hu & Shao, 2025). At the same time, it raises significant concerns related to academic honesty, authorship, and the development of critical thinking (Larson et al., 2024). It also raises broader ethical concerns about data privacy, user safety, and the spread of misinformation (Dwivedi et al., 2023). Collectively, these opportunities and challenges are disrupting traditional instructional practices and ushering education into a new era.
Within this evolving landscape, educators may hold diverse perspectives on the use and implications of genAI in educational contexts (Ghimire & Edwards, 2024; Rogers, 2003). In response, researchers have developed theoretical frameworks to guide the adoption and use of genAI tools for instructional purposes. However, even with the rapid development of these frameworks, critical gaps remain that must be addressed to ensure the effective and responsible integration of genAI in education. To address these gaps, scholars from multiple universities gathered for the National Technology Leadership Summit (NTLS) in Washington, DC, during the fall of 2025, working in small groups to propose alternative frameworks. This editorial shares the process and frameworks that were developed at NTLS.
Methodology for Creating Frameworks at NTLS
The process began with the NTLS genAI strand leaders, Drs. Cherner and Trust, reviewing previously published frameworks focused on the use of genAI in educational contexts. The appendix provides a sampling of the frameworks examined.
Next, the strand leaders asked participants to identify areas already addressed within the existing frameworks and to analyze the design elements underpinning the development of each framework. In discussing design elements, the strand leaders pointed out how visual elements such as arrows and other markers were used to highlight certain attributes and illustrate conceptual flow. They also noted the strategic use of colors to accentuate key terms, define borders, and increase accessibility. Subsequently, participants were provided with a slide deck to further investigate the selected frameworks within their small groups.
Based on the review, strand leaders facilitated a whole-group discussion focused on the substantive topics addressed by the selected frameworks and the design strategies that stood out to them. This review enabled participants to identify gaps as well as envision how a new framework might be designed to address them.
The second stage of the strand deliberations centered on ideation. During this stage, strand leaders provided space and time for participants to collaboratively brainstorm and prototype new frameworks aimed at addressing the identified gaps. Participants self-selected into groups of three to five members. Within these groups, they discussed the strengths and shortcomings of the existing frameworks and began prototyping an original framework to address the identified gaps.
In the third stage, each small group delivered a brief lightning presentation outlining its proposed framework, followed by 5 minutes of structured feedback from the other participating groups. These presentations enabled each group to gather external insights and considerations, allowing for further refinement.
Five Emerging Frameworks for genAI
In this section, we present the frameworks generated by each group. For consistency, each group was provided a general structure that included (a) an introduction that explains the need for the framework, (b) an overview of the framework, (c) a discussion of use cases for the framework, and (d) research recommendations to further develop the framework.
Framework 1
Cognitive Deskilling Prevention Model: A Pedagogical Approach to genAI
Contributors: Amanda Hurlbut, Daniel G. Krutka, and Xiangquan Yao
Since the introduction of ChatGPT in 2022, educators have wrestled with what role this emerging technology should play in educational contexts. However, we believe contextualizing AI in a longer history can help us think about it in generative ways.
The term “artificial intelligence” was coined by John McCarthy in 1956, in part as a marketing term to attract funding for researchers, but thinking of genAI as an automation technology for cognitive processes situates it within other histories. For example, Nicholas Carr opens his 2014 book on automation, The Glass Cage, by discussing how computer technologies have automated many tasks typically completed by commercial pilots, thus deskilling them and leaving them less prepared to respond in emergency situations. Recent research has named the phenomenon of genAI deskilling human cognition with terms such as “cognitive offloading” (Gerlich, 2025) and “cognitive automation” (Rinta-Kahila et al., 2023).
Our concerns with this topic are amplified by further research showing a range of societal problems (e.g., depression, polarization, and lowered intelligence) that are potentially correlated with the widespread adoption of smartphones in the 2010s (Haidt, 2024; Twenge, 2023). In 15 years, will we regret failing to confront the downsides of genAI in similar ways? In this framework, we contend that cognition and learning are complex processes that can be undermined by genAI in both the short and long term, and educators must center deep reflection on the process and aims of education to ensure students and teachers are not deskilled by genAI.
Further Background
Scholars have long contended that media can affect humans’ very process of thinking. Reading books, for example, tends to cultivate sustained attention and slow, logical thinking processes. In contrast, screen media — from television to social platforms like TikTok — tend to encourage fast-paced, emotional reactions as the audio-visual medium moves at its own pace (McLuhan, 1964; Postman, 1985). As Rheingold (2012) put it, critics have claimed that “Google is making us stupid, Facebook is commoditizing our privacy, or Twitter is chopping our attention into micro-slices” (p. 1).
Similarly, Carr opened his 2010 book, The Shallows, by sharing his growing struggle with sustained, focused reading. He contended that the fragmented structure of reading on the Internet through short bursts, interrupted by hyperlink clicks, had caused his brain’s neuroplasticity to develop habits of distraction. Other scholars have echoed these concerns, suggesting that the widespread use of smartphones has further deepened the problems (Mark, 2023; Twenge, 2023). Viewed through this lens, ChatGPT functions not merely as a tool, but as a communication environment with several key characteristics.
Model and Explanation
The Cognitive Deskilling Prevention Framework offers a model designed to help educators and learners make intentional and ethical choices about when and how to use genAI in learning contexts. As AI tools are increasingly integrated into educational practices, it is important to discern which cognitive tasks should remain human-driven and which might be augmented by automation. This framework uses a red–yellow–green metaphor to illustrate varying levels of caution:
- Red identifies tasks that should not be offloaded to genAI due to their central role in developing critical thinking and creativity.
- Yellow represents tasks that might (or might not) benefit from limited genAI support when used judiciously.
- Green highlights tasks that can be safely automated with genAI to enhance efficiency or be additive without undermining cognitive processes.
Together, these categories promote balanced and ethical genAI integration that safeguards human cognition and supports meaningful learning.
Figure 1
Cognitive Deskilling Prevention Framework

Red: Do Not Offload. The red-light designation identifies core instructional and learning practices that represent essential dimensions of human cognitive processing. These tasks form the foundation of human capacity for critical thinking, problem-solving, creativity, reasoning, argumentation, persuasion, and decision-making. Such skills are cultivated over time through deliberate practice, feedback, reflection, and cognitive integration. When these processes are offloaded to genAI, there is a risk of cognitive atrophy, an erosion of the mental capacities that sustain higher order thinking. For instance, when students are tasked with composing an original essay, they engage in identifying a central thesis, organizing supporting arguments, incorporating evidence, and communicating effectively through the conventions of written language. Delegating this process to genAI would undermine the development of original thought and coherent reasoning, both of which are fundamental to intellectual growth.
Yellow: Use With Caution. The yellow-light classification signals the need for caution when genAI is employed to perform tasks that blend automated output with human cognitive processing. For example, when composing an original essay, a student might use genAI to generate preliminary ideas or an outline. While such assistance can be useful, the processes of brainstorming and organizing ideas are critical components of creative and analytical development that enable learners to construct original arguments and engage in critical thinking. Furthermore, brainstorming is not always a solitary endeavor and often occurs through dialogue and feedback from teachers or peers. In this sense, consulting genAI for preliminary idea generation may mimic the process of seeking input from a collaborator, provided that human cognition remains primary. Within the yellow-light classification, human intellectual engagement should always take precedence over automation. GenAI should function as an aid that supports, rather than supplants, authentic learning and creative thought.
Green: Acceptable Offload. The green-light category represents conditions where delegation or automation is more warranted, particularly for tasks that are routine, procedural, mechanical, or low-stakes and, therefore, pose minimal risk to cognitive or creative development. This category applies most appropriately when genAI is used to perform activities that do not compromise essential thinking skills and instead enhance overall efficiency, thereby freeing time for deeper intellectual and creative engagement. For example, when students use digital tools to complete project work, they are often required to consent to a terms of service agreement. Rather than passively clicking “agree,” students can use genAI to summarize the document’s key elements, ensuring they understand it prior to providing such consent. In this case, genAI is a valuable tool that extends comprehension and supports engagement in tasks that might otherwise be overlooked, thereby complementing rather than replacing cognitive effort. Additive uses, in which educators or students employ genAI for tasks they would otherwise be unable to accomplish, can also be warranted. For example, a teacher might translate class materials into multiple languages and then work from the translated text to check for accuracy.
Example Use Cases
The following section provides ideas for students and teachers to use the Cognitive Deskilling Prevention Framework.
Students
Red: Do Not Offload. A high school student is assigned to write a persuasive essay about the impact of social media on mental health. Tempted by efficiency, the student considers asking a genAI tool to generate the essay in its entirety. While this approach may yield a final product, it bypasses the core learning objective: developing the ability to construct an argument through reasoning, evidence, and written expression. Writing is not merely about producing text; it is an act of thinking that refines a student’s capacity to analyze perspectives, weigh evidence, and communicate meaning. Offloading this process to AI prevents the student from exercising these higher order cognitive skills and, over time, risks diminishing their ability to engage in critical and original thought.
Yellow: Use With Caution. In a 1st-year composition course, a college student uses a genAI tool to help brainstorm possible thesis statements or create a preliminary outline for an essay on climate change policy. The genAI tool offers several organizational options and examples of how the argument might flow. The student then selects one approach, refines it, and develops the essay independently. This is a productive yellow-light scenario, as genAI is a brainstorming partner, not a substitute for thinking. The student’s role remains central: evaluating ideas created by genAI, determining their relevance, and shaping them into an original argument. This reflective use of genAI supports creativity and organization without undermining authentic intellectual engagement.
Green: Acceptable Offload. A teacher education student has finished analyzing original research for a final paper. To review the paper and the appendices, the student uses genAI to identify potential improvements, such as ensuring consistency in citation style and correcting formatting errors (e.g., hanging indents or italicization). This form of feedback can help students reflect on suggested revisions and strengthen their understanding of academic writing conventions. By offloading more mechanical elements of writing to AI, the student can streamline the revision process while maintaining their focus on content and analysis.
Teachers
Red: Do Not Offload. An English teacher, facing a large volume of essays to grade, considers using genAI to create personalized feedback for each student without independently reviewing the papers. Although the technology could produce grammatically accurate and even encouraging comments, it would eliminate the teacher’s essential evaluative role. Effective feedback is inherently relational — it requires professional judgment, awareness of student progress, and contextual understanding of curricular alignment. Offloading this process risks eroding the teacher’s ability to understand the connected nature of student performance with instructional effectiveness. The red-light designation underscores that the human dimension of assessment must remain intact to preserve authenticity and instructional integrity.
Yellow: Use With Caution. A middle school science teacher writes a draft lesson on renewable energy. After completing the instructional sequence, the teacher uses AI to identify relevant state standards, reorganize the lesson into a consistent format, and generate optional differentiation extensions. The teacher evaluates these additions and integrates only those that strengthen the lesson. AI supports efficiency and organization, while the core instructional design remains teacher driven. The teacher maintains cognitive authority by curating, adapting, and refining the content to meet specific instructional goals, exemplifying the principle of human oversight first, automation second.
Green: Acceptable Offload. An elementary teacher uses AI to batch-convert word-processing files into PDFs, rename them consistently, and generate folder labels for organizing units. These are purely procedural tasks that involve no pedagogical reasoning or communication with families. Automating them saves time without meaningful cognitive downsides. This use of genAI offloads productivity tasks that are peripheral to the cognitive work of teaching.
References for Framework 1
Carr, N. G. (2010). The shallows: What the Internet is doing to our brains. W. W. Norton.
Carr, N. G. (2014). The glass cage: Automation and us. W. W. Norton.
Haidt, J. (2024). The anxious generation: How the great rewiring of childhood is causing an epidemic of mental illness. Penguin Press.
Mark, G. (2023). Attention span: A groundbreaking way to restore balance, happiness and productivity. Hanover Square Press.
McLuhan, M. (1964). Understanding media: The extensions of man. McGraw-Hill.
Postman, N. (1985). Amusing ourselves to death: Public discourse in the age of show business. Viking Penguin.
Rheingold, H. (2012). Net smart: How to thrive online. MIT Press.
Twenge, J. M. (2023). Generations: The real differences between Gen Z, Millennials, Gen X, Boomers, and Silents—and what they mean for America’s future. Simon & Schuster.
Framework 2
Stoplight Matrix for Pedagogical and Social Implications of AI
Contributors: Torrey Trust, Marie K. Heath, Chrystalla Mouza, Jon M. Clausen, and Daniel G. Krutka
For today’s teachers and students, there is a genAI tool that can do almost any task they desire. Magic School can complete nearly 100 different tasks for teachers, ranging from generating questions aligned to YouTube videos to creating presentations and writing report card comments. Suno can generate music and songs, while Padlet TA can create prepopulated timelines and maps. Google’s NotebookLM creates multimodal overviews of texts, videos, and other sources. General-purpose platforms such as ChatGPT, Gemini, Claude, and Perplexity have study modes to provide scaffolded learning experiences. Additionally, Brisk has an AI teaching assistant that can create lesson plans, design rubrics, and write feedback on student work, and it is marketed as saving teachers 10-20 hours a week (M. Miller, personal communication, August 25, 2025).
GenAI tools are being aggressively marketed to teachers and students as time- and energy-saving resources that can help them do anything. Yet these tools also raise significant ethical, societal, and educational issues. In a recent blog post, educator Leon Furze (2024) argued that genAI is not just a neutral tool, but an evil one:
I mean, it’s hard to view a technology that’s inherently racist, classist, sexist, bad for the environment and basically designed to line the pockets of a handful of billionaires and trillionaires as anything but a tool for corporate greed and oppression. …Through a callous disregard for copyright and intellectual and cultural property, AI companies have produced sprawling, great monsters of a technology which devours computational power, fresh water, and a tremendous amount of energy, producing artificial intelligence designed to compete with the very people scraped to create it out of existence. And I mean, come on, that’s evil, right? (para. 2).
At the same time, Furze (2024) noted that genAI can be a helpful tool for many. Furze, like many other educators, is trying to grapple with when, how, whether, and why to use genAI in education.
As Furze (2024) noted, this challenge is further complicated by the reality that the data, design, and infrastructure that power genAI are ethically contested. For instance, genAI is trained on data that include content obtained without proper authorization from creators (Metz et al., 2024; Reisner & Gilbertson, 2024). In addition, AI companies often exploit human labor in the Global South to “clean” data for less than two dollars an hour. This work can require humans to view graphically sexual and violent images and words generated by AI in order to label them as inappropriate for other users, thus exposing them to potentially psychologically traumatic materials (Bartholomew, 2023; Bender & Hanna, 2025; Perrigo, 2023).
The materials needed to support an AI infrastructure also present significant ethical challenges. AI companies extract the raw materials needed for AI systems in ecologically damaging ways, relying on dangerous practices and often child labor (Crawford, 2021). Additionally, the massive data centers built to host and process the vast amounts of data require substantial quantities of water, electric power, and other natural resources. These facilities are frequently built in places where the most vulnerable and marginalized people live, resulting in disproportionate environmental harms (Quilty, 2025; Wittenberg, 2025).
Finally, technology companies depend on the extensive collection of user data, enabling them to compile detailed profiles of individuals’ habits, preferences, and, in the case of genAI, often their innermost thoughts. Companies use these profiles for profit through targeted advertising, often without the users’ understanding or knowledge, a practice known as surveillance capitalism (Zuboff, 2023).
The design, data, and infrastructure of genAI will always raise significant ethical concerns (barring major legislative reform). At the same time, we recognize that we live in a society where purely ethical consumption is impossible, and every decision we make, from listening to Spotify to purchasing an iPhone, has (un)ethical repercussions. However, it is better to acknowledge the harms rather than pretend that the technology has been presented for our use in education free from the extractive and harmful practices which are part and parcel of genAI’s design. If we choose to use this technology, we should do so with our eyes wide open.
The use of genAI in schools requires further ethical and critical examination. In the framework presented in the next section, we invite teachers to reflect on what they gain and what they give up through the implementation of genAI in their classrooms. We also invite teachers to consider the ways in which genAI may reproduce societal biases. GenAI systems are trained on data that reflect historical and social inequities. Thus, their outputs can mirror and reinforce those biases. For example, research has documented failures of AI to recognize darker-skinned individuals (Buolamwini, 2024). Similar patterns emerge in educational contexts. For instance, when prompted to score student papers, genAI systems have been shown to assign lower scores based on perceived socioeconomic, racial, and other demographic indicators (Warr & Heath, 2025). Moreover, genAI systems may adopt a more forceful or authoritarian tone when offering feedback directed toward students from marginalized racial or socioeconomic backgrounds (Warr & Heath, 2025).
In these ways, genAI technologies risk reinforcing the marginalization and oppression embedded within broader societal structures. Therefore, we ask teachers to weigh the benefits of genAI against any pedagogical costs, as well as significant ethical implications inherent to the technology. Our framework is designed to support teachers and teacher educators in critically examining these considerations and making informed decisions about when, and when not, to use genAI in their practice.
Introducing the GenAI Use in Teaching and Learning Matrix
The GenAI Use in Teaching and Learning Matrix (Figure 2) provides teachers with a decision-making tool for exploring, examining, and discussing with students the potential use of genAI technologies for educational purposes.
Figure 2
GenAI Use in Teaching and Learning Matrix

The design of the matrix builds on the common stoplight model for guiding student AI use (Mormando, 2023), featuring green, yellow, and red colors as a visual signal for determining when to use a particular genAI tool. Rather than relying on the traditional green, yellow, red stoplight model, though, we opted to create a matrix that adds multidimensionality and nuance to the decision-making process. This process recognizes that decisions around the use of genAI tools rarely lend themselves to simple yes or no answers. The matrix features two axes — the Y axis focuses on the pedagogical impacts for using a genAI technology, and the X axis focuses on its societal impacts (an earlier version of this framework was introduced at the NTLS 2024 meeting; see Mouza et al., 2024).
The Pedagogical Use axis centers on the impact of using a genAI tool for teaching and learning. Because the way a teacher decides to use a tool shapes the pedagogical impact, educators need to go beyond examining the purported benefits of each tool to how they will actually use it in their practice. As such, we offer a list of questions teachers might reflect upon as they assess the pedagogical impact of genAI tools:
- How would this tool be used? Would the tool replace, amplify, or transform teaching? Would students use the tool in passive, interactive, or creative ways? (see the PICRAT model; Kimmons et al., 2020)
- How will the use of this tool shape students’ achievement of the learning objectives?
- How will the use of this tool shape student agency?
- How will the use of this tool shape student motivation and engagement?
- Does the tool encourage metacognitive practices and higher order thinking?
- What would learning be like in this activity without the use of the genAI tool? Would removing the genAI tool from the activity negatively or positively impact learning?
- Would the use of the genAI tool foster an inclusive learning community? Would the use of the AI tool acknowledge and uplift marginalized voices? Would it add to, or distract from, the ability to have hard conversations? Would the use of the AI tool positively or negatively influence the relationship between the teacher and student?
- Does the tool support multilingual learning?
- Is the tool developmentally appropriate?
Using these questions as a starting point for investigation and reflection, educators can then determine the potential pedagogical impact of a genAI tool on a continuum — from negative to positive.
The second part of the matrix refers to the Societal Impact, which centers on the broader impacts of using the genAI tool. Simply determining whether to use a genAI tool based on its pedagogical impact does not adequately address the complex, ethical issues surrounding these technologies. To determine this impact, we encourage educators and students to collaboratively examine and discuss the societal and environmental impacts of a genAI tool before using it for teaching and learning. Questions educators and students might consider are as follows:
- What guardrails and safety measures are in place for student use of this tool?
- What data was this tool trained on? Who was included in the training data? Who was left out?
- What is the environmental impact of the tool and how transparent is the developer about its environmental impact?
- What communities or groups might be disproportionately harmed by the environmental impact of using this tool?
- How might the tool reproduce societal stereotypes and biases?
- Does the use of the tool contribute to the deprofessionalization of teaching or the automation of work?
- What data is collected, shared, and sold when using this tool?
- How trustworthy, reliable, and accurate is this tool?
- Is this tool free? Why might that be? What nonmonetary costs are associated with the use of this tool?
- If the tool prompts you to provide feedback, should you do so, and offer your free human labor to help improve the tool?
- Should the tool be cited as an original source, given that it was trained on copyrighted data without permission from or compensation to the original content creators?
- How do the developer’s business practices (e.g., exploitation of labor, resource extraction from other countries) impact communities around the world?
To answer these and other relevant questions, students and teachers can collaboratively research and investigate the genAI tool, reading its website, any marketing materials, the terms of use/service, and privacy policies very closely, and exploring news articles and reviews of it.
After examining the potential pedagogical and societal impact of a genAI tool, teachers and students can identify where on the matrix the tool falls. The bottom left corner (red) indicates that the genAI tool does not offer a positive pedagogical or societal impact. The top right corner (green) indicates that the genAI tool offers a positive pedagogical and societal impact. The middle of the matrix (yellow) indicates that the genAI tool has a mix of positive and negative pedagogical or societal impacts, and caution should be taken before deciding whether to use it. We imagine that most uses of genAI tools for teaching and learning will fall in the yellow areas of the matrix, indicating that there is no easy “let’s use” or “let’s not use” answer when it comes to genAI tools. Instead, it is up to the teacher, ideally in collaboration with students, to make an informed decision about the use of the genAI tool.
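For readers who find a concrete illustration helpful, the zoning logic of the matrix can be sketched in a few lines of code. This sketch is purely illustrative and is not part of the framework itself: the numeric scores, the -1 to +1 scale, and the threshold value are our own assumptions, and in practice a tool’s placement on the matrix should emerge from the reflective questions above rather than from a formula.

```python
def matrix_zone(pedagogical: float, societal: float, threshold: float = 0.33) -> str:
    """Map two impact scores to a stoplight zone on the matrix.

    Each score is assumed to range from -1 (clearly negative impact)
    to +1 (clearly positive impact). The threshold marking a "clear"
    impact is an illustrative assumption, not a framework-prescribed value.
    """
    if pedagogical <= -threshold and societal <= -threshold:
        return "red"      # both impacts clearly negative: avoid use
    if pedagogical >= threshold and societal >= threshold:
        return "green"    # both impacts clearly positive: use is well supported
    return "yellow"       # mixed or near-neutral: deliberate before deciding

# A pedagogically promising tool with societal costs lands in yellow,
# matching the expectation that most genAI tools require deliberation.
print(matrix_zone(0.8, -0.5))
```

Notice that any mixed judgment, where one axis is positive and the other negative or uncertain, lands in the yellow zone, which is where we expect most genAI tools to fall.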
Overall, the matrix is designed to encourage teachers and students to engage in a more nuanced investigation and discussion of genAI tools before using them for educational purposes. Rather than relying solely on developers’ claims that a tool will save time or improve learning outcomes, the matrix promotes a deeper examination of the broader pedagogical, educational, environmental, and societal impacts associated with each AI tool.
References for Framework 2
Bartholomew, J. (2023, August 29). Q&A: Uncovering the labor exploitation that powers AI. Columbia Journalism Review. https://www.cjr.org/tow_center/qa-uncovering-the-labor-exploitation-that-powers-ai.php
Bender, E., & Hanna, A. (2025). The AI con: How to fight big tech’s hype and create the future we want. Harper.
Buolamwini, J. (2024). Unmasking AI: My mission to protect what is human in a world of machines. Random House.
Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Furze, L. (2024, November 20). It’s uncomfortable on the fence but at least the view is nice. Leon Furze Blog. https://leonfurze.com/2024/11/20/its-uncomfortable-on-the-fence-but-at-least-the-view-is-nice/
Kimmons, R., Graham, C. R., & West, R. E. (2020). The PICRAT model for technology integration in teacher preparation. Contemporary Issues in Technology and Teacher Education, 20(1), 176-198. https://citejournal.org/volume-20/issue-1-20/general/the-picrat-model-for-technology-integration-in-teacher-preparation
Metz, C., Kang, C., Frenkel, S., Thompson, S., & Grant, N. (2024, April 6). How tech giants cut corners to harvest data for AI. The New York Times. https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html?unlocked_article_code=1.tU8.yI2M.w0DLAVwlXBGf
Mormando, S. (2023, November 9). A stoplight model for guiding student AI usage. Edutopia. https://www.edutopia.org/article/creating-ai-usage-guidelines-students/
Mouza, C. (2024). A report on NTLS 2024: Advancing equity and policy intersections at the intersection of technology and teacher education. Contemporary Issues in Technology and Teacher Education, 24(4). https://citejournal.org/volume-24/issue-4-24/editorial/a-report-on-ntls-2024-advancing-equity-and-policy-intersections-at-the-intersection-of-technology-and-teacher-education
Perrigo, B. (2023, January 18). Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/
Quilty, E. (2025, September 17). Confronting data centers, deep sea cables, and colonial legacies in the South Pacific. Data & Society. https://datasociety.net/points/confronting-data-centers-deep-sea-cables-and-colonial-legacies-in-the-south-pacific/
Reisner, A., & Gilbertson, A. (2024, July 16). Apple, Nvidia, Anthropic used thousands of swiped YouTube videos to train AI. Wired. https://www.wired.com/story/youtube-training-data-apple-nvidia-anthropic/
Warr, M., & Heath, M. K. (2025). Uncovering the hidden curriculum in Generative AI: A reflective technology audit for teacher educators. Journal of Teacher Education, 76(3), 245-261.
Wittenberg, A. (2025, May 6). ‘How come I can’t breathe?’: Musk’s data company draws a backlash in Memphis. Politico. https://www.politico.com/news/2025/05/06/elon-musk-xai-memphis-gas-turbines-air-pollution-permits-00317582
Zuboff, S. (2023). The age of surveillance capitalism. In W. Longhofer & D. Winchester (Eds.), Social theory re-wired (pp. 203-213). Routledge.
Framework 3
SLIDE – Student Learning in Dialogue With Educational AI
Contributors: Jonathan D. Cohen, Denise Crawford, Bruna Damiana Heinsfeld, Mia Kim Williams
The call for an AI framework to clarify educator agency versus tool-driven decision-making emerges from the increasing complexity of educational design in an AI-mediated world. Using generative AI in ways that hold educational value requires educators to make pedagogical and ethical decisions. At times, educators’ decision-making processes involve cognitive conflict as they struggle to understand the affordances and constraints of using AI in educational contexts. This tension reflects what systems thinkers describe as dynamic complexity (Senge, 2006): situations in which cause and effect are subtle and interdependent, and decisions made in one part of the system can have ripple effects across others. Grounded in systems thinking theory, such a framework recognizes that educational ecosystems are composed of interconnected human actors (i.e., teacher educators and preservice teachers) and technological actors (i.e., genAI tools) whose interactions generate continuous feedback loops that influence roles, learning outcomes, and ethical boundaries (Meadows, 2008).
When determining how to integrate AI meaningfully into teaching and learning, educators can face complex choices mediated by multiple pedagogical, logistical, and societal factors. Without clear guidance, their professional judgment risks being overshadowed by automation. In classrooms, AI can either amplify or constrain human capacity, empowering teachers through data-informed insights and adaptive resources or reducing their role to mere overseers of automated processes (Hamilton et al., 2023; Kucirkova & Leaton Gray, 2023). A systems-oriented framework helps educators maintain meaningful agency by clarifying which decisions remain human-led, which are shared, and where automation can genuinely enhance learning through responsible interdependence (Pedersen & Duin, 2022). Moreover, by embedding ethical principles such as fairness, transparency, and bias mitigation, frameworks protect against algorithmic inequities while upholding trust and accountability within democratic, learner-centered education (Capraro et al., 2023; Perc et al., 2019).
Introducing The SLIDE – Student Learning in Dialogue With Educational AI Framework
SLIDE – Student Learning in Dialogue With Educational AI (Figure 3) is a framework designed as an interactive tool to assist educators in determining what level of AI involvement they want in planning, designing, and informing practice. SLIDE does not assume an educator is already AI literate. It is designed to encourage the exploration and application of AI in educational contexts while paying close attention to teacher and student roles, along with ethical considerations. Systems thinking underpins the design and development of the SLIDE framework, as well as its interactive components. In this view, educators, students, and AI systems form an adaptive learning ecosystem in which each component continuously shapes and is shaped by the others. Drawing from Arnold and Wade’s (2015) definition of systems thinking as “understanding how parts interrelate within wholes,” SLIDE positions educator decision-making as both systemic and situational, sensitive to patterns, constraints, and opportunities across contexts.
Figure 3
SLIDE – Student Learning in Dialogue With Educational AI

The integration of AI into education must center on the human dimensions of teaching and learning. Rather than viewing AI as a replacement for teacher expertise, educators can cultivate a form of knowledge that deepens their capacity through AI interactions and engaging with the tools as collaborators that inform, rather than dictate, pedagogical choices (Richardson et al., 2024). This perspective of a human–AI partnership aligns closely with systems thinking, as both emphasize feedback, adaptation, and the dynamic co-evolution of human judgment and technological capability.
To operationalize these ideas, we drew on two existing models to guide the design of the SLIDE interactive tool: the Human/Computer Control-Automation Levels framework (Sheridan & Verplank, 1978) and the Five Levels of Vehicle Autonomy (US Department of Transportation, n.d.). Together, these frameworks provided a structure for examining how decision-making authority can shift between humans and machines in educational contexts. This design logic is consistent with a broader systems approach that maps relationships among technological affordances, institutional norms, and human judgment (Meadows, 2008; Senge, 2006). In doing so, SLIDE positions educators’ strengths in contextual understanding, creativity, and empathy to work in concert with AI’s capabilities in pattern recognition, real-time analytics, and scalable personalization (Beaudoin, 2022; Chan & Colloton, 2024). Ultimately, frameworks like SLIDE safeguard educator agency, clarify accountability, and structure educator–AI collaboration around shared educational goals, ensuring that AI remains a transparent and inclusive partner rather than a black box replacing human expertise (Başarır, 2022; Schoeller et al., 2021).
Use Cases
The following sections present use cases aligned with the field of teacher education.
Preservice Teacher Decision-Making Scaffold. SLIDE supports preservice teachers’ decision-making about whether and how to use AI for productivity or instruction. It functions as a reflective scaffold that encourages inquiry rather than compliance. Preservice teachers can use it to systematically analyze the affordances and constraints of applying AI for specific pedagogical goals, such as differentiating instruction, generating formative feedback, or managing administrative tasks, in relation to how these uses impact student learning.
Teacher Educator Modeling of AI Integration. Teacher educators can use the framework to model decision-making or describe how AI use is integrated into teaching and learning examples specific to their content. For instance, during instruction, teacher educators might walk preservice teachers through their reasoning for using AI to design an assessment, emphasizing the pedagogical logic behind each choice. By verbalizing this decision process and referencing the framework’s categories of human–AI collaboration, teacher educators not only illustrate best practices but also cultivate a culture of critical dialogue around AI use in education.
Student Transparency: Standardized Communication of AI Use. When students use AI as part of their learning process, this model establishes categories for communicating the relationship between human agency and the level of AI involvement. Using these categories, students can clearly articulate whether AI functioned as a brainstorming partner, an analytical assistant, or an automated content generator. Such transparency promotes academic integrity and helps educators evaluate the learners’ thinking and the appropriateness of the AI’s contribution.
Recommendations for Future Research and Further Development
Future research and development of the SLIDE framework in education should prioritize empirical validation and iterative refinement across diverse educational contexts. One essential next step involves conducting playtests with different target audiences, such as preservice teachers, teacher educators, and learners, to evaluate the framework’s applicability, clarity, and adaptability to varying pedagogical environments. These tests would provide valuable insights into how users from different educational backgrounds interpret and operationalize the framework, ultimately informing revisions that enhance its relevance and practicality.
Subsequent iterations should focus on refining and improving the framework’s tasks, informational components, and visual design. Special attention must be given to ensuring usability, accessibility, and aesthetic coherence, allowing for more intuitive interaction and equitable access for users with diverse abilities and technological proficiencies. Enhancing the framework’s communicative clarity and visual scaffolding can also support its integration into teacher education and professional development programs.
Ultimately, a structured pilot study should be conducted to assess the framework’s pedagogical applicability and theoretical foundation. This process should include both empirical testing in classroom or training settings and a systematic exploration of the supporting literature to strengthen the conceptual underpinnings of the framework. Such research would contribute to building a robust evidence base for its adoption and guide future iterations that align with evolving discussions around ethics, agency, and critical engagement with AI in education.
References for Framework 3
Arnold, R. D., & Wade, J. P. (2015). A definition of systems thinking: A systems approach. Procedia Computer Science, 44, 669-678.
Başarır, L. (2022). Modelling AI in architectural education. Gazi University Journal of Science, 35(4), 1260-1278. https://doi.org/10.35378/gujs.967981
Beaudoin, M. A. (2022). Machine learning control of dynamical systems in electric and autonomous vehicles. McGill University (Canada).
Capraro, V., Lentsch, A., Acemoglu, D., Akgun, S., Akhmedova, A., Bilancini, E., … Viale, R. (2023, December 16). The impact of generative artificial intelligence on socioeconomic inequalities and policy making. https://doi.org/10.31234/osf.io/6fd2y
Chan, C. K. Y., & Colloton, T. (2024). Generative AI in higher education: The ChatGPT effect. Taylor & Francis.
Hamilton, A., Wiliam, D., & Hattie, J. (2023). The future of AI in education: 13 things we can do to minimize the damage. https://doi.org/10.35542/osf.io/372vr
Kucirkova, N., & Leaton Gray, S. (2023). Beyond personalization: Embracing democratic learning within artificially intelligent systems. Educational Theory, 73(4), 469-489.
Meadows, D. (2008). Thinking in systems: A primer. Chelsea Green Publishing.
Pedersen, I., & Duin, A. (2022). AI agents, humans and untangling the marketing of artificial intelligence in learning environments. Proceedings of the 55th Hawaii International Conference on System Sciences: Human and Artificial Learning in Digital and Social Media. https://scholarspace.manoa.hawaii.edu/server/api/core/bitstreams/522ac070-0c6a-4807-93c4-ba7d2d7f3d8a/content
Perc, M., Ozer, M., & Hojnik, J. (2019). Social and juristic challenges of artificial intelligence. Palgrave Communications, 5(1).
Richardson, C., Oster, N., Henriksen, D., & Mishra, P. (2024). Artificial intelligence, responsible innovation, and the future of humanity with Andrew Maynard. TechTrends, 68, 5–11. https://doi.org/10.1007/s11528-023-00921-2
Schoeller, F., Miller, M., Salomon, R., & Friston, K. J. (2021). Trust as extended control: Human-machine interactions as active inference. Frontiers in Systems Neuroscience, 15, 669810.
Senge, P. (2006). The fifth discipline: The art and practice of the learning organization. Century.
Sheridan, T. B., & Verplank, W. L. (1978). Human and computer control of undersea teleoperators. Man-Machine Systems Lab. https://doi.org/10.21236/ADA057655
US Department of Transportation. (n.d.). Automated vehicles for safety. National Highway Traffic Safety Administration. https://www.nhtsa.gov/vehicle-safety/automated-vehicles-safety
Appendix to Framework 3
The visualization pilot product was created with Anthropic’s Claude AI. The following prompts were used:
I would like an interactive visualization, based on the attached image. As the user moves the sliders up and down, the text in the main text area in the middle changes. The text also changes, based on the user’s choice from the Tasks drop-down menu. On the left is two sliders that the users can slide up and down. The sliders have to move in opposite directions, meaning that if one slider is at the top, the other is at the bottom. There are four stops on each slider. The title is “SLIDE (Student Learning in Dialogue with Educational AI”. In the top right, “Task” is a drop down menu, with the six items underneath Tasks are the choices. At the bottom, there is a space called “AI Involvement”; if the slider for Human is at the top, then AI Involvement should be No AI. If it’s in the second stop from the top, then AI Involvement should be Suggestive Support. At the third stop, AI Involvement should be AI Task Completion. At the bottom stop, AI Involvement should be Full AI. In the main text area in the middle, there should be three sections called Impact on Teacher, Impact on Students, and Ethical Considerations (in the attached file, there is another section called “Tools” but don’t include that). Also, change the word “active” on top of the HUMAN slider to the word “agency” and the one on the bottom to “low agency”. After you build the visualization, tell me how to add the text to match with each slider level and task.
https://claude.ai/public/artifacts/4f5a4930-d131-423d-90b3-31465e6fab93
Make the following changes: 1. Move AI slider to the right of HUMAN slider. 2. Change title to “SLIDE (Student Learning in Dialogue with Educational AI)”. 3. Under “Ethical Considerations” in the main text area, add a new category called “Tools” and generate a list of 3-5 AI tools that users might use to accomplish the task selected from the drop-down menu.
https://claude.ai/public/artifacts/4b9d1935-fc8a-4b31-b3ae-c3c5fc5c5710
Two more changes: 1. Users should be able to move the AI slider, and that slider position should affect the position of the HUMAN slider in the same way that moving the HUMAN affects the position of the AI slider. 2. Move the Task dropdown to above the sliders, even with the AI Involvement line.
https://claude.ai/public/artifacts/ce4b5b65-45ae-438a-9ba6-b2d600b7f1bb
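The resulting artifact couples the two sliders and maps the four HUMAN slider stops to the AI Involvement labels. The logic the prompts specify can be sketched as follows (a minimal illustration; the function and variable names are ours, not part of the published artifact):

```python
# Sketch of the slider behavior specified in the prompts above.
# Stops are indexed 0 (top) through 3 (bottom); names are illustrative only.

AI_INVOLVEMENT = ["No AI", "Suggestive Support", "AI Task Completion", "Full AI"]

def ai_involvement(human_stop: int) -> str:
    """Return the AI Involvement label for a given HUMAN slider stop."""
    if human_stop not in range(4):
        raise ValueError("Each slider has exactly four stops (0-3).")
    return AI_INVOLVEMENT[human_stop]

def coupled_stop(stop: int) -> int:
    """The two sliders move in opposite directions: when one is at the
    top (0), the other sits at the bottom (3), and vice versa."""
    return 3 - stop
```

For example, placing the HUMAN slider at its top stop yields the label “No AI” and forces the AI slider to its bottom stop, matching the behavior requested in the first prompt.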
Framework 4
Humanity-Centered Artificial Intelligence
Contributors: Richard E. West, Albert Ritzhaupt, Adam Barger, & Lin Lin Lipsmeyer
In the 16th century, Nicolaus Copernicus proposed a heliocentric theory of astronomy that created a significant paradigm shift in science, moving the Earth from the center of the universe and placing the sun at the center of the solar system. This shift, later called the Copernican Revolution, upended research and practice in many branches of science and changed our understanding of humanity’s place in the cosmos. Shneiderman (2020b) has argued that we need a second Copernican-style revolution in our modern Artificial Intelligence (AI)-infused age. This second revolution would again place humanity at its core, centering human judgment, creativity, empathy, and wisdom in AI-assisted learning and work.
Many scholars who study AI have argued that this refocusing on humans is necessary. Riedl (2019), at the start of the current AI large language model explosion, argued that “algorithms must be designed with awareness that they are part of a larger system consisting of humans” (p. 33) and that those who develop and use these systems need to keep in the forefront of their minds the “issues of social responsibility” that are characteristic of our humanity. Schuurman (2023) noted that AI designers have a moral responsibility to create AI tools that incorporate, or even promote, human values. Similarly, Schmidt (2020) argued that “the central question is how to create these tools for amplifying the human mind without compromising human values” (p. 1). But what are those human values?
Unfortunately, while there has been renewed interest in human-centered AI development, much of the discussion has focused not on human values but on keeping AI from causing harm, addressing issues such as respecting privacy, decreasing bias, maintaining human governance, and preserving human jobs (for examples, see Domfeh et al., 2022; Garibay et al., 2023; Shneiderman, 2020a). However, this focus on avoiding AI harms overlooks the bigger issue: how we, as humans, can flourish and maintain our humanity amid an AI revolution.
As Selwyn (2024) contended, humans must be wary of fitting human enterprises, such as education, around the needs of AI because the “basic aspects of teaching and learning cannot be captured reliably in data form” (p. 7). There is a great need to recognize the uniqueness of humans and emphasize how to use the powerful potential of AI in our daily living while retaining and centering our humanness in our work and learning.
Introducing The Humanity-Centered AI Framework
The concept of AI/human alignment is recognized as one of the greatest challenges facing AI researchers and practitioners in creating these advanced technologies for a safe and prosperous human civilization (Gabriel, 2020). Alignment generally refers to the creation of AI systems that ensure technology behaves in a manner that will benefit humanity (Soares & Fallenstein, 2014). That is, AI systems should be aligned with “human interests” (p. 2) from their initial conception to their eventual deployment. Naturally, scholars have proposed frameworks and systems to align the values of humanity with those of machines (Vamplew et al., 2018); however, no consensus has yet emerged on a single framework to ensure the alignment of “human interests” in these machines.
The Humanity-Centered AI framework is designed for all types of AI users and addresses core human values and qualities that should be preserved as we utilize AI for the betterment of humanity. It seeks to answer the following guiding question: “How do we maintain and enhance our sense of humanity and values in the use of AI in our personal and professional lives?” Our framework emphasizes core aspects of humanity (e.g., identity) that must be intentionally aligned with AI use, and it can be used to help users recognize when they might be misusing AI or letting AI take over too much in their lives. Figure 4 illustrates the framework, highlighting the following constructs necessary to maintain and enhance our sense of humanity: agency, creativity, identity, relationships, wisdom, and discernment.
Figure 4
The Humanity-Centered AI Framework

We propose six constructs both to inform and guide the use of AI while maintaining humanity (see Table 1). In other words, as individuals use AI, these human qualities should remain paramount. Though these human characteristics are not necessarily sequential, we present them in a sequence that reflects how typical AI use might unfold. First, agency focuses on the human capacity for intentional and goal-directed action. Recent literature suggests agency is an indicator of learner self-regulation and awareness of contextual factors (Roe & Perkins, 2024). Second, creativity must be preserved, both in terms of the value and the process of producing new or novel products. As such, humanity-centered AI use emphasizes humans as the main creative engine for AI-supported work (Marrone et al., 2022). Third, identity emphasizes the need to retain the promotion of an authentic self, rather than supplanting human identity with AI output. Taken together, these three constructs highlight the personal input of human AI users and emphasize human-only processes while using AI.
The next three values incorporate social and applied aspects of AI use. Relationships, the fourth construct, indicates the need for AI use that connects users with the broader community. We contend that AI use leading to diminished human relationships is a risk that must be avoided, given the potential for social harms due to bias, negative content, and normative output that does not recognize neurodivergence (Selwyn, 2024). Fifth, wisdom to leverage AI in proper contexts and within appropriate boundaries is a practical construct best informed by subject-matter expertise balanced by intellectual limits. Given Hicks et al.’s (2024) conclusion that “LLMs are simply not designed to accurately represent the way the world is, but rather to give the impression that this is what they’re doing” (p. 39), wisdom must remain a construct of humans only. Finally, discernment in AI use should include sound human judgment as demonstrated through intentional reflection on the appropriateness of AI processes and output.
As shown in Figure 4, these constructs are additionally informed by centering humans at the output stage. As illustrated by the bidirectional arrows between the AI user and other humans, human networks can act as an aspect of output verification and an influence on AI input.
Table 1
Six Constructs of Humanity-Centered Artificial Intelligence Use
| Construct | Definition |
|---|---|
| Agency | Capacity, autonomy, and volition to make informed decisions based on a felt or real need. |
| Creativity | Creating something new (novel, effective, whole) (see Mishra & Henriksen, 2018). It involves human traits such as curiosity, openness to new ideas, resistance to latching onto ideas too quickly, and abstract thinking (Torrance, 2018). |
| Identity | The use of the AI enhances, rather than impedes, the becoming of an authentic self. |
| Relationships | Inter- and intra-personal and professional connections are enhanced and not diminished in the use of AI. |
| Wisdom | Subject-matter expertise, human intuition, experience, and contextual understanding. |
| Discernment | Perceiving the reality of the problem, judging the appropriateness of the responses, and reflecting on the level of humanity in your use of AI. |
Use Cases
The Humanity-Centered AI framework is designed to help AI users understand how to responsibly and ethically utilize AI in situations that benefit the individual and humanity at large. As such, AI users must operationalize the constructs noted above in choosing when, where, and how to use AI to address their particular use case. We illustrate the application of the framework in a few common use cases reported in the news and research literature.
The casual AI user who is seeking accurate information on a recent news topic of interest. The casual AI user would still need to exercise agency and wisdom in choosing how to prompt the AI for the recent information. This user would need to be aware that the models do not always include recent news information, and discernment would be necessary to make a judgment about the validity of the AI response due to potential bias and hallucinations (i.e., inaccurate information generated by the AI tool that is presented as accurate).
The teacher who uses AI in lesson planning and in giving feedback to students. Teachers need to focus on their own creative judgment about what students need and consider how the use of AI may strengthen or weaken their relationship with the students. For example, will students feel less cared for if they receive feedback on assignments from an AI? Or can the teacher use the AI to provide concrete feedback, and then layer on top of the AI output human wisdom about how the student may use the feedback to improve a paper?
The software engineer who is using AI to generate code to solve a problem in the creation of a software product. A software engineer would need to think creatively about the nature of the problem and how the code will address it using their subject-matter expertise to debug and refactor the code, as needed. This engineer, however, would not want to undermine their competency or jeopardize their professional identity and relationships by allowing the code to drive the solution without sufficient human oversight and input.
The student who uses AI to study for a final exam in a linear algebra college course. The student needs to make responsible decisions about how to utilize AI to support their learning, while also considering the risk of overrelying on the tool (e.g., using AI to write an essay) or avoiding relationships (such as a study group) because of reliance on AI. Exercising discernment involves carefully evaluating whether their use of AI ensures they are mastering the learning materials.
In each of these use cases, the framework and constructs noted are relevant to how the user chooses to utilize AI to address their real or felt needs.
Recommendations for Future Research and Further Development
The Humanity-Centered AI framework would benefit from input from a cross-disciplinary group of social and psychological researchers to better understand whether the human values in the framework best represent the key elements of what it means to emphasize human qualities while using AI. A Delphi-style study with experts would be beneficial, as would the implementation of the framework in professional settings with ethnographic observation of how it may guide more fulfilling AI use.
References for Framework 4
Domfeh, E. A., Weyori, B., Appiahene, P., Mensah, J., Awarayi, N., & Afrifa, S. (2022). Human-centered Artificial Intelligence, a review. Authorea Preprints. https://doi.org/10.22541/au.166013641.15972664/v1
Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411-437.
Garibay, O., Winslow, B., Andolina, S., Antona, M., Bodenschatz, A., Coursaris, C., … & Xu, W. (2023). Six human-centered artificial intelligence grand challenges. International Journal of Human–Computer Interaction, 39(3), 391-437.
Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2). https://doi.org/10.1007/s10676-024-09775-5
Marrone, R., Taddeo, V., & Hill, G. (2022). Creativity and artificial intelligence—A student perspective. Journal of Intelligence, 10(3), 65.
Mishra, P., & Henriksen, D. (2018). A NEW definition of creativity. In P. Mishra & D. Henriksen (Eds.), Creativity, technology & education: Exploring their convergence (pp. 17–24). Springer. https://doi.org/10.1007/978-3-319-70275-9_3
Roe, J., & Perkins, M. (2024). Generative AI in self-directed learning: A scoping review. arXiv preprint arXiv:2411.07677.
Riedl, M. O. (2019). Human‐centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies, 1(1), 33-36.
Schmidt, A. (2020, September). Interactive human centered artificial intelligence: a definition and research challenges. In Proceedings of the 2020 International Conference on Advanced Visual Interfaces (pp. 1-4). https://uni.ubicomp.net/as/iHCAI2020.pdf
Schuurman, D. C. (2023). Virtue and Artificial Intelligence. Perspectives on Science and Christian Faith, 75(3), 155–161. https://doi.org/10.56315/pscf12-23schuurman
Selwyn, N. (2024). On the limits of artificial intelligence (AI) in education. Nordisk Tidsskrift for Pedagogikk Og Kritikk, 10(1). https://doi.org/10.23865/ntpk.v10.6062
Shneiderman, B. (2020a). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495-504.
Shneiderman, B. (2020b). Human-centered artificial intelligence: Three fresh ideas. AIS Transactions on Human-Computer Interaction, 12(3), 109-124.
Soares, N., & Fallenstein, B. (2014). Aligning superintelligence with human interests: A technical research agenda. Machine Intelligence Research Institute (MIRI) technical report 8. https://www.semanticscholar.org/paper/Aligning-Superintelligence-with-Human-Interests:-A-Soares-Fallenstein/d8033a314493c8df3791912272ac4b58d3a7b8c2
Torrance, P. (2018). Guiding creative talent. Muriwai Books.
Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20, 27-40.
Framework 5
What’s the CATCH?
Contributors: Lucas Vasconcelos, Daisyane Barreto, Alison Egan, Jon Margerum-Leys, Sumreen Asim
The existing literature lacks a framework that incorporates the competencies involved in the use of genAI, as previous frameworks focus primarily on what genAI can do or how it can be used to create efficiencies at work. By adding a competency element, this new framework allows teacher educators to assess their own genAI skills. The new CATCH framework argues that any genAI framework must also include context, culture, criticality, and content, in addition to the human elements noted in previous literature. It can therefore be adopted, and adapted, by those involved in teacher education.
Introducing The CATCH Framework
The CATCH framework (Figure 5) is formed as a competency flow chart that enables teacher educators to assess and develop preservice teachers’ genAI-related knowledge and skills in relation to context, culture, criticality, and content. This framework, informed by research on teacher digital competence and psychological studies on self-assessment biases, such as the Dunning–Kruger effect (Kruger & Dunning, 1999), acknowledges that educators may overestimate their abilities when integrating new technologies. Although the original research predates genAI, its insights into overconfidence remain relevant to understanding the ways educators assess their competence with tools, including today’s AI-powered technologies. By positioning competency development at the core, the CATCH framework encourages reflective practice and continuous growth as teacher educators implement and evaluate genAI-generated content within their classrooms and institutional contexts.
Figure 5
The CATCH Framework

Moreover, building on Freire’s critical pedagogy and culturally relevant teaching (Vasconcelos et al., 2025), the CATCH framework emphasizes the need for teacher educators to examine the power dynamics, cultural assumptions, and epistemological implications embedded in genAI use. Freire’s emphasis on dialogue, agency, and critical consciousness provides a foundation for questioning how genAI might reproduce or challenge dominant narratives (Freire, 1970, 2018), while culturally relevant teaching highlights the importance of ensuring that AI-mediated learning experiences remain responsive to the identities, lived realities, and cultural assets of diverse learners (Ladson-Billings, 1995). Through this lens, the CATCH framework supports teacher educators in critically examining their own practice and making intentional decisions about genAI integration.
The CATCH Framework’s Elements
The CATCH framework combines many competencies required by teacher educators and is not intended to enforce a linear or sequence-based approach to how its elements interact. One of these elements is genAI-created content (C = Content), informed by problem-posing approaches (Freire, 1970), which is then evaluated through contextual (C = Context) understanding.
The genAI component (A = AI) represents the generative system that produces the content in response to human prompts, reflecting the interplay between human creativity and algorithmic generation. Another component of the framework is teacher education (T = Teacher Education), the intended audience for the framework. This evaluation draws on key competencies (C = Competencies) such as criticality, consciousness, and dialogue (Freire, 1970, 2018), as well as cultural relevance and language considerations (Ladson-Billings, 1995), to ensure the generated material is meaningful and appropriate.
The human element (H = Human) highlights the teacher educators (and the preservice teachers who learn from them) who craft prompts (Liu et al., 2023; Zhang et al., 2021) and consider the ethical implications of AI use (Hua et al., 2024). If the generated content does not meet the intended goals, the human revises and resubmits prompts, continuing an iterative process of critical reflection and engagement with genAI. Through this ongoing interaction among the Content, AI, Teacher Education, Competencies, and Human elements, the CATCH framework fosters ethical, generative, critical, and contextually grounded uses of generative AI in education.
Use Cases
First, preservice teachers can use genAI to create reading materials about environmental sustainability for middle school students. They can analyze the content (C) through a contextual (C) lens, assessing whether examples reflect their local community’s experiences and values. In doing so, they consider how the genAI (A) system produced the text and how different prompts might shape alternative outputs. Guided by their teacher education (T) coursework, they engage in collaborative reflection to refine both prompts and results. Drawing on competencies (C) such as criticality and cultural awareness, they identify biases and inaccuracies in the AI-generated text, embodying the human (H) element of ethical and critical decision-making. The result is a cocreated, contextually meaningful lesson plan that integrates AI innovation with human judgment and pedagogical intent.
Second, teachers can engage in an interdisciplinary project to design a social justice unit that integrates history, literature, and digital storytelling. They can begin by using AI (A) tools to generate initial content (C) such as texts, discussion questions, and multimedia prompts, which they then evaluate for context (C), relevance, and cultural representation. If, during this evaluation, they find that the AI-generated materials are shallow or limited in addressing their learning goals, they can collaborate with their teacher education (T) community to identify and co-create richer resources that strengthen their unit’s critical and cultural dimensions. They can also discuss strategies for fostering critical dialogue and ethical AI use in classrooms. Finally, the human (H) element emerges as teachers iteratively refine prompts, assess AI outputs, and consider the emotional and ethical impact of content on students.
Recommendations for Future Research and Further Development
When the research team created the CATCH framework, they recognized that it would need to be introduced to teacher educators in order to be evaluated. Therefore, the team will adopt a design-based, rapid prototyping approach, running a series of studies in different regions that will contribute to the refinement of the framework. The team will devise a questionnaire that fellow researchers can use when implementing the framework and will then conduct national and international pilot implementations with teacher educators. Statistical analysis, including factor analysis, will be used to investigate the relationships, and any correlations, among the critical and cultural competencies in the framework.
References for Framework 5
Freire, P. (1970). Pedagogy of the oppressed. Continuum.
Freire, P. (2018). The banking concept of education. In E. B. Hilty (Ed.), Thinking about schools (pp. 117–127). Routledge. http://puente2014.pbworks.com/w/file/fetch/87465079/freire_banking_concept.pdf
Hua, Y., Niu, S., Cai, J., Chilton, L. B., Heuer, H., & Wohn, D. Y. (2024). Generative AI in user-generated content. In F. F. Mueller (Ed.), Extended abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24), Article 471, 1–7. ACM. https://doi.org/10.1145/3613905.3636315
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. https://doi.org/10.1037/0022-3514.77.6.1121
Ladson-Billings, G. (1995). Toward a theory of culturally relevant pedagogy. American Educational Research Journal, 32(3), 465–491. https://doi.org/10.3102/00028312032003465
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2023). Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9), 1–35. https://arxiv.org/abs/2107.13586
Vasconcelos, L., de Castro Filho, J. A., Barreto, D., Castro, J., de Fátima Souza, M., Cardoso, L., & Maia, D. (2025, March). Artificial intelligence literacy and STEAM education: A framework for EFL preservice teacher preparation. In Proceedings of the Society for Information Technology & Teacher Education International Conference (pp. 1526–1534). Association for the Advancement of Computing in Education. https://www.learntechlib.org/primary/p/225996/
Zhang, N., Li, L., Chen, X., Deng, S., Bi, Z., Tan, C., Huang, F., & Chen, H. (2021). Differentiable prompt makes pre-trained language models better few-shot learners. arXiv preprint, arXiv:2108.13161. https://doi.org/10.48550/arXiv.2108.13161
Synthesis and Conclusion
The frameworks described here represent ways scholars are developing their perspectives and articulating their concerns about the use of genAI in educational contexts. When genAI was first introduced, concerns about misuse, cheating, and hallucinations were top of mind (Dwivedi et al., 2023). As these frameworks demonstrate, concerns are now increasingly shifting toward the role genAI plays in learning and instruction. Scholars at NTLS grappled with a range of issues, including the following:
- Cognitive deskilling: The Cognitive Deskilling Prevention Framework examines which learning tasks are appropriate to offload to genAI and which tasks should remain the responsibility of humans.
- Teaching and learning: The genAI Use in Teaching and Learning Matrix shifts the focus to genAI’s impact on teaching and learning, while considering its potential societal impacts.
- Educator agency: The SLIDE (Students Learning in Dialogue with Educational AI) framework seeks to gauge the degree of AI involvement educators wish to have in tasks related to assessing student learning, particularly grading and feedback.
- Human-centered interactions: The Human-Centered AI framework seeks to assess how AI interacts with humans, beyond grading or assessment, with the purpose of improving humanity.
- Ethical AI: The CATCH framework recenters the conversation on critical perspectives, working to conceptualize the ethical and power dynamics that shape AI in the context of teacher education.
Each framework is important in its own right, and while some build upon one another, they collectively reflect how scholars are thoughtfully advancing the conversation about genAI from what it does and how it functions to what it ultimately means for the field of education and for society at large. As facilitators of this NTLS strand, we see these frameworks as launching points: more study and research are needed for them to truly shape educational theory and practice. In closing, we encourage colleagues and scholars around the world to engage with these frameworks and their use cases so that they can enact and expand upon the recommendations for future research.
References
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., … & Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.
Ghimire, A., & Edwards, J. (2024). Generative AI adoption in classroom in context of technology acceptance model (TAM) and the innovation diffusion theory (IDT). arXiv preprint arXiv:2406.15360.
Hu, W., & Shao, Z. (2025). Design and evaluation of a genAI-based personalized educational content system tailored to personality traits and emotional responses for adaptive learning. Computers in Human Behavior Reports, 19, 100735.
Larson, B. Z., Moser, C., Caza, A., Muehlfeld, K., & Colombo, L. A. (2024). Critical thinking in the age of generative AI. Academy of Management Learning & Education, 23(3), 373–378.
Robert, J. (2025, September). 2025 horizon action plan: Building skills and literacy for teaching with genAI. EDUCAUSE. https://library.educause.edu/resources/2025/9/2025-educause-horizon-action-plan-building-skills-and-literacy-for-teaching-with-genAI
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.
Appendix
Sample of the Reviewed AI Frameworks
| Framework | Source |
|---|---|
| **AI Governance and Risk Assessment** | |
| SECURE - A genAI Use Framework for Staff | University of Newcastle. (2024). S.E.C.U.R.E. genAI Use Framework for Staff. https://secureframework.ai |
| Common Sense AI Principles Assessment | Common Sense Media. (2024). Common Sense AI Principles Assessment. https://www.commonsensemedia.org/ai-ratings/ai-risk-assessments |
| Rubric for Evaluating AI Tools: Fundamental Criteria | eCampusOntario. (2024). Rubric for Evaluating AI Tools: Fundamental Criteria. https://ecampusontario.pressbooks.pub/app/uploads/sites/3696/2024/02/Rubric-for-AI-Tool-Evaluation-Fundamental.pdf |
| **Teaching with AI** | |
| PAIRR Framework for Writing with AI | University Writing Program, UC Davis. (2025). Peer & AI Review + Reflection (PAIRR). https://writing.ucdavis.edu/pairr |
| VALUES | Watkins, M. (2025, August 21). VALUES Framework for Faculty Use of AI in Higher Education. Higher Education Needs Frameworks for How Faculty Use AI. https://marcwatkins.substack.com/p/higher-education-needs-frameworks |
| How Critically Can an AI Think? A Framework for Evaluating the Quality of Thinking of Generative Artificial Intelligence | Xu, Y., & Wu, X. (2024). How critically can an AI think? A framework for evaluating the quality of thinking of generative artificial intelligence. arXiv. https://arxiv.org/abs/2406.14769 |
| **AI Literacy** | |
| AI Literacy Framework | Hibbert, A., Hicks-Moldoff, M., Morse, S., & Riback, L. (2024, June). A framework for AI literacy. EDUCAUSE Review. https://er.educause.edu/articles/2024/6/a-framework-for-ai-literacy |
| Expanded AI Literacy Framework | Digital Promise. (2024). Expanded AI Literacy Framework. https://digitalpromise.org/initiative/artificial-intelligence-in-education/ai-literacy/ |
| ED-AI Lit | Kong, S.-C., & Ananiadou, K. (2024). ED-AI Lit: An interdisciplinary framework for AI literacy in education. Policy Insights from the Behavioral and Brain Sciences, 11(1), 3-10. https://journals.sagepub.com/doi/pdf/10.1177/23727322231220339 |
| **AI Prompting** | |
| CREST | AI Literacy Institute. (2025). CREST Prompt Framework. https://ailiteracy.institute/crest-prompt-framework/ |
| Five S’s Model | AI for Education. (2024). Prompt Framework for Students: The Five “S” Model. https://www.aiforeducation.io/ai-resources/the-five-s-model-students |
| CLEAR Path | Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. Journal of Academic Librarianship, 49(4), Article 102720. https://doi.org/10.1016/j.acalib.2023.102720 |
| **AI Ethics** | |
| AI Decision Tree | Cherner, T., Foulger, T. S., & Donnelly, M. (2025). Introducing a generative AI decision tree for higher education: A synthesis of ethical considerations from published frameworks and guidelines. TechTrends. |
| Ethical Framework | Shen, Z., & Wang, S. (2025). Ethical framework for generative artificial intelligence (p. 7). arXiv. https://arxiv.org/pdf/2501.09021 |
| AI in K12 for Ethics | Kim, J., & Lim, C. (2024). What are artificial intelligence literacy and competency? A comprehensive framework to support them. Discover Education, 2, Article 100120. https://www.sciencedirect.com/science/article/pii/S2666557324000120 |