Heafner, T., & Maxwell, D. (2025). CIVIC: Five pillars for using artificial intelligence in social studies education. Contemporary Issues in Technology and Teacher Education, 25(4). https://citejournal.org/volume-25/issue-4-25/social-studies/civic-five-pillars-for-using-artificial-intelligence-in-social-studies-education

CIVIC: Five Pillars for Using Artificial Intelligence in Social Studies Education

by Tina Heafner, University of North Carolina at Charlotte; & Daniel Maxwell, University of North Carolina at Charlotte

Abstract

As generative artificial intelligence (GenAI) becomes increasingly integrated into K-12 education, it holds significant potential to enhance social studies instruction through personalized learning, inquiry-based exploration, and interactive simulations. However, the responsible and ethical use of artificial intelligence (AI) and GenAI in social studies requires a clear framework that aligns with the discipline’s emphasis on critical thinking, inclusivity, and civic engagement. This article introduces CIVIC, a framework of five pillars for using AI in social studies education: encouraging human-AI co-intelligent partnerships in learning, ensuring responsible, inclusive, and equitable AI use, promoting the critical evaluation of AI-generated content, enhancing inquiry-based learning, and preparing for the future in the era of GenAI. Drawing from current literature and research, the CIVIC framework offers practical strategies for educators to incorporate AI and GenAI effectively into their classrooms while addressing challenges related to bias, data privacy, and the ethical implications of AI in historical and civic contexts. By following these guidelines, educators can leverage AI to support student engagement and learning while preparing students for the complexities of an AI-driven society.

The recent emergence and intersection of generative artificial intelligence (GenAI) and K-12 social studies education offers a unique set of opportunities and challenges for educators and students alike. The integration of GenAI tools like OpenAI’s ChatGPT, Google’s Gemini, MagicSchool, NotebookLM, and Microsoft’s Copilot into the classroom will substantially impact how social studies is taught and learned by facilitating inquiry-based learning, fostering critical thinking, and democratizing access to diverse historical perspectives (Berson & Berson, 2024b; National Council for the Social Studies [NCSS], 2022). However, while these tools present exciting possibilities, their implementation also raises significant ethical and practical concerns, such as data privacy, historical accuracy, academic integrity, and the potential amplification of biases embedded within artificial intelligence (AI) systems (Haenlein & Kaplan, 2019; Mollick & Mollick, 2023; Sobaih et al., 2024).

The rapid growth of GenAI technology reflects a broader trend of digital integration in education, spurred by events such as the COVID-19 pandemic, which dramatically shifted the focus of learning environments from physical to virtual (Choate et al., 2021; NCSS, 2022). As noted in the position statement by NCSS, the pandemic acted as a catalyst for the adoption of digital tools, creating a “tele-everything” world in which online engagement became the norm for work, education, and social interaction (NCSS, 2022, Contextualization and Rationale for Recommendations section).

This digital transformation has sparked essential discussions about the role of technology in education, the commodification of personal data, and the civic implications of AI-generated content (Donovan & boyd, 2021). Yet, as Kranzberg (1986) wrote, “Technology is neither good nor bad; nor is it neutral” (p. 545). The influence of AI in social studies will not be determined by technology alone, but by how it is adopted, regulated, and critically examined within pedagogical and instructional contexts.

Given these changes, it is imperative that social studies educators not only remain informed on the development and implementation of AI and GenAI tools in social studies education, but also critically evaluate their potential impact on teaching and learning. As Mollick (2024) warned, “We cannot wait for decisions [about the use of GenAI] to be made for us, and the work [GenAI development] is advancing too fast to remain passive. We need to aim for eucatastrophe, lest our inaction makes catastrophe inevitable” (p. 201).

In other words, educators must take an active role in shaping how AI and GenAI are integrated into social studies classrooms rather than allowing external forces, such as policy makers or technology companies, to dictate their use without considering disciplinary, pedagogical, and ethical implications. By proactively engaging with AI, teachers can help steer its use toward curricular priorities that support democratic values, critical inquiry, and inclusive historical representation rather than reinforcing existing inequities.

Even more critical is the recognition that technology in social studies can shape students’ understanding of historical narratives and civic engagement (Berson & Balyta, 2014), particularly in a field where the interpretation of curriculum and content often reflects power dynamics and cultural biases (Stanley, 2024). Kranzberg’s (1986) insights reinforce this point: the societal effects of any technology depend not only on its inherent capabilities but on the social, political, and institutional structures in which it is embedded.

As part of this broader technological ecosystem, social studies educators must not only strive to use AI responsibly but do so with a focus on promoting equitable access, maintaining historical accuracy, and fostering informed skepticism about AI-generated content. Centering human agency, social studies educators are uniquely positioned to help students navigate the complex landscape of an AI-powered society.

This article outlines CIVIC, a framework of five key pillars for the integration of AI and GenAI in social studies education, aiming to provide social studies educators with practical strategies for integrating AI in a way that enhances student learning while addressing ethical concerns. CIVIC also positions the purpose of social studies and its civic life preparation as fundamental to understanding and questioning how AI and GenAI will change the ways people live, learn, and work. Drawing from current research and policy recommendations from global organizations and governmental agencies, the CIVIC pillars emphasize the importance of critically engaging with AI tools, promoting digital citizenship, and preparing students for a world where AI influences information and interpretation.

Literature Review

The integration of GenAI in education necessitates a thorough examination of its ethical, pedagogical, and societal implications. Beginning with a brief overview of the definition of generative AI, this literature review explores the benefits and challenges of GenAI in social studies classrooms and highlights key considerations for educators in using AI responsibly and effectively.

Defining AI

The integration of AI into education is not a new concept, but the recent and rapid emergence of GenAI following OpenAI’s public release of ChatGPT in November 2022 led to a significant increase in public discourse and consideration of these tools and their utility in education spaces (Sier, 2022; Strzelecki & ElArabawy, 2024). The exceptional capacity of GenAI tools necessitates a clear distinction between traditional artificial intelligence and generative AI, especially regarding their use in social studies education. Although all GenAI is a form of AI, not all AI is GenAI, and it is important these terms are clearly defined and used appropriately in context (Hennessy et al., 2024).

Artificial intelligence, broadly described by the United States Department of Education (2023) as tools capable of “automation based on associations” (p. 1) and more precisely defined by Baker and Smith (2019) as “computers which perform cognitive tasks, usually associated with human minds, particularly with learning and problem-solving” (p. 10), has been leveraged in education spaces for decades. However, recently emergent GenAI tools like ChatGPT, Gemini, Copilot, MagicSchool, NotebookLM, and others utilize a more complex deep learning process, named after the increased depth of layers a neural network can process, which enables these tools to quickly and effectively recall, learn from, and improve future outputs, creating highly capable and user-friendly tools (Haque et al., 2022; Hardesty, 2017; Rahimi & Talebi Bezmin Abadi, 2023).

Because the algorithms that drive the deep learning process are proprietary, the exact process of deep learning leveraged by each GenAI tool is often unknown (Haenlein & Kaplan, 2019; Mollick, 2024). However, it is generally known how these processes work across a variety of tools. GenAI tools, when prompted by a user, utilize the deep learning process to efficiently recognize patterns between the prompt and the expansive datasets upon which the tool is trained, enabling the tool to generate an output as a response to the prompt (Hays et al., 2024). Depending on the tool used, these prompts and AI-generated responses can be multimodal, including text, file, audio, or visual inputs and outputs. It is the remarkable capability and quality of these new GenAI tools to complete a seemingly endless array of tasks on behalf of a human user that have renewed the focus on artificial intelligence and its utility in education.
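
To make the prompt-and-response pattern described above concrete, the brief sketch below shows how a GenAI model can be queried programmatically. It is a minimal illustration only, assuming the OpenAI Python SDK (v1.x) and an API key stored in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative assumptions, not an endorsement of a particular tool.

```python
# Minimal sketch of the prompt-response pattern: a user prompt is sent to a
# GenAI model, which generates text in response. Assumes the OpenAI Python
# SDK (v1.x) and an API key in OPENAI_API_KEY; model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a helpful assistant for a social studies class."},
        {"role": "user", "content": "Summarize the main causes of the Industrial Revolution in two sentences."},
    ],
)

print(response.choices[0].message.content)  # the generated output
```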

Enhancing Inquiry-Based Learning

Generative AI holds promise for enhancing inquiry-based learning in social studies. AI systems, particularly those powered by large language models (LLMs), can support students in formulating questions, analyzing complex historical events, and even simulating conversations with historical figures (Berson & Berson, 2024b; Imran et al., 2023). For instance, tools like ChatGPT can enable students to engage in real-time dialogs with AI-simulated historical figures, making abstract or distant events more tangible and relatable (Mollick & Mollick, 2023). This interactive approach not only fosters deeper engagement with the subject matter but also helps students develop a more nuanced understanding of history by providing them with access to multiple perspectives.

AI tools also have the potential to enhance access to primary sources and visual content, allowing students to explore historical documents, artifacts, and images that might otherwise be inaccessible. The Newspaper Navigator project, for example, demonstrates how AI can facilitate the exploration of digitized historical collections, enabling students to engage with visual and textual content beyond traditional textbooks (Lee et al., 2021). Such tools provide students with personalized and interactive learning experiences, aligning with the goals of social studies education, which emphasizes the importance of inquiry, critical thinking, and civic engagement (NCSS, 2023).

Supporting Differentiated Instruction and Teacher Efficiency

AI technologies might also play a key role in supporting differentiated instruction and enhancing teacher efficiency (The Institute for Ethical AI in Education [IEAIED], 2021; United Nations Educational, Scientific and Cultural Organization [UNESCO], 2023). AI may help automate some routine tasks, potentially saving time that can otherwise be spent focusing more on direct interactions with students, but educators must also consider the time needed to vet AI-generated materials for quality and alignment with learning goals (Karpouzis et al., 2024; Maxwell, 2023; UNESCO, 2023). Furthermore, GenAI’s ability to provide tailored feedback to students based on their specific needs makes it a potentially powerful technology for differentiated instruction, particularly for linguistically diverse learners, including multilingual learners (MLs), and students with disabilities (Kasneci et al., 2023; UNESCO, 2023). Subaveerapandiyan et al. (2023) highlighted how GenAI’s integration with adaptive technologies like text-to-speech systems and translation services can support students who require additional help in understanding complex historical concepts.

Karpouzis et al. (2024) noted that GenAI-created plans are effective and save a significant amount of time in the planning process, freeing teachers to focus on higher-order tasks, like facilitating discussions and providing individualized feedback (Baidoo-Anu & Owusu Ansah, 2023; Mishra et al., 2024; Rudolph et al., 2024). This efficiency is particularly valuable in the context of social studies, where educators must often balance the need to cover a broad range of disciplinary topics in civics, history, geography, and economics with the goal of promoting deep, critical engagement with the material (Berson & Berson, 2024[a or b?]).

Fostering Critical Thinking and Civic Engagement

One of the most significant contributions of AI to social studies education is its potential, when used responsibly, to foster critical thinking and civic engagement. AI tools create opportunities for students to question and interpret the information they encounter, helping them develop critical thinking skills including evaluating evidence and analyzing context (Darwin et al., 2023). In social studies, where understanding multiple perspectives is key to developing informed citizens, AI can expose students to diverse and accessible narratives that are often excluded from traditional textbooks (Berson & Berson, 2024b). GenAI’s ability to generate content at scale creates the opportunity for students to analyze and critique different viewpoints, promoting the development of higher order thinking skills.

GenAI can also enhance civic education by helping students engage with real-world issues in new ways. GenAI simulations, for example, can model democratic processes or simulate civic participation, providing students with firsthand experiences that deepen their understanding of governance, law, and civic responsibility (Heafner & Ziv, 2024b; Talan, 2021). By integrating AI into student learning experiences, educators can help students connect complex historical topics to contemporary social issues, potentially fostering a more active and informed participation in democratic life (Williams et al., 2024).

Challenges and Ethical Concerns

Despite its benefits, the use of AI in social studies classrooms raises significant ethical concerns, particularly regarding bias and historical accuracy. AI-generated content is not immune to bias, as generative AI systems are trained on large datasets that often reflect societal inequalities and dominant cultural narratives (Bender et al., 2021; Darics & Poppel, 2023; Haenlein & Kaplan, 2019; Mollick, 2024). This can result in the perpetuation of exclusionary or distorted historical narratives, which is especially problematic in social studies education, where the accurate representation of history is critical.

Lee et al. (2021) warned that AI systems may reinforce historical stereotypes or marginalize underrepresented voices, leading to a skewed understanding of history. As Clark and van Kessel (2025) argued, AI tools in social studies must be approached through a lens of informed skepticism, one that interrogates the political, racial, and epistemological assumptions embedded in their design and use. To address this issue, educators must teach students to critically evaluate AI-generated content, guiding them to cross-check AI outputs with primary sources and more reliable historical evidence (Berson & Berson, 2024b). This aligns with the broader goals of social studies education, which emphasize the importance of fostering digital citizenship and preparing students for civic engagement in an increasingly AI-driven world (NCSS, 2022). Furthermore, a justice-centered vision of social studies education calls for critical engagement with technology, where students question whose knowledge is represented and whose is omitted (Clark & van Kessel, 2025).

Another major concern in the use of AI in education is data privacy (IEAIED, 2021; UNESCO, 2023; World Economic Forum, 2019). AI tools often require access to student data in order to personalize learning experiences, raising questions about how that data are collected, stored, and used (Zorz, 2023). Bahroun et al. (2023) highlighted the importance of ensuring that AI systems comply with data privacy laws, such as the Family Educational Rights and Privacy Act (FERPA) and the General Data Protection Regulation (GDPR), to protect students’ personal information. The U.S. Department of Education emphasizes the need for transparency and strict adherence to data privacy laws, such as FERPA, when integrating AI into classrooms (U.S. Department of Education, Office of Educational Technology, 2023). Educators must ensure that AI tools comply with these regulations and critically consider the terms and conditions of the tools they intend to use to protect student data.

In addition to privacy concerns, generative AI presents challenges related to academic integrity. The ability of GenAI tools to autonomously generate content raises questions about plagiarism and cheating, as students can use these tools to complete assignments without contributing original work (Mollick & Mollick, 2023; Rahimi & Talebi Bezmin Abadi, 2023). Moreover, GenAI presents an illusion of understanding that seems authoritatively correct while often misleading or misinforming novice users. GenAI’s purpose is to respond pleasingly to users’ prompts without judgment or a shared ethical standard (Mollick, 2024). Consequently, educators need to establish clear guidelines on the responsible use of AI and implement strategies to distinguish between legitimate AI-assisted learning and academic dishonesty (Tlili et al., 2023). To avoid ill-informed use of GenAI, guidance must also distinguish what these tools can and cannot do.

The issue of equity is also central to discussions about AI in education. While AI has the potential to democratize access to knowledge, it can also exacerbate existing inequalities if not implemented thoughtfully (UNESCO, 2023). Students from under-resourced schools or low-wealth communities may lack access to the technology needed to engage with AI tools, expanding a digital divide that limits their ability to benefit from AI-enhanced learning experiences (Berson & Berson, 2024a; Wiliam, 2017). Schools and policymakers must prioritize investments in technology infrastructure to ensure that all students have equal access to AI tools, regardless of their socioeconomic background. GenAI systems should strive to avoid past digital divide pitfalls and be designed with inclusivity in mind. The Ethical Framework for AI in Education (IEAIED, 2021) emphasizes that AI systems should promote equity by being accessible to all learners, including those with disabilities or from historically marginalized communities. Educators must critically assess the AI tools they choose to implement in their classrooms, ensuring that these technologies enhance learning opportunities for all students rather than reinforcing existing disparities.

Considering these challenges, promoting AI literacy among students is essential. AI literacy involves understanding how AI systems work, recognizing their limitations, and using AI tools ethically (Chan & Zhou, 2023; Stöhr et al., 2024); it also means teaching students not only to be the human in the loop (Mollick, 2024) but the human leading the loop. Zeide (2017) stressed the importance of teaching students to critically engage with AI-generated content, encouraging them to question the validity of AI outputs and cross-reference them with credible sources. In the context of social studies, AI literacy is particularly important for fostering informed civic engagement. As AI, and especially GenAI, continues to shape public discourse and decision-making processes, students must be equipped with the skills to navigate an increasingly AI-driven world (Donovan & boyd, 2021). By integrating AI literacy into social studies curricula, educators can help students develop a critical understanding of how AI systems influence historical narratives, media, and public opinion (Berson & Berson, 2024b; Chan & Zhou, 2023; Stöhr et al., 2024). This approach aligns with the goals of social studies education, which seek to prepare students to engage as informed citizens in democratic societies (NCSS, 2023).

While ongoing research and collaboration among educators, technologists, and policymakers will be crucial, we begin this dialogue by offering guidelines for the responsible use of generative AI in K-12 social studies education. Our framework builds on recent scholarship that critiques the uncritical adoption of AI in education and calls for ethically grounded, justice-oriented approaches to technology integration in the social studies classroom (Clark & van Kessel, 2025). By embedding these concerns across five pillars, the framework resists techno-optimism and instead promotes the development of socially conscious, future-ready citizens.

CIVIC: Five Pillars for AI Use in K-12 Social Studies Education

The integration of artificial intelligence (AI) into social studies education presents unique opportunities to enhance inquiry-based learning, critical thinking, and civic engagement. However, to ensure that these tools are used responsibly and effectively, educators must adhere to clear, research-informed principles that align with the pedagogical goals of social studies and uphold ethical standards. CIVIC (Table 1) presents a set of five research-based pillars that are designed to support educators in incorporating AI and GenAI into social studies classrooms in a way that promotes critical evaluation, inclusivity, ethical use, inquiry-driven exploration, and AI literacy. The principles outlined by the CIVIC Framework are tailored for any educator with an interest in social studies pedagogy, from classroom teachers and administrators to curriculum specialists and district- and state-level leaders.

The CIVIC Framework is built upon these five pillars, each of which draws upon current literature, research, and policy guidelines from global governmental agencies, providing practical strategies for fostering an educational environment where AI enriches learning without compromising the integrity of historical inquiry or student agency. It is important to note that these five pillars are not mutually exclusive, but rather, work closely together as a unit to uphold best practice in social studies education. Pillars, instead of other terminology like guidelines or rules, were intentionally selected as the name for these five principles as each pillar is needed alongside the others to support a holistic, responsible integration of AI in social studies. Just as removing a single pillar from a physical structure would weaken its structural integrity, removing or failing to appropriately consider one of these pillars in the context of the others will also weaken the integrity of social studies pedagogy when leveraging AI tools. As such, each pillar presented in this framework features underlying concepts interleaved with the other pillars, but each pillar is also designed to serve a specific purpose within social studies education.

Table 1
CIVIC Pillars for Artificial Intelligence in Social Studies Education

Pillar 1 (C): Consider AI as a Co-Intelligent Partner for Learning
1. Personalized Learning
2. Student Agency
3. Aligning AI with Curriculum Objectives

Pillar 2 (I): Integrate AI in Ethical and Equitable Ways
1. Ethical Considerations
2. Equitable Access to AI Tools
3. Addressing Challenges: Integrate AI Gradually

Pillar 3 (V): Verify Information and Outputs Consistently
1. Foster Awareness of AI’s Potential for Biased or Inaccurate Output
2. Invite Students to Investigate Examples of Bias
3. Incorporate Ethical Discussions into the Curriculum

Pillar 4 (I): Informed Inquiry: Using AI to Explore Complex Social Studies Questions
1. Generate Multiperspectivity
2. Simulate Civic Processes
3. Connect Historical Thinking to Current Events

Pillar 5 (C): Cultivate Future-Ready Citizens: Building AI Literacy for Civic Life
1. Digital Citizenship
2. Fostering AI Literacy
3. Integrate AI-specific Professional Learning Experiences
4. Monitor and Evaluate AI’s Impact on Learning Outcomes

Pillar 1. Consider AI as a Co-Intelligent Partner for Learning

The integration of artificial intelligence into social studies education presents an opportunity to expand how students engage in inquiry, critical thinking, and disciplinary reasoning. In this context, AI should not be treated merely as a tool for information retrieval or productivity, but as a co-intelligent partner in learning. This partnership, when thoughtfully constructed, invites students to think with AI, not in substitution for their own cognitive, ethical, and civic processes, but in metacognitive dialogue with these learning processes. Co-intelligence is the idea that the most powerful and beneficial outcomes come not from humans working alone or AI working alone, but from a partnership where human intelligence and artificial intelligence combine their strengths. Rather than seeing AI as a replacement for human thinking, co-intelligence frames AI as a collaborator or partner that augments, extends, and enhances what people can do. Co-intelligence is a new literacy whereby the human role becomes even more valuable in areas where curation, judgement, ethics, creativity, and understanding context are needed—all actions deeply connected to social studies learning.

Hence, we define co-intelligence as a purposeful and dialogic partnership between human learners and GenAI, where AI systems support the development of students’ disciplinary thinking by extending their questioning, modeling alternative perspectives, and scaffolding reflection, without replacing the essential human dimensions of learning such as empathy, ethical judgment, or civic responsibility. Co-intelligence encourages shared inquiry and preserves the learner’s central agency in meaning-making. Thus, co-intelligence in social studies is not an automatic outcome of introducing AI; rather, it is an intentional practice in which students and educators enter into an ongoing, critical partnership with AI. Co-intelligence emerges only when educators design experiences that require students to bring their own reasoning, skepticism, and civic judgment to every interaction with AI. In other words, co-intelligence is an educational aim that must be intentionally and carefully cultivated in the classroom.

The notion of co-intelligence originates from Ethan Mollick’s (2024) book, Co-Intelligence: Living and Working with AI. Although he broadly describes co-intelligence as inviting AI “to the table” to help us reflect and make better decisions, we also acknowledge the tensions this creates, particularly when AI is framed as a tool to “improve (or replace) our work” (p. xx). In social studies education, this framing deserves interrogation. While some cognitive labor can be offloaded to AI, our perspective centers on maintaining intellectual ownership and critical distance from the machine. Not every task warrants AI collaboration, especially when it risks collapsing inquiry into mere answer-seeking. As Mollick (2024) cautions, “We aren’t just learning AI’s strengths as we figure out the shape of the Jagged Frontier. We are scouting out its weaknesses” (p. 50). This means students and teachers must remain alert to the boundaries and limitations of AI as much as its affordances. Co-intelligence demands discernment.

Recent research highlights why this discernment is so critical. Gerlich (2025) found that frequent, unguided use of AI tools is associated with lower critical thinking skills among young people, due to increased cognitive offloading, when students allow AI to do the mental work for them. This risk is particularly acute for younger students and underscores that co-intelligence will not develop without intentional scaffolding. As Gerlich (2025) demonstrates, without explicit guardrails and structure, students may use AI to shortcut their thinking, diminishing the very critical reasoning skills that social studies aims to build.

By encouraging students to leverage AI tools as co-intelligent partners in their learning process, teachers can promote greater agency and critical thinking while also inviting technoskepticism, a deliberate stance that resists uncritical optimism about AI’s role in education (Clark & van Kessel, 2025). This stance transforms the social studies classroom into a space of critical inquiry and metacognitive reflection. As Clark and van Kessel (2025) argued, fostering technoskepticism in this context invites a deeper interrogation of AI’s limits in supporting democratic and historical learning.

But what does thoughtfully constructed and technoskeptical co-intelligence actually look like in a social studies classroom? In practice, a co-intelligent, technoskeptical approach means that teachers provide structured prompts and model how to interrogate AI-generated responses, teaching students to ask: What evidence supports this? What perspectives might be missing? How does this compare to primary or scholarly sources? AI fact-checks become essential classroom activities, where students must validate, challenge, or expand upon AI-generated content with their own research and other credible sources. Assignments require students to reflect on when and how AI contributed to their learning, and to identify what parts of the thinking process should remain distinctly human. When AI is used, students reflect upon what is gained and at what cost (e.g., the human tradeoffs within the learning process).

For younger students (K–8), educators place clear limits on AI use, such as restricting it to brainstorming or generating inquiry questions, not composing full answers. Students always take the final step in synthesis or analysis themselves. Students are taught to annotate or mark which portions of their work were AI-generated versus created independently, increasing their meta-awareness of their own cognitive processes. This structure ensures that AI is not simply providing answers, but instead becomes a catalyst for deeper questioning, comparison, and independent evaluation.

Co-intelligence is not about trusting AI but questioning and critically engaging with it. Dialoguing with AI does not require trust in its accuracy or neutrality; rather, it requires a stance of informed skepticism, treating every AI output as a provisional claim to be evaluated, tested, and, if necessary, corrected. Helping students see AI as a fallible partner centers reasoning as a human asset. Educators must carefully navigate the technical and ethical aspects of AI use, ensuring that students engage with the technology responsibly and through a human-centered approach (Adams et al., 2023; UNESCO, 2023). A recent meta-analysis by Wang and Fan (2025) reported that GenAI use, specifically ChatGPT, can improve learning performance, perception, and higher-order thinking among students, but only when intentionally integrated into the learning experience for students. These findings are supported by other studies across diverse content and developmental level contexts that indicate potential positive learning outcomes for students utilizing GenAI in carefully scaffolded ways (Essel et al., 2024; Guo & Lee, 2023; Li, 2024; Suriano et al., 2025). Still, it remains prudent to interpret these more positive potential outcomes in light of other research indicating more problematic habits of students when using AI, like the findings of Krupp et al. (2024), who reported that many students simply accept inaccurate answers without taking the time to critically consider the quality of information provided by AI tools, or Kosar et al. (2024), who reported no statistically significant difference in performance between students who did or did not utilize ChatGPT.

As AI becomes more ubiquitous in the ways we live, learn, and work, inviting AI into teaching and learning can help us understand what tasks can and cannot be effectively replaced, augmented, and improved with AI. Amidst this process, it is important for teachers and students to be the humans in the loop, actively shaping when, how, and whether AI is brought into the learning process. When students are positioned as passive recipients rather than active participants in the learning process, the risk increases that AI will shortcut rather than support meaningful learning.

Ultimately, co-intelligence in social studies education is achieved when teachers and students co-create a culture of inquiry, skepticism, and ethical responsibility around AI use. By structuring AI interactions to demand critical engagement, social studies classrooms can ensure that AI is a catalyst for deeper thinking and democratic citizenship, not a substitute for creativity, empathy, or human judgment.

Personalized Learning

A co-intelligent approach to personalized learning must remain rooted in culturally responsive pedagogy, which seeks to affirm students’ cultural identities while providing them with the tools to critically practice multiperspectivity, the process of increasing perspective diversity and cultural pluralism. AI may support this work by surfacing learning pathways that connect students’ identities, cultural backgrounds, and interests to social studies content (Berson & Berson, 2024b; Mollick, 2024). For example, students could use AI to explore historical events that are related to their heritage or cultural traditions, allowing them to see themselves reflected in the curriculum. However, educators must remain vigilant. AI systems may lack the contextual understanding needed to fully acknowledge and represent the nuances of culture and identity (Haenlein & Kaplan, 2019) because they are prone to oversimplifications and often reflect inherent data-training bias. Thus, a co-intelligent partnership demands critique as well as generation. Educators should pair AI-generated content with culturally rich primary sources, such as oral histories, autobiographies, or multimedia projects, and encourage students to critically analyze how AI systems interpret cultural and historical events. This practice not only promotes inclusivity but also fosters critical thinking, helping students develop a deeper understanding of how history is shaped by culture, identity, and power as well as technology. This practice aligns with the goals of social studies education, which seeks to develop informed citizens capable of analyzing history through multiple lenses (NCSS, 2013, 2023).

Generative AI can play a supportive role in student-led inquiry, particularly in the initial stages of research. For example, students exploring the impact of industrialization on urban life might use AI to generate a set of research questions, such as “How did industrialization contribute to the growth of cities in the 19th century?” or “What were the social and environmental consequences of rapid urbanization during the Industrial Revolution?”, or to identify relevant topics for further exploration (Subaveerapandiyan et al., 2023). AI-powered tools can then suggest sources or outline options, helping students organize their thinking and structure their investigation (Imran et al., 2023; Subaveerapandiyan et al., 2023). This scaffolding allows students to focus on analyzing and interpreting historical evidence rather than getting overwhelmed by the organizational aspects of research and writing. By providing real-time feedback on students’ work, AI tools can guide students through the inquiry process, helping them refine their research questions, improve their writing, and develop stronger arguments. Yet, in a co-intelligent partnership, the student must remain the primary investigator. AI may suggest a thesis statement or argument focus, but it cannot interpret historical causality or contextualize civic issues within a student’s lived reality. Educators must teach students to interrogate AI-generated content, compare it against scholarly sources, and revise claims accordingly. Like Mollick (2024), we contend AI will dramatically reduce the distance between novice and expert, but only when students engage actively in the intellectual work, not when they outsource it.

AI-powered simulations offer new opportunities for historical empathy and immersive learning. When designed and facilitated ethically, such experiences can deepen understanding. For instance, students might interact with AI-generated historical scenarios or debates, simulating events like a constitutional convention or a civil rights protest. But educators must tread carefully. Simulating sensitive events demands contextual framing and human-centered pedagogy that carefully considers the emotional and traumatic impact on students (Coburn et al., 2022). AI cannot replicate the moral gravity of injustice or the emotional labor of civic struggle. As Clark and van Kessel (2025) warn, AI simulations can trivialize trauma or erase marginalized perspectives if not grounded in critical pedagogy. Thus, students’ learning should be grounded in primary sources before they engage with AI-generated simulations, and students should be given opportunities to process these experiences through guided reflective dialogue that asks them to consider not only what happened but why these events matter and how they are remembered differently across communities. Providing a foundation for understanding the significance and human-centeredness of the events AI tools simulate encourages respectful and ethical engagement with the material (Berson & Berson, 2024a). In co-intelligent partnering, technology amplifies engagement, but human experience anchors meaning in the learning process.

Student Agency

Co-intelligence begins with student agency: the ability of learners to decide whether, how, and to what extent they engage with AI in their educational experiences. Recent research demonstrates that unguided or frequent use of AI tools can result in cognitive offloading, reducing student engagement and weakening ownership of learning (Gerlich, 2025; Kosmyna et al., 2024). For example, Kosmyna et al. (2024) found that students who wrote essays independently before consulting AI exhibited greater neural engagement and memory recall than those who relied on AI from the outset. These findings reinforce the importance of structured, sequenced integration—where students engage deeply with disciplinary questions before AI is introduced as a critical partner (Bastani et al., 2024; Gerlich, 2025).

Responsible integration of AI as a partner in learning demands that students are not passive recipients of technology, but active participants who exercise choice and autonomy in its use (IEAIED, 2021; UNESCO, 2023). Importantly, agency is not a one-time decision but an ongoing process of critical engagement, where students must continually decide how, when, to what extent, and why to bring AI into their learning — leading the partnership rather than simply following AI’s suggestions (Mollick, 2024).

A foundational element of this agency is informed consent. Students should be fully aware of how AI tools operate, what types of data are collected, and how those data will be used (Bahroun et al., 2023; Heafner & Ziv, 2024a; UNESCO, 2023). Educators must foster open, transparent dialog about AI use in the classroom, creating space for students, parents, and guardians to ask questions and express concerns (Maxwell, 2023; World Economic Forum, 2019).

Student agency also means learning to use AI as a tool for thought, not just for answers. Teachers should provide scaffolded opportunities for students to annotate which portions of their work were influenced by AI, reflect on what cognitive work they completed independently, and consider when using AI added value or risked shortcutting their learning (Gerlich, 2025). Agency is developed when students are expected to justify their decisions about using, or deliberately not using, AI in a given task, and can articulate the benefits and limitations of AI support for their learning.

To ensure meaningful agency, students should have opportunities to opt in or out of AI use, select preferred tools when available, and determine how AI aligns with their personal learning processes and goals. No student should be required to use a specific AI tool, and those who choose not to engage with AI must be provided with alternative pathways that uphold the same instructional objectives and learning value.

As Clark and van Kessel (2025) argued, empowering students to question AI (for example, its limits, assumptions, and cultural blind spots) is a foundational act of civic learning. This is not just digital literacy; it is democratic literacy. Student agency reflects the broader mission of social studies education: to cultivate informed, ethical, and empowered decision-makers (NCSS, 2022). By centering student choice and collaborative discourse, educators not only uphold equity and consent but also prepare students to navigate the complex civic and technological choices of the age of GenAI.

Align AI with Curriculum Objectives

For AI to function as a co-intelligent partner in social studies classrooms, its use must be deliberately aligned with curricular goals and disciplinary learning outcomes. AI should not be treated as an add-on or novelty; rather, it must serve to strengthen instructional strategies grounded in inquiry, analysis, and civic engagement. The importance of teacher-guided guardrails cannot be overstated. Studies show that when students are left to use AI tools without structure, their learning outcomes are diminished (Bastani et al., 2024; Kestin et al., 2025). Guardrails, such as requiring students to draft their own arguments before using AI for critique, annotating AI versus human contributions, and engaging in structured reflection, are essential for developing agency and criticality. This approach aligns with responsible AI literacy frameworks that emphasize cognitive, affective, and behavioral competencies for ethical and informed use (Ma et al., 2025).

This deliberate alignment is essential to ensure that technology is a means to deeper learning, not an end in itself (Mollick, 2024) or a shortcut for thinking (Gerlich, 2025). As Bahroun et al. (2023) argued, generative AI tools have the potential to extend instructional capacities by creating adaptive learning environments that respond to individual student needs while supporting teachers in the design of differentiated tasks and formative assessments. Similarly, social studies researchers (Berson & Berson, 2024b; Heafner & Ziv, 2024a) have emphasized that effective AI integration occurs when its use is grounded in clear pedagogical intentions and connected to inquiry-based instructional design.

Teachers play a central role in determining when and how AI should be used in the learning process. For example, a teacher might use AI to help students brainstorm research questions or generate multiple perspectives on a historical event but require students to conduct deeper analysis and synthesis independently. In formative assessment, AI might be used to provide quick feedback or suggest revision strategies, but final reflections or arguments must be crafted by students themselves.

When used thoughtfully, AI can deepen students’ engagement with disciplinary questions, such as examining historical causality, interpreting primary sources, or exploring civic dilemmas, without diminishing the vital role of the student as a thinker and questioner. In assessment contexts, Tlili et al. (2023) noted that AI can be leveraged to develop low-stakes, formative assessments, such as autogenerated quizzes, self-check reflections, or writing prompts, which offer immediate, personalized feedback. These tools can enhance metacognitive awareness, support self-directed learning, and reinforce content understanding while maintaining alignment with broader curricular goals. It is equally important for teachers to identify when AI use is not appropriate, such as when the cognitive demand of the task is central to disciplinary understanding, or when overreliance might reduce student agency or analytical skill.
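
As one illustration of the auto-generated, low-stakes formative checks noted above, the brief sketch below drafts a short self-check quiz for teacher review. It is a minimal example assuming the OpenAI Python SDK (v1.x) and an API key in OPENAI_API_KEY; the model name, topic, and prompt wording are our own illustrative assumptions rather than a prescribed method, and any draft would be reviewed and edited by the teacher for accuracy and curricular alignment before students see it.

```python
# Minimal sketch: ask a GenAI model to draft a short formative quiz with
# feedback for teacher review. Assumes the OpenAI Python SDK (v1.x) and an
# API key in OPENAI_API_KEY; model, topic, and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def draft_formative_quiz(topic: str, objective: str, num_questions: int = 3) -> str:
    """Return a draft self-check quiz aligned to a stated learning objective."""
    prompt = (
        f"Write {num_questions} multiple-choice questions on '{topic}' aligned to "
        f"this learning objective: {objective}. After each question, give the "
        "correct answer and one sentence of feedback a student could read."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The teacher, not the tool, makes the final call: the draft is reviewed and
# edited for accuracy and alignment before classroom use.
print(draft_formative_quiz(
    "checks and balances in the U.S. Constitution",
    "Students can explain how each branch limits the power of the others.",
))
```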

The integration of AI must be intentional, ethical, and anchored in social studies learning objectives. Uncritical or misaligned use of AI risks undermining the goals of social studies by encouraging cognitive offloading and superficial engagement. Recent large-scale studies demonstrate that AI tutoring, when scaffolded by teachers and aligned with curricular goals, can outperform even traditional or active learning strategies in supporting student growth (De Simone et al., 2025; Kestin et al., 2025). These gains are only realized, however, when AI is integrated as a structured, ethical, and intentional partner in learning (Mollick, 2025).

Responsible AI literacy is not merely technical, but ethical and civic in nature (Ma et al., 2025). Educators should ensure that AI integration develops students’ capacity to reflect on, critique, and take responsibility for their use of AI, attending to issues like bias, transparency, privacy, and fairness (Ma et al., 2025; Clark & van Kessel, 2025). Co-intelligence challenges us to ask when AI meaningfully advances the goals of social studies and when it might impede them. When aligned appropriately, AI becomes a powerful co-intelligent partner that supports human-centered and future-oriented civic learning.

Pillar 2. Integrate AI in Ethical and Equitable Ways

Responsible use of AI tools is critical in education as AI can serve as both an equalizer and an amplifier of existing inequities (Adams et al., 2023). Yet, there is no reason why AI should share our worldview, ethics, or morality (Mollick, 2024). Social studies is strategically positioned to bring these limitations to light while also fostering generations of critically engaged, responsible AI users who navigate AI with awareness, integrity, and equality.

Ethical Considerations

The ethical considerations surrounding the use of AI in social studies classrooms are multifaceted, encompassing issues related to data privacy, academic integrity, and the responsible use of AI-generated content. As AI becomes more prevalent in educational settings, educators must ensure that students understand the ethical implications of using AI tools and that schools adhere to legal and ethical standards for data protection. Social studies education, which emphasizes civic responsibility and ethical reasoning, provides a particularly relevant context for teaching students how to use AI in a way that upholds these values.

Any conversation about the ethical use of AI tools must begin with their potential environmental impact. Simply put, the environmental cost of increasingly powerful GenAI tools calls into question whether any use of these technologies can be considered ethical or responsible. Although studies have reported on the electricity consumption of the digital sector (de Vries, 2023; Ligozat et al., 2022), the novelty of GenAI tools and their utilization of specialized computing resources necessitates a deeper look into the environmental impact of these technologies.

At present, there are many unknowns about the environmental impact and sustainability of GenAI technologies, but preliminary research indicates the use of GenAI, as with other digital services, has a significant impact on energy production and mining in addition to carbon emissions when considering the physical equipment, web servers, complex networks, and other resources necessary to operate these tools (Berthelot et al., 2024). Questions about the environmental impact of these technologies must be weighed alongside their potential benefits for us to more accurately depict the cost of AI use. For example, while GenAI tools may be useful for improved climate modeling (Larosa et al., 2023), we must also consider the potential environmental cost of using the tool for precisely that purpose (van der Ven et al., 2023).

Another pressing ethical concern regarding AI in education is data privacy and protection (UNESCO, 2023). AI systems often require access to personal information, such as student performance data, to provide personalized learning experiences (Bahroun et al., 2023). The ability to tailor learning experiences that adapt to individual student needs is particularly valuable in social studies, where students may require varied levels of support depending on their background knowledge and skill levels (Luckin et al., 2016). While this can enhance learning outcomes, it also raises significant concerns about how student data are collected, stored, and used. Educators and school administrators must ensure that the AI tools they adopt comply with data protection laws, such as the Family Educational Rights and Privacy Act (FERPA) in the United States and the General Data Protection Regulation (GDPR) in Europe (Shneiderman, 2020; Williams et al., 2024).

Transparency is key to ethical data use. Schools should inform students and their families about how AI tools collect and process data, and they must obtain informed consent before using these technologies in the classroom. Educators should also be vigilant about the terms and conditions of the tools they utilize, including the security measures in place to protect student data from breaches or unauthorized access. This is particularly important in an era when cyberattacks on educational institutions are becoming more common; educators must ensure that AI systems storing sensitive information use encryption and other safeguards to protect student privacy (Rahimi & Talebi Bezmin Abadi, 2023).

The use of GenAI in education has also sparked significant debate over academic integrity. AI tools like ChatGPT can generate essays, summaries, and even research papers with minimal input from users, which presents a challenge for educators seeking to assess authentic student work (Haque et al., 2022; Mollick & Mollick, 2023). While AI can be a valuable tool for enhancing learning, students can also misuse it to complete assignments without demonstrating their own understanding of the material (Tlili et al., 2023). This blurs the line between AI-assisted learning and questions of academic integrity. To address this issue, educators must establish clear guidelines on the appropriate use of AI in assignments and projects.

For example, teachers can specify which tasks can be completed with AI assistance (such as brainstorming or generating initial ideas) and which tasks must be completed independently (such as final writing and analysis). Educators should emphasize the importance of academic integrity and explain that while AI can support learning, it should not replace the critical thinking and creativity required for academic success (Cooper, 2023; Kasneci et al., 2023).

In addition to setting expectations, schools have the option to implement AI detection tools that identify when AI has been used to generate content. However, these tools are not foolproof, and numerous studies point to the challenges of reliable detection of AI-generated content (Khalil & Er, 2023; Parker et al., 2024; Tlili et al., 2023). Educators are therefore left to rely on their knowledge of students’ writing styles and capabilities to identify and address potential misuse. For instance, a sudden shift in writing quality or style may indicate that a student has used AI to complete an assignment (Rahimi & Talebi Bezmin Abadi, 2023). By fostering a culture of academic integrity, educators can encourage students to use AI responsibly and ethically.

Equitable Access to AI Tools

Equitable access to AI tools is another critical component of promoting inclusivity in social studies education (UNESCO, 2023). The digital divide, disparities in access to technology based on socioeconomic status, remains a significant barrier for many students, particularly those from low-wealth families or under-resourced schools (Wiliam, 2017). While AI can enhance learning experiences, students who lack access to the necessary devices or reliable internet connections may be unable to fully participate in AI-enhanced activities. To address this issue, schools and policymakers must prioritize investments in educational technology infrastructure to ensure that all students, regardless of their socioeconomic background, have access to AI tools (Bahroun et al., 2023; Heafner & Ziv, 2024a). Providing school-based access to AI resources, such as computer labs or loaner devices, can help bridge the digital divide and create a more equitable learning environment. Moreover, educators can design AI-based activities conducted in the classroom to accommodate students with limited access to technology at home, ensuring that all students can engage meaningfully with the material.

AI tools can also play a pivotal role in supporting students with disabilities, making social studies content more accessible to a wider range of learners. Generative AI systems can be integrated with assistive technologies, such as text-to-speech software or real-time translations, to support students with visual or hearing impairments, as well as MLs (Subaveerapandiyan et al., 2023). For example, an AI-powered text-to-speech tool could read aloud primary source documents or historical texts, making them accessible to students with reading difficulties, or read and translate the sources in students’ native languages.

Additionally, AI can help educators create differentiated lessons that meet the unique needs and learning preferences of students (UNESCO, 2023). By generating multiple explanations of a single historical concept, GenAI tools create the opportunity for students to access the content in a way that best suits their learning preferences and culturally dynamic interests, promoting a more inclusive, supportive classroom environment. Educators should leverage these adaptive technologies to create equitable opportunities for all students to engage in social studies content, ensuring that no student is inadvertently excluded from the learning process.

Addressing Challenges: Integrate AI Gradually

Given the potential challenges of introducing modern technologies into classrooms, it is advisable to integrate AI gradually. A phased approach allows educators to experiment with AI tools in a controlled manner, identifying any technical or ethical issues before scaling up their use (Tlili et al., 2023). Initially, AI tools can be used in low-stakes activities, such as generating discussion prompts or summarizing readings, giving students an opportunity to become familiar with AI without feeling overwhelmed by its capabilities (Imran et al., 2023).

As educators and students become more comfortable with AI, the complexity of tasks involving AI can increase. For example, students might use AI to conduct research on historical events, evaluate primary and secondary sources, or create multimedia presentations that integrate AI-generated content. By gradually increasing the difficulty of AI-related tasks, educators can scaffold students’ learning experiences, helping them build the skills necessary to critically engage with AI-generated materials. This approach also provides teachers with the flexibility to refine their instructional strategies, adjusting how AI is used based on student feedback and learning outcomes (Berson & Berson, 2024a). Professional development opportunities should accompany this gradual integration, offering educators the training they need to effectively navigate the challenges of AI use in education (Adams et al., 2023; Bahroun et al., 2023; Heafner & Ziv, 2024a).

Pillar 3. Verify Information and Outputs Consistently

Generative AI tools like ChatGPT, Gemini, and Copilot can generate text-based content, including summaries, explanations, and even historical narratives. However, these tools are not without limitations. AI systems, particularly those trained on large datasets, often reflect societal biases, which can perpetuate stereotypes or exclude historically marginalized perspectives (Darics & Poppel, 2023; Haenlein & Kaplan, 2019; Kung et al., 2023; Mollick & Mollick, 2023; UNESCO, 2023). Because AI systems are imperfect and may fail to adequately represent these historically marginalized perspectives, educators must play an active role in curating and supplementing AI-generated content (IEAIED, 2021). Yet even these limitations present important social studies inquiries, such as how GenAI might be used to (a) scrutinize whose histories are centered or omitted or (b) amplify underrepresented voices rather than erasing them.

Foster Awareness of AI’s Potential for Biased or Inaccurate Output

In social studies education, where understanding diverse viewpoints and accurate representation of historical events is essential, it is critical that students learn to evaluate AI-generated content with a discerning eye. GenAI models, such as ChatGPT or Gemini, may generate biased or incomplete historical narratives and cultural representations, overemphasizing Western perspectives while minimizing the contributions of non-Western civilizations, Indigenous peoples, and historically marginalized communities (Bender et al., 2021; Clark & van Kessel, 2024, 2025). This makes it crucial for students to engage with multiple perspectives and to practice multiperspectivity to grasp the complexities of historical events from a well-rounded view.

Clark and van Kessel (2024) found that AI models like ChatGPT and Bing frequently failed to incorporate critical perspectives on topics such as Indigenous sovereignty and the Civil Rights Movement, instead defaulting to hegemonic, Western-centric narratives that minimized the contributions of marginalized communities. To promote inclusivity, educators should select AI tools designed with fairness and inclusivity in mind.

One practical application of this pillar is encouraging students to engage in activities that highlight and address bias in AI-generated content. For example, if a GenAI tool generates a summary of the Gettysburg Address, educators can instruct students to compare the GenAI output with the actual text of the speech. By highlighting any discrepancies between the original speech and the AI-generated version, students can develop their ability to critically evaluate content and understand the importance of relying on original historical documents. Similarly, when studying the colonization of the Americas, educators might have students generate AI responses from both European and Indigenous perspectives. This exercise can help students identify which perspectives are overrepresented in AI-generated content and which are omitted, encouraging a more critical and inclusive analysis of historical events (Lee et al., 2021).
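As a lightweight scaffold for the Gettysburg Address comparison described above, the hypothetical Python sketch below flags a handful of teacher-selected key phrases from the original speech and reports whether each survives in a pasted AI-generated summary. The file names and phrase list are assumptions chosen for illustration; students could just as easily make the comparison by hand.

```python
# Illustrative sketch: check whether key phrases from the original speech
# survive in an AI-generated summary. File names and phrases are examples only.
original = open("gettysburg_address.txt", encoding="utf-8").read().lower()
ai_summary = open("ai_summary.txt", encoding="utf-8").read().lower()

key_phrases = [
    "four score and seven years ago",
    "all men are created equal",
    "government of the people, by the people, for the people",
    "a new birth of freedom",
]

for phrase in key_phrases:
    status = "retained in AI summary" if phrase in ai_summary else "missing from AI summary"
    print(f"{phrase!r}: {status} (appears in original: {phrase in original})")
```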

Invite Students to Investigate Examples of Bias

Another approach is to use GenAI to generate multiple perspectives on a single historical event and have students debate the merits and shortcomings of each AI-generated perspective. For instance, students could prompt a GenAI tool to generate narratives about the causes of World War I from different national viewpoints (e.g., German, French, British). Afterward, students can evaluate the accuracy and bias in each narrative, discussing which perspectives are emphasized or omitted. This process helps students develop critical thinking skills by encouraging them to analyze how diverse groups interpret historical events and the potential biases embedded in AI outputs.

In addition to history, social studies also involves civic education. AI tools can be used to generate explanations of how laws are passed or how the electoral college works. However, because AI systems may oversimplify or misunderstand procedural complexities, educators can teach students to fact-check these AI-generated explanations by referencing trusted civic education resources, such as government websites or peer-reviewed texts on American government. This approach aligns with the NCSS’s emphasis on promoting informed civic engagement (NCSS, 2013).

Another strategy is to encourage students to conduct bias audits of AI-generated responses. For instance, when discussing colonization, students can use AI tools to generate narratives from the perspectives of both colonizers and the colonized. This exercise helps students identify which perspectives are privileged and which are downplayed or excluded in AI-generated content (Lee et al., 2021). Such practices align with social studies’ broader goal of teaching students to critically evaluate sources and understand how power and perspective shape historical narratives (NCSS, 2013, 2022).

Moreover, Clark and van Kessel (2024, 2025) argued that without critical prompting, AI models perpetuate dominant narratives, necessitating educators’ active role in guiding students to interrogate AI-generated materials for inherent biases and inaccuracies. This work requires students to be attuned not only to the factual accuracy of AI-generated content but also to its potential ideological and epistemological biases, such as the dangers of ahistoricism and misrepresentation (Clark & van Kessel, 2025), particularly when AI reflects dominant cultural assumptions or obscures marginalized perspectives.

Another powerful strategy for using generative AI in social studies classrooms is the creation of and engagement with AI-generated personas; for example, historical, philosophical, or culturally situated figures such as Frederick Douglass, James Baldwin, or a bias-detecting editor trained in African Diaspora scholarship to critique, question, and dialog with various sources. These personas can serve as dynamic, dialogic tools that invite students into deeper inquiry, allowing them to explore historical events, civic texts, and contemporary issues through the lens of diverse intellectual traditions.

When students interact with AI simulations of these figures, they practice multiperspectivity and historical empathy, gaining insights into how perspectives shaped by race, resistance, philosophy, or identity challenge dominant narratives and historical silences. For example, engaging with an AI representation of Douglass or Baldwin can illuminate how foundational civic texts like the Constitution or the Declaration of Independence are perceived through a lens of lived injustice, prompting students to grapple with contradictions between democratic ideals and historical realities. Similarly, using an AI persona trained in Black feminist or postcolonial critique to evaluate textbook language or media articles can help students uncover implicit bias and surface historically marginalized viewpoints.

These activities not only foster critical thinking and inquiry-based learning but also support culturally responsive pedagogy by amplifying underrepresented voices in the curriculum. However, educators must frame such engagements carefully, clarifying that these personas are simulations derived from patterns in training data, not exact reproductions, and guiding students to reflect critically on the ethical and epistemological dimensions of “speaking for” historical figures. In this way, AI personas become not replacements for human insight, but tools that provoke critical analysis, comparative interpretation, and deeper civic understanding.

Incorporate Ethical Discussions Into the Curriculum

AI’s presence in the classroom provides an opportunity for educators to engage students in discussions about the ethical implications of these technologies. In social studies, where students examine the impact of technology on society, AI can serve as a case study for exploring broader ethical issues such as bias, surveillance, and data privacy (Williams et al., 2024). Educators should integrate ethical discussions into their lessons, encouraging students to reflect on the role AI plays in shaping public discourse and historical narratives. Students should be encouraged to think critically about how AI-generated content might influence their understanding of history or civic issues. For example, teachers can ask students to analyze how AI tools present different perspectives on historical events and discuss whether these perspectives are complete or biased (Berson & Berson, 2024b).

Additionally, students can debate the ethical implications of AI in the public sphere, exploring topics such as AI-driven decision-making in politics, law enforcement, and business. By integrating ethical discussions into education spaces, educators help students develop the skills they need to navigate an increasingly AI-driven world while promoting responsible digital citizenship. These discussions can also reinforce students’ understanding of the importance of critical thinking and inquiry-based learning, core tenets of social studies education (NCSS, 2013, 2022).

Pillar 4. Informed Inquiry: Using AI to Explore Complex Social Studies Questions           

Inquiry-based learning is a principal component of social studies education, as it encourages students to explore historical events, civic processes, and social issues through questioning, investigation, and critical thinking. Generative AI tools offer a new dimension to inquiry-based learning by providing students with access to vast amounts of information and interactive experiences that can deepen their understanding of historical and civic concepts. Using GenAI to inform inquiry in the social studies classroom also creates opportunities for students to build critical thinking and media literacy skills, which are essential for navigating the complexities of digital information (Breakstone et al., 2020). By integrating AI into the inquiry-based learning model, educators can enhance students’ ability to engage with complex topics, develop research skills, and connect historical events to contemporary issues.

However, for AI to support inquiry rather than shortcut it, the technology must be embedded within a structured learning cycle aligned with core disciplinary practices and curricular objectives, as noted in Pillar 1. The NCSS (2013) C3 Framework provides a solid foundation for this process through its emphasis on developing questions, applying disciplinary tools, evaluating sources, and taking informed action. When integrated responsibly, AI can support these processes rather than shortcut them. Furthermore, it is essential for social studies educators to explicitly discuss with students the differences between AI as a shortcut and AI as a scaffold to the inquiry process. Table 2 provides examples of these types of teaching points for educators.

Table 2
Examples of AI as an Inquiry Shortcut vs. an Inquiry Scaffold

Shortcut: A student prompts GenAI with a question and copies the answer directly (e.g., “How did the New Deal directly impact the 1930s economy?”).
Scaffold: Students draft an initial question, use AI to brainstorm sub-questions for further investigation, complete a student-created reflection on the quality and purpose of the sub-questions, and then verify all AI responses using source material provided by the instructor.

Shortcut: AI is used to generate an essay on a student’s behalf.
Scaffold: AI helps generate an outline for an upcoming writing assignment or provides feedback on an existing student draft.

Shortcut: A student asks an AI tool for a list of facts to memorize for an upcoming test.
Scaffold: AI is used to simulate multiple historical perspectives, and the student then critiques the AI output for omissions or biases based on their content knowledge from other source material provided in class.

Educators may find a few key strategies helpful for encouraging students to leverage AI as an inquiry support rather than an inquiry shortcut. First, educators can begin by asking students to submit their original work before AI is introduced into the learning process. Then, when AI is integrated into the learning experience, students can be encouraged to highlight or annotate which portions of the activity were AI-generated versus human-created. As an additional point of evidence, educators may also choose to ask students to provide a detailed record of their interaction with an AI tool, including verbatim prompts and outputs. Students can then apply the principles of Pillar 3 as they verify the AI outputs, reflecting on what the tool may have missed, oversimplified, or outright hallucinated. In each instance, though, students are encouraged to engage in meaningful inquiry experiences, leveraging AI tools in a planned, structured manner that emphasizes and reinforces the importance of student reasoning. Additional examples of how AI can be used as a supportive tool to encourage inquiry-based learning among students are provided in the sections that follow.
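For classrooms where students access GenAI through a script or course-built tool rather than a chat interface, the sketch below illustrates one way such an interaction record could be captured automatically: each prompt and verbatim response is appended to a simple JSON log the student can submit alongside their own work. It assumes the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the model name and file name are placeholders, and a copy-pasted chat transcript serves the same purpose.

```python
# Illustrative sketch: log verbatim prompts and outputs so students can submit
# their full AI interaction record. Assumes the OpenAI Python SDK (openai>=1.0)
# and an API key in OPENAI_API_KEY; model and file names are placeholders.
import datetime
import json
from pathlib import Path

from openai import OpenAI

client = OpenAI()
LOG_FILE = Path("ai_interaction_log.json")  # hypothetical log file

def ask_and_log(prompt: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    # Append this exchange to the running log.
    log = json.loads(LOG_FILE.read_text()) if LOG_FILE.exists() else []
    log.append({
        "timestamp": datetime.datetime.now().isoformat(),
        "prompt": prompt,
        "output": answer,
    })
    LOG_FILE.write_text(json.dumps(log, indent=2))
    return answer

# Example: the student records both the question asked and the model's verbatim reply.
ask_and_log("Suggest three sub-questions I could investigate about the New Deal's economic impact.")
```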

Generate Multiperspectivity

One of the key applications of AI in fostering inquiry-based learning is its ability to support historical inquiry by providing students with access to diverse perspectives and sources of information. AI tools can generate narratives, timelines, and simulations that allow students to explore historical events from multiple angles, helping them understand the causes, consequences, and significance of those events (Imran et al., 2023). For example, when studying the causes of the American Revolution, students can use GenAI to generate perspectives from both the British and American sides, comparing the motivations and grievances of each group. Similarly, GenAI could be a guide in exploring the contestations of the 14th Amendment. In each case, GenAI helps students engage in deeper analysis of conflicts, divisions, strategies, compromises, or resolutions while also encouraging them to consider how different historical actors experienced and interpreted events.
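The sketch below shows how such a side-by-side comparison might be generated programmatically; it assumes the OpenAI Python SDK, and the model name, prompt wording, and personas are illustrative placeholders. Students would typically do the same thing through a chat interface; the point is the structure of the exercise, prompting for parallel perspectives and then comparing the outputs for emphasis and omission.

```python
# Illustrative sketch: request parallel accounts of the causes of the American
# Revolution from two viewpoints so students can compare emphasis and omissions.
# Assumes the OpenAI Python SDK; the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

perspectives = [
    "a British loyalist writing in 1775",
    "an American colonist sympathetic to the patriot cause in 1775",
]

for persona in perspectives:
    prompt = (
        f"In about 150 words, explain the causes of the American Revolution "
        f"from the point of view of {persona}. Make clear which grievances "
        f"and motivations you emphasize."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- Perspective: {persona} ---")
    print(response.choices[0].message.content)
    print()
```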

In addition to generating narratives, AI can assist students in exploring primary source documents, such as photographs, art, music, speeches, letters, and legal texts. By using AI-powered search tools, students can quickly locate and analyze primary sources relevant to their inquiry, allowing them to draw connections between historical evidence and the broader historical context (Lee et al., 2021). However, they should also recognize that generated outputs or curation of sources are never neutral. As with any knowledge archive, the purposes are governed by the ecosystems of creation (Trouillot, 1995). These GenAI approaches align with the goals of social studies education, which emphasize the importance of using primary sources to understand history and develop critical thinking skills (NCSS, 2013, 2022).

Simulate Civic Processes

AI tools can support inquiry-based learning by allowing students to participate in simulations of historical events and civic processes. For instance, students could use an AI-powered tool to participate in a mock election, exploring how campaign strategies, voter behavior, and electoral systems influence election outcomes. This not only helps students understand the mechanics of the electoral process but also encourages them to think critically about the role of citizens in a democracy (Heafner & Ziv, 2024b; Talan, 2021). By integrating AI into these simulations, educators can create opportunities for students to engage in authentic inquiry that connects historical knowledge to contemporary civic issues.
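Even without a dedicated AI platform, a simple classroom model can anchor this kind of simulation. The sketch below is a toy example with invented numbers, not real election data: it contrasts a winner-take-all “electoral” tally with the popular vote across three hypothetical states, the kind of discrepancy students might then probe further through AI-generated scenarios or debate.

```python
# Toy, illustrative sketch: a three-"state" mock election showing how a
# winner-take-all electoral system can diverge from the popular vote.
# All numbers are invented for classroom discussion, not real data.
states = {
    # state: (votes for Candidate A, votes for Candidate B, electoral votes)
    "State 1": (520_000, 480_000, 10),
    "State 2": (510_000, 490_000, 10),
    "State 3": (300_000, 700_000, 12),
}

popular = {"A": 0, "B": 0}
electoral = {"A": 0, "B": 0}

for name, (a_votes, b_votes, ev) in states.items():
    popular["A"] += a_votes
    popular["B"] += b_votes
    winner = "A" if a_votes > b_votes else "B"
    electoral[winner] += ev  # winner-take-all allocation

print("Popular vote totals:", popular)    # Candidate B leads the popular vote here
print("Electoral vote totals:", electoral)  # Candidate A wins the electoral count 20-12
```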

Connect Historical Thinking to Current Events

In social studies, inquiry-based learning is not limited to understanding the past; it also involves drawing connections between historical events and contemporary social, political, and economic issues. AI tools can facilitate this process by helping students explore how the lessons of history apply to current events (Berson & Berson, 2024b). For example, when studying the Civil Rights Movement, students can use AI to compare the strategies and goals of civil rights leaders with those of contemporary social justice movements. This encourages students to think critically about the ongoing struggle for equality and to consider how historical movements have shaped modern debates about race, justice, and civil rights.

In addition, AI can help students investigate the ways historical policies, such as New Deal programs, continue to impact modern economic systems and social welfare policies. By connecting historical inquiry to contemporary issues, AI tools encourage students to see history as relevant and dynamic, rather than as a static collection of past events (NCSS, 2022). This approach fosters civic engagement by helping students understand how historical knowledge informs their roles as citizens in a democratic society.

AI-supported inquiry is most beneficial when it preserves and even extends students’ roles as investigators. When used creatively and critically, AI tools can help to create opportunities for students to engage in complex, civic-minded thinking with 21st-century digital tools. In each instance, though, it must remain clear that the inquiry belongs to the student, not the technological tool.

Pillar 5. Cultivate Future-Ready Citizens: Building AI Literacy for Civic Life

As artificial intelligence continues to permeate all aspects of modern life, revisiting two of Melvin Kranzberg’s (1986) laws is prudent. Kranzberg, a historian of technology, synthesized decades’ worth of study of the development and implementation of technologies into a set of six laws. Kranzberg’s first law states, “Technology is neither good nor bad; nor is it neutral” (p. 545), and his list concludes with the sixth law: “Technology is a very human activity, and so is the history of technology” (p. 557). These two principles reveal an essential truth: just as the present is shaped directly by the consequences of historical decisions, so too will the technological future be shaped by the decisions humans continue to make as we navigate an increasingly AI-influenced world.

Development of AI literacy skills is increasingly important because students need to understand their agency in making present decisions and their capacity to consider critically the consequences of those decisions. Agency is their power to be the human leaders in the loop. AI literacy involves understanding how AI systems work and how humans interact with these technologies, recognizing their limitations, and critically engaging with AI-generated content. In social studies education, AI literacy is especially relevant because of the growing role AI plays in shaping human lives, from permeating public discourse to influencing political decisions and impacting civic engagement. To cultivate future-ready citizens for an AI-integrated world, educators must integrate AI literacy into the curriculum, ensuring that students are equipped with the knowledge and skills to navigate the ethical, social, and political consequences of AI.

This pillar does not suggest that AI systems, or the companies that create them, are inherently designed to promote democracy or civic education. In fact, as many critics (e.g., Haenlein & Kaplan, 2019; Reia et al., 2025) have rightly pointed out, most commercial AI companies operate with opaque algorithms and business models that raise significant concerns about surveillance, data privacy, and democratic accountability. Rather, our argument is that social studies educators play a vital role in helping students critically examine and navigate these AI-driven systems. The aim is to cultivate future-ready citizens who possess the critical digital literacy skills to interrogate AI’s impact on society, recognize its potential threats to democracy, and exercise agency in holding technologists, policymakers, and institutions accountable (Clark & van Kessel, 2025; Ma et al., 2025).

Digital Citizenship

Promoting responsible digital citizenship is essential for educators in the era of AI. As students increasingly rely on AI for research and learning, it is critical that they understand the broader ethical implications of these technologies, including issues related to bias, misinformation, and the societal impact of AI-generated content (IEAIED, 2021; UNESCO, 2023; Zeide, 2017). Social studies classrooms, which are designed to prepare students for informed civic participation, offer an ideal setting for exploring these topics. One way to promote digital citizenship is by teaching students to critically evaluate the sources of AI-generated content. For example, when using AI to generate historical narratives, students should be encouraged to investigate the datasets that inform the AI’s responses and critically consider the implications when the data used to train an AI is not acknowledged or readily available. Is it possible for students to easily discern the dataset used to inform the AI’s response? If so, are these datasets diverse and inclusive? Do they reflect multiple perspectives, or are they biased toward a particular viewpoint?

However, because most commercially available GenAI tools are trained on proprietary or undisclosed datasets, students and educators are often simply unable to verify their origins (Haenlein & Kaplan, 2019). Rather than demanding a level of dataset transparency that may not be feasible, educators should help students ask critical questions about how these systems are trained, what assumptions the systems may encode, and what perspectives might be excluded (Adams et al., 2023; Bender et al., 2021). By examining the origins, or the lack of transparency about the origins, of AI-generated content, students can develop a more critical understanding of how AI shapes the information they encounter (Berson & Berson, 2024a, 2024b). Educators and students can also use this line of reasoning to determine whether or not to integrate a specific AI tool into their curriculum or learning experiences. If the dataset used to train an AI system is unclear, we suggest searching for an alternative tool with greater transparency (Cherner et al., 2024).

Another aspect of digital citizenship is opening a critical dialog with students about the potential societal impact of AI, including its role in shaping public discourse and influencing political decisions (IEAIED, 2021). For instance, students can explore how AI-driven algorithms on social media platforms affect the spread of misinformation or how AI tools are used in decision-making processes in government and law enforcement (Williams et al., 2024). These discussions not only enhance students’ understanding of AI but also prepare them to navigate the ethical challenges posed by AI in their everyday lives.

In addition, students should be encouraged to reflect on how AI systems can be used to manipulate information, particularly in the context of disinformation and deepfake technologies. Educators can teach students to recognize deepfakes (AI-generated videos or audio recordings that convincingly depict people saying or doing things they never actually did) and to discuss the implications of these technologies for truth and trust in the media. By fostering media literacy skills, educators prepare students to engage with digital content critically and responsibly in ways that support human agency (Shneiderman, 2020), while remaining cautious about overstating AI’s democratic potential without also addressing its risks and potential unintended consequences (Reia et al., 2025; Williams et al., 2024).

In a world where AI companies often act as data brokers and wield disproportionate influence over information flows, social studies education must foreground digital skepticism and critical analysis. Educators should equip students not only to use AI tools, but also to question the motives, design, and governance of these technologies, asking, “Whose interests do these systems serve?” “Who is harmed or excluded?” “What forms of surveillance or manipulation are enabled by AI?” (Bender et al., 2021; Haenlein & Kaplan, 2019).

Fostering AI Literacy

One of the most critical aspects of AI literacy is understanding how generative AI systems may influence the information students encounter. AI algorithms, especially those used by social media platforms and search engines, play a key role in curating content, shaping public opinion, and amplifying certain narratives over others (UNESCO, 2023). This has profound implications for social studies education, where students are taught to analyze media, interpret historical events, and engage with civic issues. Educators should teach students how AI systems determine what content is presented to users and how algorithms prioritize certain information based on data patterns and user behavior (Darics & Poppel, 2023; IEAIED, 2021; Shneiderman, 2020).

For example, in a lesson on media literacy, students could explore how AI algorithms influence the articles and news stories they see on social media platforms. They can examine how these algorithms might reinforce filter bubbles or echo chambers, where individuals are exposed only to content that aligns with their preexisting beliefs (UNESCO, 2023; Zeide, 2017). This analysis helps students recognize the role AI plays in shaping their understanding of current events, historical narratives, and political ideologies.
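A toy model can make this feedback loop visible. The sketch below is an invented illustration, not any real platform’s algorithm: it simply reinforces whatever topic a simulated user clicks, so the “feed” quickly narrows to a single topic, a minimal picture of how filter bubbles can form.

```python
# Toy, invented sketch of a recommendation feedback loop: content similar to what
# a user already clicks is weighted higher, so the feed narrows over time.
import random

topics = ["local politics", "sports", "world news", "science", "opinion"]
preference = {t: 1.0 for t in topics}  # the algorithm's belief about user interest

def recommend() -> str:
    # Show the topic with the highest current weight.
    return max(topics, key=lambda t: preference[t])

random.seed(1)
for step in range(10):
    shown = recommend()
    clicked = random.random() < 0.8  # the user usually clicks what they are shown
    if clicked:
        preference[shown] += 0.5     # reinforce whatever was clicked
    print(f"step {step}: showed {shown!r}; weights = {preference}")

# After a few iterations, one topic dominates the feed: a simple "filter bubble."
```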

Another important aspect of AI literacy is understanding how AI systems are used in public discourse and decision-making processes. AI tools are increasingly used in areas such as law enforcement, voting systems, and public policy, which makes it essential for students to understand the ethical and social implications of these technologies (Williams et al., 2024). For example, students could investigate how AI is used in predictive policing, where algorithms are employed to anticipate where crimes are likely to occur based on historical data (Rahimi & Talebi Bezmin Abadi, 2023).

This raises important ethical questions about bias, fairness, and the potential for AI to perpetuate systemic inequalities. In a social studies classroom, educators can use case studies and real-world examples to help students explore the societal impacts of GenAI. For instance, students could examine the role of AI in the 2020 U.S. presidential election, analyzing how AI tools were used in political campaigns, voter outreach, and media coverage. By discussing the ethical dilemmas associated with GenAI’s role in shaping public opinion and political outcomes, students develop a deeper understanding of how AI influences democratic processes and civic life (Berson & Berson, 2024a; Mollick, 2024). Educators and students can use informed inquiry and skepticism to question AI-driven narratives, recognize the influence of GenAI in decision-making, assess its social footprint, scrutinize the motives of technologists, and hold governments accountable for ensuring transparency, ethical use, and digital equity.

Promoting AI literacy must include a focus on the ethical dimensions of AI use. As AI systems become more integrated into everyday life, students need to understand the ethical issues related to data privacy, algorithmic transparency, and the societal impacts of AI. In a social studies context, educators can explore how AI intersects with issues such as justice, equity, and human rights (Heafner & Ziv, 2024b; Talan, 2021). For example, students might investigate how AI is used in hiring practices, healthcare, or criminal justice, analyzing the ethical concerns associated with AI decision-making in these areas. Through classroom discussions, case studies, and ethical debates, educators can help students explore the trade-offs between innovation and ethical responsibility in AI development. This approach encourages students to consider how AI technologies should be regulated and what ethical principles should guide their use in society (Zeide, 2017).

Rather than positioning AI as a force for democracy by default, social studies educators must teach students to recognize the power dynamics, ethical dilemmas, and civic risks inherent in these technologies. Critical AI literacy, therefore, means preparing students to advocate for transparency, equity, and democratic accountability in how AI systems are created, governed, and used in society (Clark & van Kessel, 2025; Ma et al., 2025). By integrating ethical considerations into AI literacy, educators empower students to think critically about the role of technology in shaping the future.

Integrate AI-Specific Professional Learning Experiences

The successful implementation of AI in social studies requires that educators be well-equipped to navigate both the technical and ethical aspects of AI tools. Professional development programs that address AI’s capabilities, limitations, and ethical implications are essential for helping teachers integrate these tools responsibly (Adams et al., 2023; Chan & Zhou, 2023; Jo, 2023; Kasneci et al., 2023; Shneiderman, 2020; Stöhr et al., 2024). Educators must understand how to use AI to enhance student engagement while mitigating risks such as bias, data privacy concerns, and overreliance on technology.

Teacher collaboration plays a critical role in the effective use of AI. By working together, educators can share best practices, lesson plans, and classroom experiences related to AI integration (Bergmark, 2023). Collaboration can occur within teacher-led professional learning communities, creating opportunities for educators to explore innovative ways to use AI to support student inquiry and critical thinking. Additionally, teachers can reflect on how AI has been used in their classrooms, adjusting based on feedback from peers and students. Ongoing professional development should not only focus on the technical aspects of using AI but also address the ethical questions it raises (Adams et al., 2023). For instance, how can teachers balance AI’s ability to generate content with the need to develop students’ own research and writing skills? How can educators ensure that AI tools are used to promote equity rather than exacerbate the digital divide? Addressing these questions through professional development helps educators develop their AI literacy, improving their capacity to make informed decisions about AI integration (Chan & Zhou, 2023; Stöhr et al., 2024; Tlili et al., 2023).

Monitor and Evaluate AI’s Impact on Learning Outcomes

The novelty of GenAI technologies like ChatGPT, which was only released to the public in November 2022, limits our present knowledge of these tools’ long-term impacts on education and the opportunities to evaluate their effects on learning outcomes (Imran et al., 2023). To ensure that AI is being used effectively in the classroom, educators must continuously monitor and evaluate its impact on student learning. Formative assessments can be used to gauge how well students are engaging with AI tools and whether these tools are contributing to their understanding of historical concepts (Shah, 2023).

Teachers can use reflective journals or digital portfolios to track students’ interactions with AI, encouraging students to document their learning processes and evaluate the accuracy of AI-generated content independent of their own reasoning. Tools like the Oregon State University Ecampus (2024) Bloom’s Taxonomy Revisited can be used to guide assessments of these skills, prioritizing distinctly human capacities such as ethical judgment, civic reasoning, and creativity.

Soliciting student feedback on the use of AI is another important strategy. By asking students how AI tools have influenced their learning experiences, teachers can make informed decisions about how to adjust their instructional strategies. For example, if students report that AI is enhancing their research skills but not their critical thinking, educators might shift their focus to activities that encourage deeper analysis of AI-generated content (Tlili et al., 2023).

Evaluating AI’s impact on learning outcomes also involves assessing whether AI tools are meeting their intended goals. Are students becoming more adept at analyzing historical events? Are they developing stronger inquiry skills? Are they critically engaging with AI-generated content? By continuously evaluating the effectiveness of AI, educators can ensure that it contributes positively to student learning. Ultimately, the work of cultivating “future-ready citizens” in an AI-driven society is not to embrace technology uncritically, but to help students become vigilant, critical, and ethically responsible actors, able to recognize when AI advances, impedes, or threatens civic life. This includes the courage to challenge the undemocratic practices of powerful technology companies and advocate for a digital future that genuinely serves democratic ideals.

Future Implications and Directions for AI in Social Studies Education

As AI and GenAI continue to evolve, their role in education, particularly in social studies, is likely to expand in both scope and influence. The ability of AI to impact the way students interact with historical content, civic education, and digital information raises important questions about the future of teaching and learning in the 21st century. This section explores the potential future implications of AI in social studies education, outlining areas for further development and research and the importance of ongoing professional development for educators.

One of the more direct developments in AI for social studies is its potential to democratize access to historical content. AI-powered tools can digitize, categorize, and make historical archives, primary sources, and artifacts more widely accessible to students around the world, including those in underserved communities. The use of AI in digitization efforts, such as the Newspaper Navigator project (Lee et al., 2021), has already begun to enhance students’ ability to explore historical documents previously limited to libraries and archives.

In the future, AI tools could expand students’ access to even more historical sources, including those from marginalized voices that are often underrepresented in traditional history curricula. For example, AI could help uncover and elevate the stories of Indigenous peoples, women, and other marginalized groups by searching archives for documents and records that might otherwise remain obscure (Shah, 2023). By expanding access to these resources, AI could promote a more inclusive and comprehensive understanding of history, helping students engage with diverse perspectives and challenging dominant narratives.

However, the development of AI tools for this purpose must be accompanied by a focus on ethical considerations, particularly regarding historical accuracy and bias. Educators and technologists must work together to ensure that AI tools are not only making historical content more accessible but also doing so in a way that reflects the complexity and nuance of historical events (Haenlein & Kaplan, 2019). Further research is needed to explore how AI can be leveraged to build more equitable access to historical education while maintaining the integrity of the sources it analyzes and presents.

Generative AI’s ability to simulate civic processes and model democratic systems also presents opportunities for fostering civic education. As students become more immersed in digital spaces, it will be increasingly important for educators to teach them how AI shapes civic engagement, media consumption, and political discourse (Williams et al., 2024). In the future, AI tools could be developed to simulate complex governmental processes, allowing students to engage in realistic role-playing exercises where they serve as legislators, judges, or political leaders in a virtual environment.

For example, students could participate in AI-driven simulations of congressional debates, where they argue for or against policies based on real-world data. AI tools could also facilitate virtual town hall meetings or voting simulations, helping students understand the intricacies of electoral processes and the impact of public policy decisions. These immersive experiences would help prepare students for active participation in democracy, fostering critical thinking about how policies are created, debated, and implemented in a real-world context (Mollick, 2024; Shah, 2023). However, the future of AI in civic education must also address the potential for bias in AI-driven political simulations. Educators will need to ensure that AI tools used in the classroom provide balanced perspectives and do not inadvertently promote partisan views or reinforce existing political biases. This will require ongoing collaboration between educators, technologists, and policymakers to develop AI systems that enhance civic engagement without compromising objectivity or fairness.

As AI-generated content continues to proliferate across digital platforms, students must develop the skills to critically evaluate the information they encounter online (Breakstone et al., 2020). Media literacy will become increasingly important in the future as students grapple with issues such as disinformation, deepfakes, and AI-generated news (Mollick & Mollick, 2023). Social studies education is uniquely positioned to teach students how to navigate this landscape, as the discipline emphasizes the critical analysis of sources and the interpretation of information in historical and contemporary contexts.

AI can play a dual role in promoting media literacy. On the one hand, AI tools can help students identify credible sources by flagging unreliable content and providing contextual information about the origins of a given news story or historical document. On the other hand, AI-generated content can also be used as a teaching tool to help students understand how disinformation spreads and how digital manipulation can be used to distort reality. For instance, students could engage in exercises where they analyze AI-generated deepfakes and learn to detect the signs of digital manipulation in images, videos, and text (Rahimi & Talebi Bezmin Abadi, 2023). To ensure that students are adequately prepared for the challenges of AI-driven misinformation, future research should focus on how AI tools can be designed to promote digital literacy while also supporting students in developing a healthy skepticism toward the content they encounter online.

The successful integration of AI into social studies education depends not only on the development of effective tools but also on the preparedness of educators to use them. As AI technologies continue to evolve, ongoing professional development will be essential for teachers to stay current on best practices for using AI in the classroom (Adams et al., 2023; Zeide, 2017). This includes training on the ethical use of AI, strategies for fostering AI literacy among students, and methods for critically evaluating AI-generated content (Chan & Zhou, 2023; Cooper, 2023; Jo, 2023; Stöhr et al., 2024).

In the future, professional development programs could focus on helping educators become more comfortable with AI tools and understanding how to incorporate them into inquiry-based learning activities. For example, teachers could participate in workshops where they learn how to use AI to create personalized learning experiences for students or how to facilitate AI-driven simulations of historical and civic events. These programs should also address the ethical challenges associated with AI use, ensuring that teachers are equipped to guide students in navigating issues related to data privacy, bias, and academic integrity (Berson & Berson, 2024a). Moreover, as AI continues to shape public discourse and political decision-making, educators must stay informed about the broader societal implications of AI technologies. This will require collaboration between educators, researchers, and policymakers to ensure that AI integration in education is aligned with the values of democracy, equity, and ethical responsibility.

As AI continues to impact the landscape of education, ongoing research will be critical to understanding its long-term impact on social studies learning outcomes. Future research should examine how AI tools influence students’ critical thinking, civic engagement, and historical understanding. For example, studies could explore how AI-driven simulations affect students’ ability to empathize with historical figures or how AI-based personalized learning systems influence students’ mastery of complex historical concepts (Lee et al., 2021).

Research should also focus on the ethical implications of AI use in the classroom, particularly in terms of data privacy and bias. It is essential to investigate how AI systems can be designed to promote inclusivity and ensure that marginalized voices are represented in AI-generated historical narratives. By continuously evaluating the impact of AI on social studies education, educators and policymakers can make informed decisions about how to best leverage AI technologies to enhance learning while minimizing potential risks (Berson & Berson, 2024a).

Conclusion

The integration of artificial intelligence (AI) into social studies education holds promise for enhancing inquiry-based learning, critical thinking, and civic engagement. However, the effective and responsible use of AI requires guidelines that align with the values of social studies education. By promoting critical evaluation, ensuring inclusivity and equity, addressing ethical considerations, fostering inquiry-driven learning, and developing AI literacy, educators can harness and interrogate the power of AI while safeguarding the integrity of social studies education.

As AI continues to evolve, it will be essential for educators to remain informed about the potential benefits and challenges of these technologies. Ongoing professional development, research, and collaboration between educators, technologists, and policymakers will be key to navigating the ethical and practical implications of AI in the classroom. With thoughtful integration, AI has the potential to enrich social studies education by making content more accessible, fostering civic engagement, and equipping students with the skills they need to navigate an increasingly AI-driven world.

However, social studies educators and learners must also reckon with the potential of GenAI to dilute the foundations of democracy if appropriate guardrails are not in place. The CIVIC pillars establish a framework that enables educators to leverage the potential benefits of AI in education while remaining mindful of the ethical challenges and societal implications, ensuring that AI and GenAI serve as tools for empowering students, fostering critical inquiry, and preparing them for the complexities of civic life in a society transformed by AI. As we move toward more advanced forms of GenAI, the focus must remain on leveraging AI as a resource to create more democratic, inclusive, equitable, and meaningful learning experiences for all students.

References

Adams, C., Pente, P., Lemermeyer, G., & Rockwell, G. (2023). Ethical principles for artificial intelligence in K-12 education. Computers and Education: Artificial Intelligence, 4, 100131. https://doi.org/10.1016/j.caeai.2023.100131

Bahroun, Z., Anane, C., Ahmed, V., & Zacca, A. (2023). Transforming education: A comprehensive review of generative artificial intelligence in educational settings through bibliometric and content analysis. Sustainability, 15(17), 12983. https://doi.org/10.3390/su151712983

Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. https://doi.org/10.2139/ssrn.4337484

Baker, T., & Smith, L. (2019). Educ-AI-tion rebooted? Exploring the future of artificial intelligence in schools and colleges. Nesta Foundation. https://media.nesta.org.uk/documents/Future_of_AI_and_education_v5_WEB.pdf

Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2025). Generative AI without guardrails can harm learning: Evidence from high school mathematics. Proceedings of the National Academy of Sciences, 122(26), e2422633122. https://doi.org/10.1073/pnas.2422633122

Bender, E. M., Gebru, T., & McMillan-Major, A. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623). Association for Computing Machinery.

Bergmark, U. (2023). Teachers’ professional learning when building a research-based education: Context-specific, collaborative and teacher-driven professional development. Professional Development in Education, 49(2), 210–224. https://doi.org/10.1080/19415257.2020.1827011

Berson, M. J., & Balyta, P. (2014). Bringing the cybersecurity challenge to the social studies classroom. Social Education, 78(2), 96-100.

Berson, I. R., & Berson, M. J. (2024a). AI in K-12 social studies education: A critical examination of ethical and practical challenges. In A. M. Olney, I. A. Chounta, Z. Liu, O. C. Santos, & I. I. Bittencourt (Eds.), Artificial intelligence in education. Posters and late breaking results, workshops and tutorials, industry and innovation tracks, practitioners, doctoral consortium and blue sky. AIED 2024. Communications in Computer and Information Science (Vol. 2150, pp. 101-112). Springer. https://doi.org/10.1007/978-3-031-64315-6_8

Berson, I. R., & Berson, M. J. (2024b). The democratization of AI and its transformative potential in social studies education. Social Education, 88(2), 114–118.

Berthelot, A., Caron, E., Jay, M., & Lefevre, L. (2024). Estimating the environmental impact of Generative AI services using an LCA-based methodology. Procedia CIRP, 122, 707–712. https://doi.org/10.1016/j.procir.2024.01.098

Breakstone, J., Smith, M., Wineburg, S., & McGrew, S. (2020). Teaching students to navigate the online landscape. Social Education, 84(4), 217-221.

Chan, C. K. Y., & Zhou, W. (2023). An expectancy value theory (EVT) based instrument for measuring student perceptions of generative AI. Smart Learning Environments, 10(64), 1-22. https://doi.org/10.1186/s40561-023-00284-4

Cherner, T., Foulger, T. S., & Donnelly, M. (2024). Introducing a generative AI decision tree for higher education: A synthesis of ethical considerations from published frameworks & guidelines. TechTrends, 69, 84–99. https://doi.org/10.1007/s11528-024-01023-3

Choate, K., Goldhaber, D., & Theobald, R. (2021). The effects of COVID-19 on teacher preparation. Phi Delta Kappan, 102(7), 52–57. https://doi.org/10.1177/00317217211007340

Clark, C. H., & van Kessel, C. (2024). “I, for one, welcome our new computer overlords”: Using artificial intelligence as a lesson planning resource for social studies. Contemporary Issues in Technology and Teacher Education, 24(2), 151- 183. https://citejournal.org/volume-24/issue-2-24/social-studies/i-for-one-welcome-our-new-computer-overlords-using-artificial-intelligence-as-a-lesson-planning-resource-for-social-studies/

Clark, C. H., & van Kessel, C. (2025). AI in social studies education: Tools for thoughtful practice with generative artificial intelligence. Teachers College Press.

Cooper, G. (2023). Examining science education in ChatGPT: An exploratory study of generative artificial intelligence. Journal of Science Education and Technology, 32(3), 444–452.

Coburn, M., Williams, S., & Stroud, C. (2022). Enhanced realism or AI-generated illusion? Synthetic voice in the documentary film Roadrunner. Journal of Media Ethics, 37(4), 282–284. https://doi.org/10.1080/23736992.2022.2113883

Darics, E., & Poppel, L. (2023). Debate: ChatGPT offers unseen opportunities to sharpen students’ critical skills. The Conversation. https://theconversation.com/debate-chatgpt-offers-unseen-opportunities-to-sharpen-students-critical-skills-199264

Darwin, Rusdin, D., Mukminatien, N., Suryati, N., Laksmi, E. D., & Marzuki. (2023). Critical thinking in the AI era: An exploration of EFL students’ perceptions, benefits, and limitations. Cogent Education, 11, 2290342. https://doi.org/10.1080/2331186X.2023.2290342

De Simone, M. E., Tiberti, F. H., Barron Rodriguez, M. R., Manolio, F. A., Mosuro, W., & Dikoru, E. J. (2025). From chalkboards to chatbots: Evaluating the impact of generative AI on learning outcomes in Nigeria (Policy Research Working Paper; Impact Evaluation Series). World Bank Group. http://documents.worldbank.org/curated/en/099548105192529324

de Vries, A. (2023). The growing energy footprint of artificial intelligence. Joule, 7(10), 2191–2194. https://doi.org/10.1016/j.joule.2023.09.004

Donovan, J., & boyd, d. (2021). Stop the presses? Moving from strategic silence to strategic amplification in a networked media ecosystem. American Behavioral Scientist, 65(2), 333-350. https://doi.org/10.1177/0002764219878229

Essel, H. B., Vlachopoulos, D., Essuman, A. B., & Amankwa, J. O. (2024). ChatGPT effects on cognitive skills of undergraduate students: Receiving instant responses from AI-based conversational large language models (LLMs). Computers & Education: Artificial Intelligence, 6, Article 100198.

Ethical Framework for AI in Education. (2020). The Institute for Ethical AI in Education. https://www.buckingham.ac.uk/research-the-institute-for-ethical-ai-in-education/

Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006

Guo, Y., & Lee, D. (2023). Leveraging ChatGPT for enhancing critical thinking skills. Journal of Chemical Education, 100(12), 4876–4883.

Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61(4), 5–14.

Haque, M. U., Dharmadasa, I., Sworna, Z. T., Rajapakse, R. N., & Ahmad, H. (2022). “I think this is the most disruptive technology”: Exploring sentiments of ChatGPT early adopters using Twitter data (arXiv:2212.05856). https://doi.org/10.48550/arXiv.2212.05856

Hardesty, L. (2017, April 14). Explained: Neural networks. MIT News. https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414

Hays, L., Jurkowski, O., & Kerr Sims, S. (2024). ChatGPT in K-12 education. TechTrends, 68, 281–294. https://doi.org/10.1007/s11528-023-00924-z

Heafner, T., & Ziv, E. (2024a). Artificial intelligence in education: Preparing educators and learners for the future. In J. Cohen & G. Solano (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference (pp. 768-778). Association for the Advancement of Computing in Education. https://www.learntechlib.org/primary/p/224039/

Heafner, T., & Ziv, E. (2024b). AI integration in social studies: A pathway to enhanced historical inquiry and civic engagement. In J. Cohen & G. Solano (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference (p. 1856-1862). Association for the Advancement of Computing in Education. https://www.learntechlib.org/primary/p/224223/

Hennessy, S., Cukurova, M., Lewin, C., Mavrikis, M., & Major, L. (2024). BJET Editorial 2024: A call for research rigour. British Journal of Educational Technology, 55(1), 5–9. https://doi.org/10.1111/bjet.13426

Imran, M., Shahid, A. R., Hou, M., & Imteaj, A. (2023). From early adoption to ethical adoption: A diffusion of innovation perspective on ChatGPT and large language models in the classroom. TechRxiv. https://doi.org/10.36227/techrxiv.170630660.06963201/v1

The Institute for Ethical AI in Education. (2021). The ethical framework for AI in education. University of Buckingham. https://www.buckingham.ac.uk/wp-content/uploads/2021/03/The-Institute-for-Ethical-AI-in-Education-The-Ethical-Framework-for-AI-in-Education.pdf

Jo, H. (2023). Decoding the ChatGPT mystery: A comprehensive exploration of factors driving AI language model adoption. Information Development, 0(0), 1–21. https://doi.org/10.1177/02666669231202764

Karpouzis, K., Pantazatos, D., Taouki, J., & Meli, K. (2024). Tailoring education with GenAI: A new horizon in lesson planning. arXiv. https://doi.org/10.48550/arXiv.2403.12071

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274

Kestin, G., Miller, K., Klales, A., et al. (2025). AI tutoring outperforms in-class active learning: An RCT introducing a novel research-based design in an authentic educational setting. Scientific Reports, 15, 17458. https://doi.org/10.1038/s41598-025-97652-6

Khalil, M., & Er, E. (2023). Will ChatGPT get you caught? Rethinking of plagiarism detection. In P. Zaphiris, & A. Ioannou (Eds.), Learning and collaboration technologies. HCII 2023. Lecture notes in computer science (Vol. 14040, pp. 475-487). Springer. https://doi.org/10.1007/978-3-031-34411-4_32

Kosar, T., Ostojić, D., Liu, Y. D., & Mernik, M. (2024). Computer science education in ChatGPT era: Experiences from an experiment in a programming course for novice programmers. Mathematics, 12(5), 629.

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braustein, I., & Maes, P. (2024). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing. MIT Media Lab. https://arxiv.org/pdf/2506.08872

Kranzberg, M. (1986). Technology and history: “Kranzberg’s Laws.” Technology and Culture, 27(3), 544-560. https://doi.org/10.2307/3105385

Krupp, L., Steinert, S., Kiefer-Emmanouilidis, M., Avila, K. E., Lukowicz, P., Kuhn, J., Küchemann, S., & Karolus, J. (2024). Unreflected acceptance: Investigating the negative consequences of ChatGPT-assisted problem solving in physics education. In HHAI 2024: Hybrid human AI systems for the social good (pp. 199–212). IOS Press.

Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepaño, C., Madriag, M., Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2(2), Article e0000198. https://doi.org/10.1371/journal.pdig.0000198

Larosa, F., Hoyas, S., Garcia-Martinez, J., Conejero, J. A., Nerini, F. F., & Vinuesa, R. (2023). Halting generative AI advancements may slow down progress in climate research. Nature Climate Change, 13, 497–499. https://doi.org/10.1038/s41558-023-01686-5

Lee, B. C., Berson, I. R., & Berson, M. J. (2021). Machine learning and social studies: Unlocking historical content through AI. Social Education, 85(2), 88-92.

Li, J. (2024). Study on the positive and negative impacts of ChatGPT on the education system. Proceedings of the 3rd International Conference on Interdisciplinary Humanities and Communication Studies, 51, 59–60. https://doi.org/10.54254/2753-7064/51/20242449

Ligozat, A., Lefevre, J., Bugeau, A., & Combaz, J. (2022). Unveiling the hidden environmental impacts of AI solutions for environment life cycle assessment of AI solutions. Sustainability, 14(9), 5172. https://doi.org/10.3390/su14095172

Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson Education.

Ma, M., Ng, D. T. K., Liu, Z., & Wong, G. K. W. (2025). Fostering responsible AI literacy: A systematic review of K-12 AI ethics education. Computers and Education: Artificial Intelligence, 8, 100422. https://doi.org/10.1016/j.caeai.2025.100422

Maxwell, D. (2023). Handle with care: Generative AI in the classroom. North Carolina Association for Middle Level Education Journal, 34(2), 8–12.

Mishra, P., Oster, N., & Henriksen, D. (2024). Generative AI, teacher knowledge and educational research: Bridging short- and long-term perspectives. TechTrends, 68(2), 205–210. https://doi.org/10.1007/s11528-024-00938-1

Mollick, E. (2024). Co-intelligence: Living and working with AI. Penguin.

Mollick, E. (2025). Against “brain damage.” One Useful Thing. https://www.oneusefulthing.org/p/against-brain-damage

Mollick, E. R., & Mollick, L. (2023). Using AI to implement effective teaching strategies in classrooms: Five strategies, including prompts. SSRN Electronic Journal, 1-16. http://dx.doi.org/10.2139/ssrn.4391243

National Council for the Social Studies. (2013). College, career, and civic life (C3) framework for social studies state standards. https://www.socialstudies.org/sites/default/files/c3/C3-Framework-for-Social-Studies.pdf

National Council for the Social Studies. (2022). Technology, digital learning, and social studies: A position statement. https://www.socialstudies.org/position-statements/technology-digital-learning-and-social-studies

National Council for the Social Studies. (2023). New definition of social studies. https://www.socialstudies.org/media-information/definition-social-studies-nov2023

Oregon State University Ecampus. (2024). Bloom’s taxonomy revisited. https://ecampus.oregonstate.edu/faculty/artificial-intelligence-tools/blooms-taxonomy-revisited/

Parker, L., Carter, C., Karaka, A., Loper, A. J., & Sokkar, A. (2024). Graduate instructors navigating the AI frontier: The role of ChatGPT in higher education. Computers and Education Open, 6, 1–13. https://doi.org/10.1016/j.caeo.2024.100166

Rahimi, F., & Talebi Bezmin Abadi, A. (2023). ChatGPT and publication ethics. Archives of Medical Research, 54(3), 272–274.

Reia, J., Forelle, M. C., & Wang, Y. (Eds.). (2025). Reimagining AI for environmental justice and creativity. Digital Technology for Democracy Lab, University of Virginia. https://libraopen.lib.virginia.edu/public_view/3n203z326

Rudolph, J., Ismail, F., & Popenici, S. (2024). Higher education’s generative artificial intelligence paradox: The meaning of chatbot mania. Journal of University Teaching and Learning Practice, 21(6), 1–35. https://doi.org/10.53761/54fs5e77

Shah, P. (2023). AI and the future of education: Teaching in the age of artificial intelligence. Jossey-Bass.

Shen-Berro, J. (2023, January 6). New York City schools blocked ChatGPT. Here’s what other large districts are doing. Chalkbeat. https://www.chalkbeat.org/2023/1/6/23543039/chatgpt-school-districts-ban-block-artificial-intelligence-open-ai/

Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe, and trustworthy. International Journal of Human-Computer Interaction, 36(6), 495–504.

Sier, J. (2022, December 8). ChatGPT takes the internet by storm, bad poetry and all. Financial Review. https://www.afr.com/technology/chatgpt-takes-the-internet-by-storm-bad-poetry-and-all-20221207-p5c4hv

Sobaih, A. E. E., Elshaer, I. A., & Hasanein, A. M. (2024). Examining students’ acceptance and use of ChatGPT in Saudi Arabian higher education. European Journal of Investigation in Health, Psychology and Education, 14(3), 709–721. https://doi.org/10.3390/ejihpe14030047

Stanley, J. (2024). Erasing history: How fascists rewrite the past to control the future. One Signal Publishers.

Stöhr, C., Ou, A. W., & Malmström, H. (2024). Perceptions and usage of AI chatbots among students in higher education across genders, academic levels and fields of study. Computers and Education: Artificial Intelligence, 7, 100259. https://doi.org/10.1016/j.caeai.2024.100259

Strzelecki, A., & ElArabawy, S. (2024). Investigation of the moderation effect of gender and study level on the acceptance and use of generative AI by higher education students: Comparative evidence from Poland and Egypt. British Journal of Educational Technology, 55(3), 1209–1230. https://doi.org/10.1111/bjet.13425

Subaveerapandiyan, A., Vinoth, A., & Tiwary, N. (2023). Netizens, academicians, and information professionals’ opinions about AI with special reference to ChatGPT. arXiv. https://doi.org/10.48550/arXiv.2302.07136

Suriano, R., Plebe, A., Acciai, A., & Fabio, R. A. (2025). Student interaction with ChatGPT can promote complex critical thinking skills. Learning and Instruction, 95, 102011. https://doi.org/10.1016/j.learninstruc.2024.102011

Talan, T. (2021). Artificial intelligence in education: A bibliometric study. International Journal of Research in Education and Science, 7(3), 822–837. https://doi.org/10.46328/ijres.2409

Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 15.

Trouillot, M.-R. (1995). Silencing the past: Power and the production of history. Beacon Press.

United Nations Educational, Scientific and Cultural Organization. (2023). Guidance for generative AI in education and research. https://doi.org/10.54675/EWZM9535

U.S. Department of Education, Office of Educational Technology. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations. ERIC. https://files.eric.ed.gov/fulltext/ED631097.pdf

van der Ven, H., Corry, D., Elnur, R., Provost, V. J., & Syukron, M. (2024). Generative AI and social media may exacerbate the climate crisis. Global Environmental Politics, 24(2), 9–18. https://doi.org/10.1162/glep_a_00747

Wang, J., & Fan, W. (2025). The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: Insights from a meta-analysis. Humanities and Social Sciences Communications, 12, Article 621. https://doi.org/10.1057/s41599-025-04787-y

Williams, S., Beery, S., Conley, C., Evans, M. L., Garces, S., Gordon, E., Jacob, N., & Medina, E. (2024). People-powered Gen AI: Collaborating with generative AI for civic engagement. MIT Leventhal Center for Advanced Urbanism. https://doi.org/10.21428/e4baedd9.f78710e6

Wiliam, D. (2017). Embedded formative assessment (2nd ed.). Solution Tree Press.

World Economic Forum. (2019). Generation AI: Establishing global standards for children and AI. https://www3.weforum.org/docs/WEF_Generation_AI_%20May_2019_Workshop_Report.pdf

Zeide, E. (2017). The structural consequences of big data-driven education. Big Data, 5(2), 164–172. https://doi.org/10.1089/big.2016.0061

Zorz, Z. (2023). A bug revealed ChatGPT users’ chat history, personal and billing data. Help Net Security. https://www.helpnetsecurity.com/2023/03/27/chatgpt-data-leak/