Launch of "Education for Sustainable Futures: Global Citizenship and the Earth Charter".
A brief online launch event will explore the book's key themes, and include short presentations from contributors.
This blog examines how generative AI intersects with global citizenship education (GCE). While large language models (LLMs) like ChatGPT are praised for expanding access to knowledge, they also risk reinforcing colonial hierarchies by privileging certain epistemologies, languages, and narratives. Far from neutral, these systems can narrow diversity, extract value from the Global South, and manipulate how learners understand global issues. To counter this, the blog argues for approaches grounded in epistemic justice, linguistic plurality, and critical AI pedagogy, ensuring that AI strengthens GCE as a project of justice and empowerment rather than homogenisation and exclusion.
Author: Michael Jones (Nazarbayev University in Kazakhstan).
The rise of Artificial Intelligence (AI) is sparking intense debate in universities and schools. Tools like ChatGPT are praised for their potential to democratise access to knowledge, giving students information in seconds that once took days to find. But in Global Citizenship Education (GCE), which aims to prepare students to think critically and act responsibly in a connected world, AI may also risk undoing hard-won progress.
The narrative of AI as an objective technology does not withstand scrutiny: AI is not a neutral gateway to knowledge. Large Language Models (LLMs) decide what information is visible, whose languages dominate, and how knowledge is framed. In doing so, they reflect and reinforce existing power relations. Far from being purely emancipatory, these systems can narrow diversity and strengthen hierarchies. For GCE, this is the central tension: while AI can expand access, it can also limit voices. To protect justice and plurality, educators must question how AI reshapes knowledge and power in the classroom.
LLMs are trained on massive datasets scraped largely from the internet. Many of the sources used are Western, and most of them are in English. The result is an epistemic imbalance in which certain traditions of knowledge are favoured while others are erased or marginalised. Quijano's (2000) concept of the coloniality of power is fundamental to explaining this: colonial logics endure not only through political or economic forms but through knowledge production itself. What is deemed legitimate knowledge is still largely defined by the epistemologies of the global North.
For Boaventura de Sousa Santos (2014), this process amounts to epistemicide, the systematic destruction of non-Western knowledge systems. Indigenous epistemologies, oral traditions, and regionally specific ways of knowing are either excluded from training data or represented inaccurately. The consequence is that when a student in Nairobi or Dhaka asks an AI system about democracy or climate change, the response is overwhelmingly filtered through liberal Western categories of thought.
Birhane (2021) describes this as algorithmic colonisation, particularly evident in Africa, where imported AI systems overwrite local realities with external assumptions. Fricker’s (2007) notion of epistemic injustice also applies: whole communities are denied credibility as knowers when their epistemologies are absent from the system. Rather than democratising knowledge, LLMs risk acting as epistemic gatekeepers, reinforcing a hierarchy that renders some voices audible and others invisible. For GCE, this exclusion is devastating: how can education claim to be global if vast domains of human knowledge are systematically omitted?
A second layer of coloniality emerges through language. LLMs are multilingual in design, but their fluency, accuracy, and nuance vary drastically between languages. Bender, Gebru, McMillan-Major, and Shmitchell (2021) show how performance is strongest for high-resource languages such as English and Mandarin, while under-resourced languages receive poor support. The imbalance reflects the uneven availability of training data, but it also reproduces what Phillipson (1992) famously termed linguistic imperialism: the dominance of English as a global language that marginalises others.
Canagarajah (1999) argued for resisting this dominance in classrooms by valuing linguistic diversity and legitimising local forms of English. Yet with LLMs, the pressure intensifies in the opposite direction. English becomes the default medium through which students must access "global" knowledge. A Kazakh-speaking student querying AI will often find that English responses are richer, more accurate, and more authoritative than those in Kazakh. This not only reinforces the symbolic power of English but risks positioning local languages as inadequate for academic engagement. Over time, it erodes the status of native and local languages as sites of legitimate academic debate.
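How stark this imbalance is can be made concrete. The short Python sketch below is a minimal illustration, assuming the open-source Hugging Face transformers library; the GPT-2 tokeniser stands in for the English-centric tokenisers behind many LLMs, and the Kazakh sentence is an illustrative translation. It counts how many tokens the same sentence consumes in each language: because the tokeniser was trained largely on English text, the Kazakh version fragments into far more tokens, a small but telling trace of the data imbalance described above.

from transformers import AutoTokenizer

# GPT-2's tokeniser was trained overwhelmingly on English text, so it
# serves here as a stand-in for the English-centric tokenisers used by
# many commercial LLMs.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# The same sentence in English and Kazakh (the Kazakh is an
# illustrative translation).
sentences = {
    "English": "Climate change threatens communities around the world.",
    "Kazakh": "Климаттың өзгеруі бүкіл әлемдегі қауымдастықтарға қатер төндіреді.",
}

for language, text in sentences.items():
    tokens = tokenizer.tokenize(text)
    # Under-represented languages fragment into many more tokens, which
    # in practice means less usable context, higher cost, and weaker fluency.
    print(f"{language}: {len(tokens)} tokens for {len(text)} characters")

Running such a comparison with students makes the politics of tokenisation tangible: the model literally has less room to "think" in Kazakh than in English.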
This is not a mere technical inconvenience; it is a political issue. By privileging certain languages, AI consolidates linguistic hierarchies that undermine the plurilingual goals of GCE. Instead of fostering dialogue across diverse linguistic and cultural traditions, AI risks accelerating homogenisation. The very plurality that should be central to global education is threatened by a monolingual logic embedded in the algorithm.
There is another challenge, less obvious but no less formidable: the manipulative forces of AI and their effects on the rhetoric of global citizenship. Generative AI does not merely provide access to knowledge; it shapes, individualises, and persuades. Three processes are at stake here.
First, LLMs can construct synthetic but convincing content. At scale, synthetic text, images, and videos can be generated to support disinformation or distorted histories. For students in source-scarce classrooms, synthetic products may be mistaken for genuine evidence. This jeopardises the epistemic trust GCE requires: if students cannot distinguish authentic from synthetic sources, their critical consideration of global challenges is undermined.
Second, AI systems personalise information streams. Recommendation systems tailor AI responses to individual students and shape the way knowledge is transmitted. Couldry and Mejias (2019) note how data extraction supports persuasion at scale. For education, this means GCE may be quietly influenced: students are presented with algorithmically tailored constructions of climate justice, immigration, or citizenship that align with mainstream paradigms and sideline other interpretations. For an endeavour like GCE, built on pluralism and liberation, such narrowing is antithetical.
Third, AI is embedded in attention economies designed to maximise engagement. As Zuboff (2019) argues, surveillance capitalism treats attention as a commodity to be extracted and monetised. Generative AI paired with these logics will favour emotive or simplistic storytelling, incentivising superficial rather than reflective analysis. GCE requires slow consideration of structural injustice, colonial inheritance, and systemic inequity: exactly the forms of engagement least compatible with attention-optimising architectures.
Additionally, manipulation can occur through the authority AI systems are perceived to hold. When asked to define "global citizenship", LLMs tend to emphasise liberal values of tolerance, sustainability, and intercultural dialogue. These values are not flawed in themselves, but they represent a thin strand of GCE discourse. Broader critical framings, such as those emphasising colonial histories or global injustices, are far less visible. Andreotti (2011) warned of the dominance of "soft" global citizenship that shies away from critical examination of structures. By prioritising such framings, AI quietly reconstitutes GCE discourse, rendering liberal cosmopolitanism natural and universal.
The risk here is not so much distortion as institutionalisation. Teachers, tempted by ease and efficiency, may adopt AI-constructed lesson plans or summaries, unwittingly imprinting algorithmically curated narratives in curricula. Slowly, GCE may be reconfigured by AI in ways responsive to corporate and policymaker discourse rather than emancipatory pedagogy. Benjamin's (2019) idea of the New Jim Code highlights how technologies can disguise themselves as neutral while encoding inequality. In GCE, this apparent neutrality masks the possibility that AI is steering learners toward certain conceptions of citizenship while leaving others out of view.
If generative AI risks the narrowing of knowledge and the privileging of English, then reimagining its role in GCE requires a deliberate politics of resistance and reconstruction. The problems are not accidental but structural: they stem from how AI is built, trained, and deployed. Addressing them means engaging with questions of power, not just technology. Four areas of intervention offer starting points for decolonising the algorithm and preserving the transformative ambitions of GCE: epistemic justice, linguistic plurality, data sovereignty, and critical AI pedagogy.
To counter epistemic exclusion, AI in education must be restructured around principles of epistemic justice. Fricker (2007) defines this as ensuring that all knowers are recognised as credible and that their contributions are valued. In practical terms, this means developing AI systems that integrate knowledge from Indigenous, Southern, and marginalised traditions rather than defaulting to Western epistemologies. Santos (2014) argues for ecologies of knowledge, where multiple epistemologies coexist without hierarchy.
For AI, this could involve creating regionally grounded datasets curated in collaboration with local communities, training LLMs on oral histories and cultural archives, and ensuring that Indigenous epistemologies are not tokenised but genuinely integrated. Mohamed, Png, and Isaac (2020) describe this as decolonial AI: building systems that reflect diverse ways of knowing and distribute epistemic authority more equitably. In GCE, this would mean students encountering not only liberal Western framings of democracy or climate change but also Indigenous perspectives on stewardship, communal governance, and interdependence. Rather than narrowing global knowledge, AI could expand it by surfacing epistemologies long excluded.
If LLMs reinforce linguistic imperialism, then a second area of intervention is linguistic plurality. Phillipson (1992) warned that English dominance reproduces inequality, while Canagarajah (1999) urged educators to validate local varieties and sustain multilingual practices. UNESCO (2023) echoes this call, emphasising that the future of equitable education must be multilingual.
For AI, this means large-scale investment in language processing for underrepresented and minority languages, especially those from the Global South. Governments and institutions need to fund open-source research into the development of models that support local languages. A recent example is the ISSAI KAZ-LLM in Kazakhstan (Nakispekova, 2024), an LLM developed through a collaboration between Nazarbayev University and the Kazakh government. This has already spurred further investment in the country, including the creation of the new Ministry of Artificial Intelligence (Omirgazy, 2025). While this does not mean that linguistic inequalities will disappear, it is certainly a step in the right direction against the dominance of English. Such efforts are critical if GCE is to harness the power of AI in ways that sustain cultural and linguistic plurality rather than homogenisation.
Pedagogy will also need to accompany these shifts in infrastructure. Teachers can use AI as a site of comparative analysis, asking students to compose their answers in several languages and reflect on the differences in nuance, precision, and cultural framing. This not only reinforces critical media literacy but develops an awareness of how worldviews are constructed through language. In this way, AI itself becomes a catalyst for linguistic agency: learners become active agents in maintaining linguistic diversity rather than mere consumers of English-language knowledge. This is no secondary consideration for GCE but a guiding principle: unless pedagogies are built to empower multilingual participation, the promise of global citizenship risks remaining a monolingual, Eurocentric script.
Finally, the manipulative powers of AI require a pedagogical response. If algorithms can create content, individualise persuasion, and construct narratives of cosmopolitan citizenship, then students must be equipped not only to use AI but to interrogate it. Here, Freire's (1974) concept of conscientisation, raising critical awareness of oppression, can be brought to bear. GCE must include algorithmic literacy as a fundamental component, teaching students to question how outputs are generated, whose interests are served, and whose voices are omitted.
Andreotti (2011) warns of "soft" GCE that avoids structural critique. Critical AI pedagogy offers an antidote by inviting students to consider how definitions of "global citizenship" differ across AI outputs, how narratives are assembled, and how manipulation is induced by personalisation and attention economies. Eaton (2023), writing on academic integrity in the age of AI, notes the necessity of teachers explicitly guiding students in critically evaluating AI-generated sources. In practice, this might entail an institutional rethink of assignments, where students are invited to set AI outputs and scholarly sources side by side, or even an overhaul of the essay entirely.
If generative AI systems are capable of fabricating content, personalising persuasion, and shaping attention, then education cannot afford to treat them as neutral tools. Learners require a pedagogy that explicitly addresses the political, social, and epistemic dimensions of these technologies. A critical AI pedagogy begins from the recognition that algorithms are not simply technical artefacts but cultural and ideological actors that mediate knowledge, identity, and citizenship.
Building on Freire’s notion of conscientisation, such pedagogy equips learners to question how AI systems are designed, whose interests they serve, and which voices they silence (Freire, 1970/2018). Rather than treating AI outputs as objective knowledge, students are encouraged to interrogate the conditions of their production: What data trained the model? Which perspectives are included or excluded? How do outputs shift across languages, prompts, or platforms? These practices move AI from the background of learning into an object of study.
Classroom strategies might include critically comparing AI-generated responses with peer-reviewed scholarship, generating content in multiple languages to analyse biases in representation, or unpacking the persuasive strategies embedded in synthetic media. Transparency and accountability principles from AI governance (Whittaker et al., 2018; UNESCO, 2023) can be translated into pedagogy by requiring students to reflect on the limits and politics of AI tools. In this way, AI becomes not only a means of accessing information but also a site for cultivating digital, linguistic, and epistemic literacies.
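As one concrete starting point, the sketch below shows how a teacher might set up the multilingual comparison exercise just described. It is a minimal illustration, assuming access to the OpenAI Python client and an API key; the model name and the Kazakh translation are illustrative, and any chat-completion interface would serve equally well.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The same question posed in two languages (the Kazakh is an
# illustrative translation).
prompts = {
    "English": "What does global citizenship mean?",
    "Kazakh": "Жаһандық азаматтық дегеніміз не?",
}

for language, question in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    print(f"--- {language} ---")
    print(response.choices[0].message.content)

Students can then annotate the two answers side by side: Which traditions and thinkers are cited? Which framings of citizenship dominate in each language? Where does the model hedge, and where does it speak with unearned authority?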
For global citizenship education, the stakes are especially high. Without critical AI pedagogy, learners risk internalising algorithmically curated framings of “global citizenship” that privilege liberal cosmopolitanism while erasing colonial histories and structural inequities (Andreotti, 2011; Benjamin, 2019). By contrast, a critical approach treats AI as a contested terrain: one where learners develop the capacity to resist manipulation, recognise power asymmetries, and imagine alternative, plural visions of global citizenship.
LLMs present a paradox for GCE. On one hand, they promise unprecedented access to knowledge, transcending geographical and economic borders. On the other, they re-entrench colonial hierarchies through epistemic exclusion, linguistic imperialism, and data extraction, and risk depoliticising global citizenship education. Left unchecked, they threaten to turn GCE into a Eurocentric enterprise that celebrates diversity at the surface while repressing plurality at the core.
This outcome is not inevitable. Reimagining AI through the lens of critical GCE (Andreotti, 2011) means addressing the absence of marginalised knowledge, supporting multilingual infrastructures, and cultivating students’ critical capacity to interrogate technologies. The task is not simply to mitigate AI’s harms but to reframe GCE as an initiative of justice, solidarity, and empowerment. Only by confronting these dynamics of power can global education in the age of AI remain truly emancipatory.
Michael Jones is a Lecturer in Writing and Communication at Nazarbayev University in Kazakhstan. He is also currently studying for his PhD in Education and Social Justice at Lancaster University. His research focuses on the intersection of AI and higher education.