Tackling Generative AI's Bias Through a Global Lens
- Rick Grammatica
- Jul 24
Updated: Jul 25
As GenAI tools become embedded in education, we need to look closely at the cultural assumptions they carry and whose knowledge they amplify or ignore. The examples, assumptions and sources built into many AI outputs tend to reflect a narrow cultural lens, marginalising other ways of knowing. In a recent BERA article co-authored with Richard Holme, we explored how this affects academic research and integrity. Here, I critically examine how GenAI’s bias impacts students and offer practical strategies to help you recognise and address it in curriculum and course design.

Image source: Unsplash (by Christine Roy)
The Western-centric lens of GenAI
Most of today’s GenAI models have been trained on data produced by Western, English-speaking authors, institutions and platforms. This heavily skews their outputs towards Euro-American norms, whether in the choice of examples, the framing of concepts, or the knowledge and perspectives that are prioritised. Even when GenAI tools appear to “speak” other languages, they are still largely “thinking” in English. As Argentine AI researcher Luciana Benotti discusses, most publicly available models align with “white, college-educated, native English speakers from the Northern Hemisphere” (Otero, 2024). This has implications for both the visibility of diverse knowledge and the credibility of the AI’s responses in today’s global classrooms.
Language bias is only one part of a broader cultural framing, however. Bali (2024a) shares a compelling example: when asked to draw a “hospital,” the QuickDraw tool suggested a cross symbol, even in its Arabic version, ignoring the Red Crescent symbol widely used across the Arab world. Such outputs signal a deeper issue with the assumptions built into these tools and what is treated as ‘default’ or ‘universal’.
This issue is not limited to GenAI, though. Bias is already embedded in many aspects of higher education. Multiple studies (e.g. Arday, Belluigi and Thomas, 2020; Day et al., 2022) have shown that curricula across disciplines often centre white, male, Euro-American theorists and authors, making it harder for other voices to be heard. This reflects long-standing epistemic hierarchies, in which certain ways of knowing (such as Indigenous or global south knowledge systems) are systematically underrepresented or dismissed. Scholars like de Sousa Santos (2014) have long argued that these hierarchies are the legacy of colonial knowledge regimes that continue to shape whose knowledge is valued in education. In this context, GenAI doesn’t introduce a new problem; rather, it risks amplifying an old one by reproducing the same patterns at speed and scale. These positionalities, both ours and those embedded in our tools, have real consequences. When students don’t see their cultures, histories or intellectual traditions reflected in course materials or AI-generated content, it can affect both engagement and belonging.
Impact on students
When GenAI systems are used in educational settings, their limited knowledge of local figures, events or literature beyond the Anglosphere can lead to omissions, inaccuracies, or distortions. Maha Bali (2024a) describes how ChatGPT's responses to questions about Israel and Palestine show different levels of balance and positionality. In another case, Bali (2024a) found that a GenAI tool offered an image of the American boxer Muhammad Ali when asked about Muhammad Ali Pasha, the 19th-century Egyptian leader. These kinds of errors suggest to students that AI, and by extension, the curriculum or institution using it, doesn’t fully ‘see’ or understand their contexts.
This lack of recognition can influence how students perceive both the content and their place within it. But this is not only an AI issue. A UK business school audit found that “almost every author on our reading lists was from the global north,” prompting a deliberate intervention to diversify the curriculum (The University of Manchester: Alliance Manchester Business School, 2024). Without such efforts, dominant voices and perspectives, whether in course materials, institutional policies or the technologies we adopt, remain unchallenged and continue to shape what counts as legitimate knowledge.
Students themselves may be more attuned to these gaps than their tutors or institutions. In Jisc’s 2025 survey of UK higher education learners, many voiced concerns about bias in AI tools, warning that they could “embed prejudices and stereotypes” and subtly skew research if used without care. Some expressed fears that over-reliance on GenAI would “reinforce conventional thinking” and flatten out intellectual diversity. These concerns are a clear call to action.
Strategies to minimise GenAI bias
Addressing bias should be part of a wider effort to design learning environments that are inclusive, critical and representative of diverse ways of knowing and thinking. Here are a few ways you can start adopting this approach.
Teach critical AI literacy
Just as we teach students to evaluate sources or question assumptions in academic texts, we also need to help them critically engage with GenAI outputs. This means demystifying how tools like ChatGPT work, where their data comes from, and whose voices might be missing. For example, you might explain that GenAI doesn’t “know” facts; it generates content based on patterns in its training data. You could show real examples of bias (both with and without AI) to help students recognise that neither they nor GenAI tools are neutral. Building this kind of critical AI literacy encourages learners to question, challenge and supplement what GenAI offers rather than accepting it at face value.
Helping students recognise bias in GenAI can also be a starting point for wider critical reflection. If we ask them to question AI outputs, why not also invite them to question the assumptions in our reading lists, theories, and teaching practices? GenAI might actually offer a less intimidating starting point for critique, especially for students who are hesitant to challenge authority in class. Embedding a more open and critical dialogue into your courses can create space for students to critically engage not only with technology, but with the structures of knowledge that shape their education.
Diversify reading lists and resources
One practical way to address bias in educational design is to review and diversify the reading lists, case studies and resources that underpin your course. This includes considering whose voices, knowledge systems and experiences are represented, and whose are missing. After auditing their curriculum and finding that almost every author on their reading lists was from the global north, the University of Manchester’s Alliance Manchester Business School introduced targets for 30% of authors to come from diverse ethnic backgrounds and for half of the authors on those lists to be women. Bird (2022) outlines practical steps for reviewing and rebalancing course materials, including:
- Using a framework such as the Decolonising SOAS Working Group’s (2018) toolkit
- Involving both staff and students in reviewing lists to critically consider appropriate diversification
- Reflecting carefully on ethical considerations, such as how authors’ identities are interpreted or represented.
GenAI can support this work when used critically. Rather than relying on the tool’s default responses, educators can prompt it deliberately to surface underrepresented voices or region-specific scholarship. For example: “Suggest key thinkers on environmental justice from Latin America” or “List influential female economists from a variety of cultures.” Using GenAI’s web search or research capabilities can also help you find current, verifiable sources that don’t just rely on its training data.
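To make this concrete, here is a minimal sketch of issuing such a deliberate prompt programmatically. It assumes the OpenAI Python SDK and an API key in the environment; the model name, system message and prompt wording are illustrative assumptions on my part, not recommendations from the sources above.

```python
# A minimal sketch of deliberate prompting, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# State the diversification goal explicitly, rather than relying on the
# model's default (often Western-centric) framing.
system_prompt = (
    "You are helping an educator diversify a university reading list. "
    "Prioritise scholars from outside the global north, name the region "
    "and language of each suggestion, and flag any you are unsure about "
    "so they can be verified against library databases."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model your institution licenses
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Suggest key thinkers on environmental "
                                    "justice from Latin America."},
    ],
)

# Treat the output as a starting point for verification, not a source list.
print(response.choices[0].message.content)
```

The same framing works just as well typed directly into a chat interface; the point is to name the gap you want filled rather than accepting the default response.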
Still, there are legitimate concerns about whether these tools should be used at all in some educational settings, given how they rely on large-scale data harvesting, consume significant energy, and risk reproducing systemic inequalities. Some models are now being developed with a more global and multilingual focus (such as the open-source BLOOM model, trained in 46 languages) which may represent a small but important step toward more equitable and representative AI. Used in a more deliberate way, GenAI can assist with identifying new material, translating non-English sources, and broadening the pool of case studies and scholarly perspectives students encounter.
Embed local and multiple contexts
When using GenAI to create teaching materials, such as examples, quiz questions or case studies, it’s important to be intentional about context. Rather than accepting default responses, try specifying the cultural or regional lens you want it to use, e.g. “Write a marketing case study set in an Asian business environment.” This approach guides GenAI to draw from a more diverse range of scenarios, which it may not do automatically. Always review outputs critically, and consider asking the tool to explain or justify its choices. This not only helps minimise bias but also models for students how to approach AI outputs more reflectively.
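As a sketch of what this intentional prompting can look like in practice, the hypothetical template below generates the same exercise through several cultural lenses and asks the model to justify its setting; the template text and example contexts are illustrative assumptions, not a tested recipe.

```python
# A hypothetical prompt template for context-aware material generation.
# The wording and contexts are illustrative; adapt them to your discipline.
from string import Template

CASE_STUDY_PROMPT = Template(
    "Write a short $subject case study set in $context. "
    "Use local names, institutions and currency where appropriate. "
    "Afterwards, briefly explain why you chose this setting and which "
    "assumptions a reviewer should double-check."
)

# Generating the same exercise through several cultural lenses makes the
# model's defaults visible by comparison.
for context in ["a West African business environment",
                "a Southeast Asian business environment",
                "a Scandinavian business environment"]:
    print(CASE_STUDY_PROMPT.substitute(subject="marketing", context=context))
    print("---")
```

Comparing the resulting outputs side by side, with students if possible, is itself a useful exercise in spotting what the tool treats as ‘universal’.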
Build dialogue with students
Perhaps the most powerful way to address bias in GenAI, and in education more broadly, is to create open dialogue with students. Rather than positioning AI policy or classroom use as something handed down, educators can co-create norms and practices with their learners. The 2025 Jisc report found that students don’t just want to be told how AI will be used; they want to help shape those decisions, with many saying they would welcome opportunities to co-design modules involving AI. This could start with a simple discussion at the beginning of term: what experiences have students had with GenAI? Have they noticed cultural or gender bias in the responses? How do they feel it could support or undermine their learning?
Going further, students can be involved in auditing materials or testing AI tools across different cultural and linguistic contexts, helping to surface blind spots that educators may not see. These conversations build critical digital literacy, but they also validate students’ lived experience and cultural knowledge. As Bali (2024b) argues, learners should play an active role in shaping how GenAI is used in the classroom, making space to question dominant narratives as part of the learning process. This kind of collaboration models inclusive practice by reinforcing that identifying bias and expanding perspectives is a shared responsibility, not the educator’s alone.
Towards a more inclusive AI-era education
GenAI is not a neutral tool; it reflects and reproduces the cultural, linguistic and epistemological assumptions embedded in its training data. Rather than merely viewing these biases as technical flaws, educators should examine how such tools intersect with the structural inequalities already present in higher education. Addressing the positionality of GenAI prompts us to reflect more critically on our own practices: whose knowledge is centred, which perspectives are excluded, and how students are positioned within these dynamics. By designing more inclusive courses and working in partnership with students, we can foster learning environments that engage with AI more critically and constructively. At an institutional level, these practices can be scaled through curriculum audits, teaching enhancement initiatives and staff development programmes that embed inclusive, critical uses of AI across educational design. With this approach, GenAI can support deeper reflection on equity, representation and the reshaping of knowledge in contemporary education.
Reference list
Arday, J., Belluigi, D., & Thomas, D. (2020). Attempting to break the chain: reimagining inclusive pedagogy and decolonising the curriculum within the academy. Educational Philosophy and Theory, 53(3), 298–313. https://doi.org/10.1080/00131857.2020.1773257
Bali, M. (2024a). Where are the crescents in AI? LSE Higher Education Blog. https://blogs.lse.ac.uk/highereducation/2024/02/26/where-are-the-crescents-in-ai/
Bali, M. (2024b). A compassionate approach to AI in education. Knowledge Maze. https://knowledgemaze.wordpress.com/2024/04/29/a-compassionate-approach-to-ai-in-education/
BigScience. (n.d.). Introducing The World’s Largest Open Multilingual Language Model: BLOOM. https://bigscience.huggingface.co/blog/bloom
Bird, K. (2022). How ‘diverse’ is your reading list? Tools, tips, and challenges. In A. Day, L. Lee, D. S. P. Thomas, & J. V. Spickard (Eds.), Diversity, inclusion, and decolonization: Practical tools for improving teaching, research, and scholarship (pp. 97–109). Bristol University Press.
Day, A., Lee, L., Thomas, D. S. P., & Spickard, J. V. (Eds.). (2022). Diversity, inclusion, and decolonization: Practical tools for improving teaching, research, and scholarship. Bristol University Press.
de Sousa Santos, B. (2014). Epistemologies of the South: Justice against epistemicide. Routledge.
Decolonising SOAS Working Group. (2018). Decolonising SOAS learning and teaching toolkit for programme and module convenors. SOAS University of London. https://blogs.soas.ac.uk/decolonisingsoas/files/2018/10/Decolonising-SOAS-Learning-and-Teaching-Toolkit-AB.pdf
Holme, R., & Grammatica, R. (2024). Generative AI in the academy: Balancing innovation with integrity in research. BERA Blog. https://www.bera.ac.uk/blog/generative-ai-in-the-academy-balancing-innovation-with-integrity-in-research
Jisc. (2025). Student perceptions of AI 2025. https://www.jisc.ac.uk/reports/student-perceptions-of-ai-2025
Otero, M. (2024). Luciana Benotti, computational linguistics expert: ‘Data extraction for AI is a new form of colonization’. El País. https://english.elpais.com/technology/2024-01-25/luciana-benotti-computational-linguistics-expert-data-extraction-for-ai-is-a-new-form-of-colonization.html
The University of Manchester: Alliance Manchester Business School. (2024). Improving the diversity and inclusion of reading lists. https://www.alliancembs.manchester.ac.uk/news/improving-the-diversity-and-inclusion-of-reading-lists/
Author positionality statement
I write this piece as a white British male working in higher education, though my thinking is also shaped by experiences of living and working in diverse international contexts. I recognise that I speak from a position of relative privilege, and I aim to contribute to ongoing conversations of making educational design and AI implementation more equitable, inclusive and critically informed.
ChatGPT was used to help write parts of this article and to edit the first draft. A ‘human in the loop’ approach was adopted: I drew on extensive prior reading around the topic, and all AI-generated citations were reviewed.