13 Rude Reflections: Current AI Acts as a Mirror of Our Flawed Society

Belinda ‘Ofakihevahanoa Fotu

Abstract 

In face-to-face classrooms, culturally and critically conscious teachers strive to mediate skills, content, and students’ backgrounds within the bounds of a systemically flawed educational institution. For now, AI writing aids bypass that kind of meaningful work: they reflect a corpus of published literature that overwhelmingly carries historically problematic hierarchical biases.

Keywords: AI ethics, Lord of the Flies, secondary education, reproducing racism, autoethnography

 

The organic nature of a face-to-face classroom has many immediate benefits, including the ability to adjust curriculum to better fit the context of students in a specific learning space. We can read books like Lord of the Flies and adjust the knowledge construction around the text to better serve a class of diverse learners. This matters because William Golding was explicitly racist and Eurocentric in his renowned novel – but under a culturally sustaining pedagogical framework that affirms the validity of students in their own cultures, Golding’s failings can be pointed out honestly, contextualized, and then pointedly rejected as a class community. This works well in a face-to-face classroom, where knowledge creation is shared among students who can physically and emotionally sense each other’s space and more easily empathize with one another when they are only desks apart.

When ChatGPT came out in November 2022, three or four students in the high school English courses I teach used these writing aids in a crunch to carry their term papers rather than building from the work they had done after the discussions held every other class period. AIs pull from the aggregated writing that already exists on the internet. Anything ever written about Golding’s Lord of the Flies was gathered, and the literary mean of that combination was presented by the AI to those students as a quick way of completing their end-of-unit assignment. While this is obviously problematic for learning, another layer became apparent as I read these AI-generated essays: they were nauseatingly racist.

The aggregated works of earlier scholars, people whose racial and economic positionalities gave them the best access to academia in the past, are riddled with problematic Eurocentric paradigms. To say that Indigenous peoples were disparaged by these past scholars would be an understatement – and the publications of these scholars populate the cesspool from which AI pulls its ideas and phrasing. AI is essentially a reflection of who we have allowed to have a voice in the past, and that voice was built from, and still draws heavily on, systems of hierarchy that use academia to justify their positions of power and stratification.

I remember pulling a couple of the students aside separately during quiet writing time in class and reading to them, from their obviously AI-generated papers, the more-than-problematic phrases that disparaged and demeaned Indigenous cultures. The students, all well-meaning young men who had not read through their AI-generated essays, stared in horror and stumbled through explanations and apologies.

In class, we had been very specific about the violence Golding committed toward Indigenous cultures through his word and idea choices in the novel, and yet here, in their final essays, they had unwittingly let AI amplify that same kind of bigotry.

In all three conversations, there was no need for a lengthy discussion or explanation of why these essays would not receive points. The young men were stunned and embarrassed by what they had apparently submitted. I offered them the chance to rewrite for the possibility of full points, and they all accepted readily.

Racism is intricately tied to the tools of AI because AI merely reflects which voices have been historically validated, and those stratifications are riddled with the implications and consequences of racial and cultural hierarchies.

 

Questions to Guide Reflection and Discussion

  • How does the use of AI in academic settings reflect and potentially perpetuate existing societal biases?
  • Reflect on the ethical responsibilities of educators in correcting AI-generated errors that contain biases. How can they educate students about these issues?
  • Consider the broader implications of AI’s mirror effect on society. What measures can be implemented to ensure a more equitable AI?


About the author

Belinda ‘Ofakihevahanoa Fotu (‘Ofa) has been a high school English teacher for twelve years, primarily teaching sophomore students Language Arts and senior students College Writing for their UVU G.E. college credit. ‘Ofa has a B.A. in English Teaching from BYU and an M.Ed. from SUU, where her thesis research was on building critical consciousness into writing curriculum to improve student writing skills. She is currently working on her dissertation for a Ph.D. in Education from USU with a concentration in Cultural Studies. Her research centers the Tongan American diaspora and generational cultural knowledge transfer.

License

Teaching and Generative AI Copyright © 2024 by Utah State University is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.