19 Wrestling with A.I.

Catherine J. Denial

Abstract

Generative AI has been sold to us at speed, promising quick resolutions to writing problems for students and demanding nimble responses from faculty. This essay suggests that, instead of surrendering to a manufactured sense of urgency, we take the time to fully grapple with the meaning of generative AI and to respond to its challenges. By thinking through the ethical dimensions of AI in the classroom, and by gradually changing our assessment practices, we place humans back at the center of our common educational experiences.

Keywords: ethical considerations, pedagogy, generative AI, digital literacy, environmental impact

 

I am sure I am not the only educator who wilted a little as they learned about ChatGPT in early 2023. After three years of pandemic instruction, in a variety of modalities, with our institutions demonstrating varying degrees of respect for public health, it felt (at best) exhausting to have circumstances demand we rethink our pedagogies once again to factor in generative AI. It was also tempting to rush to one stark choice or another–to ban ChatGPT and its ilk, or to permit it in every instance–if for no other reason than to feel some sense of clarity amid another period of rapid change. But in giving myself time to wrestle with the nuances of ed tech over the summer, I realized that I needed to give my students the same opportunity I had given myself: time. So much about generative AI has been sold to us at speed, promising quick resolutions to writing problems for students and demanding urgent responses from faculty. I wanted to slow things down, and to offer students the opportunity to weigh the pros and cons of AI use so that they could make critical, informed decisions about how it would shape their educational experience.

I made trust my starting point. Academia can easily socialize us into positions of antagonism with regard to our students–the idea that students will lie, cheat, ignore homework, and try to bargain for better grades is deeply embedded in the culture that surrounds us. But our classrooms are not packed wall-to-wall with students who have nefarious intentions, and when we design our course policies and practices to presume that they do, we communicate distrust to every student in the room. It was important to me that I settle on an approach to teaching the questions surrounding AI that did not suggest I was waiting for every student to screw up. To do so would communicate suspicion and invite suspicion back–hardly the makings of a positive or generative classroom environment.

I also committed to being transparent about the pedagogical reasoning that informed my decisions. It is possible for us to make choices about ChatGPT, for example, such as banning it completely and imposing penalties for its use, if we can explain what that policy achieves in positive terms. What conditions does that create that are useful for students? In our experience, how does that policy help students achieve their goals? We could take other positions on AI, too, but the common thread should be our ability to explain the design of our courses, and the meaningful support we will provide in developing writing skills, for example, or doing research, or figuring out equations. (This is a good rule of thumb for all our instructional choices–can we explain why they exist in terms that demonstrate engagement with our students’ needs, rather than relying on the academic equivalent of “because I said so”?)

Taking the time to fully grapple with generative AI meant planning to set aside part of each class period over several days to make room for the critical thinking and discussions I wanted to facilitate. It was important to me that we wrestle with:

  1. Labor practices. There are tremendous human costs associated with the development and maintenance of AI products. “To teach Bard, Bing or ChatGPT to recognize prompts that would generate harmful materials, algorithms must be fed examples of hate speech, violence and sexual abuse,” writes Niamh Rowe in The Guardian, reporting that many Kenyan workers who reviewed such passages were left traumatized. Workers in the United States also reported being overworked and drastically underpaid for their labor–labor without which AI models would lose efficacy. (Rowe, 2023; Wong, 2023.)
  2. Environmental factors. AI needs water to generate the electricity that powers servers, and water to cool them. The ethical considerations are enormous when we consider global water shortages, climate change, and profit motives. (Sankaran, 2023.)
  3. How Large Language Models (LLMs) actually work, alongside important early critiques of those models. ChatGPT and other similar products do not generate knowledge, but instead work by means of sophisticated predictive text operations. By assigning readings that make this distinction clear, we can suggest reasonable limits to the usefulness of generative AI. (Riedl, 2023; Bjarnason, 2023; O’Neil, 2023.)
  4. Access. Products like ChatGPT are rarely designed with disabled users in mind, meaning whatever benefits a given LLM might offer are inequitably distributed across our campuses. How does (and should) this shape their use? (Waruingi, 2023; EDF, 2022; Henneborn, 2023.)
  5. Data Mining and Privacy. It’s important that students know what happens to the data that they provide to AI systems. By using a series of learning units provided by the University of Mary Washington to help students become thoughtful consumers of web content, I hoped to generate critical conversations about privacy. (Burgess, 2023; UMW, n.d.)

I teach at a school with a trimester system, and during winter term, 2024, I assigned a selection of these texts to students in a 100-level history class (25 students in all).[1] After each reading, students engaged in an ungraded, online reflection about what they’d read as preparation for a class discussion on the topic. It was important that the reflections were ungraded–I did not have the time required to respond to every one individually–but also important that there was a feedback loop between the reflections and what we discussed in class. I offered the same four reflective prompts for each reading:

  1. What new things did you learn from the reading?
  2. What, from this reading, do you think it’s important we talk through as a class?
  3. What left you confused? What questions do you have?
  4. Is there anything else you’d like to add?

I then drew from students’ responses to plan each day’s class period, highlighting the broad themes upon which they had touched. We discussed generative AI as a whole class, but the exercise could easily have been turned over to small group conversation too.

The students’ reflections demonstrated that they were aware that generative AI existed, but that they were surprised to learn about the labor and environmental issues related to its use. “I have never heard about this before,” wrote Wendy. (All student names are pseudonyms; every student quoted gave permission to be included.) “I did not know anything about this topic,” wrote Emily. “Thank you for bringing this issue to my attention! I didn’t know anything about it!” wrote Dakota. “I honestly didn’t know there were human workers behind AI,” wrote Mel. “I hadn’t even thought about how there is labor behind AI. I always thought that computers put data into other computers,” wrote Ally. “I didn’t realize how many humans were behind allowing AI to function. Not to mention traumatized, underage workers. It really sucks,” wrote Alex. “Thank you for having us learn about this! I’m really glad that I know about it now,” wrote Vic.

Our conversations caused many students to reflect on the ethics of using generative AI. “I think it’s important that we go beyond the conversations about academic integrity surrounding ChatGPT to address the effects that AI is having on folks, including children, in the global south and think about why this is not a bigger part of the conversation around the ethics of AI,” wrote Dakota. Jordan concurred. “Most of the time we only talk about AI in terms of academic integrity (which is important) but this information frames it in a new way,” they wrote. “I knew Ai was sketchy and I have major issues with it, but I didn’t even think about the people on the other side of it,” offered Rachel. “As a society I feel like we never care about what goes on behind closed doors, instead we are content with the shiny new toy and want to see what it can do, and leave the rest for someone else to worry about,” said Jean. Alex summed things up nicely. “We are asking the wrong questions: ‘Can we do this’ instead of ‘should we.’”

To supplement class discussions and personal reflections, I also had students work with the predictive text on their phones. I asked students to write the history of yesterday using only predictive text, and we shared the results to hilarious effect. The stories were, predictably, bland, vague, and often absurd, which I linked back to the assigned reading about how LLMs work. This is what generative AI does, I suggested – it makes educated guesses about which words will come next in any given phrase. This offered a great segue into a discussion of whether generative AI can be useful with and/or without editing, and what judicious editing might look like. (Wieck, 2023.)
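For colleagues who want to show, rather than just tell, how this next-word guessing works, a toy model is easy to demonstrate. The Python sketch below, built on an invented miniature “diary” corpus, simply counts which word most often follows each word and then chains those guesses together: the same basic move a phone keyboard makes, and a deliberately crude stand-in for the far more sophisticated statistics inside an LLM. It is a classroom illustration under those simplifying assumptions, not a description of how ChatGPT is actually implemented.

from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def generate(following, start, length=10):
    """Repeatedly append the most common next word: an 'educated guess.'"""
    word, output = start, [start]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        output.append(word)
    return " ".join(output)

# A made-up corpus standing in for a student's "history of yesterday."
corpus = (
    "yesterday i went to class and then i went to the library "
    "and then i went home and i made dinner and then i slept"
)

print(generate(train_bigrams(corpus), "i"))
# Prints: "i went to class and then i went to class and"
# (fluent-sounding, repetitive, and indifferent to what actually happened).

Even this crude model produces the fluent-but-empty loops students see on their phones, which makes the limits of pure next-word prediction concrete and easy to discuss.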

If a series of reflective prompts or discussions is not possible in your class because of size or limited access to technology, there are other, scalable ways to have students process this information.

  1. Gather student reactions to key readings through polls. By carefully crafting questions that lend themselves to multiple choice answers, we can–even in classes with hundreds of students–take the temperature of the room on issues related to generative AI and see the results in graph or pie-chart form.
  2. Have students write their own position statements on AI use. Just as we need to be transparent about our pedagogical choices, it’s incredibly useful to have students do the metacognitive work to articulate their position on generative AI. Will they use it? In what ways? To achieve what ends? If they won’t use it, what has shaped that decision? Has learning about the larger ethical framework for AI had an impact upon their thinking? Writing one version of this on the first day of class and another after discussing relevant readings is a particularly useful way of tracing growth.

Beyond engaging students in direct discussion about the ethical implications of generative AI, I also revisited what assessment might look like in the ChatGPT moment. Few of us have had the time to rethink all our assessment choices with AI in the mix–it’s become one more thing to juggle when educators at large are already tapped out. I chose not to burn down my assessment frameworks immediately or completely. Instead, I looked critically at the assignments and class periods I had and asked myself “what’s one thing I could do?” I changed some assignments to emphasize drafting and redrafting, with peer feedback built in to keep my own workload manageable. I dedicated some class periods to co-working, providing accountability for students as they worked on their assignments and an opportunity for them to check in and ask questions as I moved around the room. I also met with students one-on-one (a practice that would work in small groups, too) to talk about early drafts before a final product was turned in. If I had had TAs, I would also have considered how to support them in doing this work.

Entering into conversation with my students–through class discussion, through polls, through forms, and through their written work–about the many facets of their becoming a generative AI generation has been rewarding. Whatever time this took away from the content I would usually cover in class, we were still actively working on the critical thinking and analytical skills that I always hope will have the greatest lasting impact in a student’s life. It is those skills that will help students navigate not just this change to the educational landscape but all those that are yet to come. And as AI speeds up the pace of change in multiple areas of our lives, it feels incredibly and importantly human to insist that some of our best thinking comes when we carefully, thoughtfully slow things down.

 

Questions to Guide Reflection and Discussion

  • The text discusses the ethical considerations of using generative AI in education. What ethical dilemmas might educators and students face when integrating AI into academic practices? Consider both the benefits and potential harms.
  • Reflect on the human labor and environmental resources required to develop and maintain AI technologies. How does this awareness affect your perception of using AI in educational contexts?
  • The author mentions rethinking assessment practices in light of AI. How can educators adapt their assessment strategies to ensure that they are fair and effective in an AI-integrated classroom?
  • Considering that AI tools may not be designed with all users in mind, discuss the implications for accessibility and equity in education. How can educators ensure that the use of AI does not exacerbate existing inequalities?
  • Data privacy is a critical concern with AI technologies. How should educators and students navigate the balance between leveraging AI’s capabilities and protecting personal and academic data?
  • The author advocates for a thoughtful, rather than hasty, integration of AI into education. What does a future that incorporates AI ethically and effectively into education look like to you?
  • Reflective practices are emphasized as a way to engage with the complexities of AI. How can reflection and discussion enhance understanding and responsible use of AI among students?

 

References

Bjarnason, Baldur, “The LLMentalist Effect: How Chat-Based Large-Language-Models Replicate the Mechanisms of a Psychic’s Con,” Out of the Software Crisis. July 4, 2023, https://softwarecrisis.dev/letters/llmentalist/, accessed September 7, 2023.

Burgess, Matt, “ChatGPT has a Big Privacy Problem,” Wired. April 4, 2023, https://www.wired.com/story/italy-ban-chatgpt-privacy-gdpr/, accessed September 7, 2023.

EDF, “Accessible and Non-Discriminatory Artificial Intelligence,” European Disability Forum. May 5, 2022, https://www.edf-feph.org/accessible-and-non-discriminatory-artificial-intelligence/, accessed September 7, 2023.

Henneborn, Laurie. “Designing Generative AI to Work for People with Disabilities,” Harvard Business Review. August 18, 2023, https://hbr.org/2023/08/designing-generative-ai-to-work-for-people-with-disabilities, accessed September 7, 2023.

O’Neil, Lorena, “These Women Tried to Warn Us about AI,” Rolling Stone. August 12, 2023, https://www.rollingstone.com/culture/culture-features/women-warnings-ai-danger-risk-before-chatgpt-1234804367/, accessed September 7, 2023.

Riedl, Mark, “A Very Gentle Introduction to Large Language Models Without the Hype,” Medium. April 13, 2023, https://mark-riedl.medium.com/a-very-gentle-introduction-to-large-language-models-without-the-hype-5f67941fa59e, accessed September 7, 2023.

Rowe, Niamh, “It’s Destroyed Me Completely: Kenyan Moderators Decry Toll of Training of AI Models,” The Guardian. August 2, 2023, https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai, accessed September 7, 2023.

Sankaran, Vishwam, “ChatGPT Data Centres Are Consuming a Staggering Amount of Water, Study Warns,” The Independent. April 13, 2023, https://www.independent.co.uk/tech/chatgpt-data-centre-water-consumption-b2318972.html, accessed September 7, 2023.

University of Mary Washington, “Module: Digital Privacy and Security,” Web Building. n.d., https://umw.domains/module-digital-privacy-and-security/, accessed September 7, 2023.

Waruingi, Macharia, “ChatGPT Not Fully Accessible with JAWS,” LinkedIn. March 21, 2023, https://www.linkedin.com/pulse/chat-gpt-fully-accessible-jaws-macharia-waruingi/, accessed September 7, 2023.

Wieck, Lindsey Passenger, “Revising Historical Writing Using Generative AI,” Perspectives on History. August 15, 2023, https://www.historians.org/research-and-publications/perspectives-on-history/summer-2023/revising-historical-writing-using-generative-ai-an-editorial-experiment, accessed September 7, 2023.

Wong, Matteo, “America Already has an AI Underclass,” The Atlantic. July 26, 2023, https://www.theatlantic.com/technology/archive/2023/07/ai-chatbot-human-evaluator-feedback/674805/, accessed September 7, 2023.


  1. I tested positive for Covid on the very first day of classes, and in order to accommodate my illness, I had to cut some readings from the syllabus – both those related to historical content, and those related to generative AI.

About the author

Catherine J. Denial (she/her) is the Bright Distinguished Professor of American History and Director of the Bright Institute at Knox College in Galesburg, Illinois. She is the author of Making Marriage: Husbands, Wives, and the American State in Dakota and Ojibwe Country (2013) and multiple essays on pedagogical practice; her next book, A Pedagogy of Kindness, will be published in 2024.

License


Teaching and Generative AI Copyright © 2024 by Catherine J. Denial is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.