8 AI and Writing Classrooms: A Study of Purposeful Use and Student Responses to the Technology
Laura Dumin
Abstract
Given that AI is here for the foreseeable future, I revamped my spring 2023 English courses to incorporate AI in purposeful ways. I submitted an IRB protocol to test different approaches to integrating AI into my classrooms. The study asked students in three different English courses to complete pre- and post-reflections about their understanding of AI and its uses.
This chapter discusses how my students responded to the use of AI in the writing classroom, how their attitudes toward AI shifted over the course of the semester, ideas for using AI in writing classrooms, and the main takeaways from the study.
Keywords: large language models, assignment shifts, AI literacy, AI detectors
At the end of the fall 2022 semester, when ChatGPT seemed to come out of nowhere, I was among the writing instructors who felt panic and anger at the sudden shift in what students could get technology to do for them. I sat in that space for a few weeks before reading a comment from a colleague who questioned whether there were ways for us to use, rather than fear, this new AI tool. From that point, I shifted from a position of fear to one of questioning and learning. That led me to run an IRB-approved study (#2023-003) in my three English courses in spring 2023, where I focused on purposeful discussions of AI and gave guidelines for when and where AI could be used in each assignment. I also wrote a syllabus statement that laid out the basics of how AI use would be viewed in my classroom. I wanted to spend time learning how students were approaching AI and how I might incorporate it into my classroom in meaningful ways. This seemed like an important starting point for understanding how AI would change our classrooms.
In spring 2023, a piece by Owen Terry in The Chronicle of Higher Education looked at how students were actually using AI. Terry noted that students were asking these large language models (LLMs) to walk them through “the writing process step by step.” The Daily podcast episode “Suspicion, Cheating, and Bans: A.I. Hits America’s Schools” (June 28, 2023) discussed one professor’s fears about how students were using AI in his classrooms. Both pieces touched on a fear shared by many instructors: that students would offload writing assignments to the LLMs instead of doing the work themselves. Interestingly, these fears were not realized in my three courses. So what did happen? I’ll spend the rest of the chapter exploring that question, looking at where students started and ended the semester in their opinions of AI and how I taught about and encouraged students to use AI in their assignments, before finishing with some takeaways and next steps for instructors.
Methodology
I started with a basic plan in spring 2023: students completed pre- and post-reflections to assess what they knew about AI and how they felt about it at the beginning of the semester and then again at the end of the semester. I was teaching three very different courses: 1) First-year composition and research (face-to-face); 2) An upper-division course about the history of scientific rhetoric (face-to-face); and 3) An upper-division introduction to technical writing course (asynchronous online). I intended to talk to my face-to-face classes about AI in some meaningful (but at that time, undefined) way and then evaluate how their ideas about AI use shifted (or not) over the course of the semester. I added a syllabus statement about AI and made a few tweaks to my early assignment sheets; then I figured that the semester would go where it went. Not entirely scientific, but I didn’t want to hem myself in with a technology that none of us knew much about.
All of my demonstrations to students used free versions of AI programs. Students were allowed to use paid programs if they chose, but most appeared to be using the free versions.
Assignments
In each of the courses, I added AI literacy work and AI guidelines that made sense for the assignment and for the course itself. AI use was addressed at each stage of an assignment so that students had a clear understanding of when it was acceptable, or even potentially beneficial, for them to use AI in their writing.
First-year Composition
In our first-year writing program, the second semester focuses more heavily on research, allowing for some fun ways to approach adding AI. For their first paper, which was a definitional argument, I asked the class to think about how AI had changed the definition of something. I gave the example of customer service and how we once expected to chat with a person, but now we usually assume we are chatting with a bot until our problem gets to a point where we need human intervention. After that first paper, the topics were broader. Students could continue to focus on AI use if they wanted to, or they could look at other research topics and integrate AI into their process in some defined ways.
Students were also asked to get peer feedback, instructor feedback, AND ChatGPT feedback on their rough drafts. I then had them reflect on how the AI feedback compared to the human feedback.
History of Scientific Rhetoric
The History of Scientific Rhetoric is a course geared toward technical writing and composition/rhetoric majors. There is a lot of heavy theory to read, and students complete a large rhetorical analysis of an author and their use of a specific rhetorical or linguistic feature in their body of writing. For this project, I let students use AI in whatever ways worked for them as long as they were transparent about it. We spent a lot of time in class discussing where AI had and hadn’t worked, allowing them to share their experiences and learn from their interactions with the different programs. Students also had reading responses throughout the semester that were meant to be completed on their own, without the help of AI.
Technical Writing
The Technical Writing course was a little different since it was asynchronous and online. Because I couldn’t have deeper conversations about AI with these students on a regular basis, I had to focus more on student reflection on their experiences. In the first group project, each group was assigned to research part of the job-finding process, such as “Likely Interview Questions” or “What to Wear for an Interview.” Groups were asked to write a short paper on their findings and then develop a handout for the class on their main takeaways. Up to this point, AI was not involved in the assignment. After everyone had submitted their handouts, I moved people into different groups and asked them to have ChatGPT make a handout for the new topic. I then asked students to compare the ChatGPT version to the student-made version to see which one they liked better and why.
In their second group project, students visited a business, evaluated how accessible the space the business was housed in was, and gave recommendations for increasing accessibility. Students had more freedom to use AI as they saw fit here. After this paper, there was also a reflection piece asking about their use of AI and how useful they felt the programs were.
AI Guidelines
Each of my assignment sheets includes a set of guidelines for how and where AI might be used. Here is an example set of guidelines for a first-semester English Composition course where the main goal is to get students thinking about what kinds of writing they do, who the audience is for each type of writing, and how they approach the writing tasks (Figure 1).
Figure 1: AI Assignment Guidelines
Where can I use AI? There are a few places where it might make sense to use AI or generative AI this semester.

- Brainstorming: If you are stuck on what to write about, you can ask something like ChatGPT about a good topic. You can talk with the program and refine your ideas.
  - Must let me know that you used AI and reflect on the conversation that you had about the topics.
- Drafting: Stuck on drafting? Feel free to prompt one of the AI programs for some ideas. Having said that, up to 40% of your draft may be AI generated and should be colored red.
  - You will need to research everything that the AI gives you, even if you use something like Bing or Bard for sources. They aren’t necessarily giving you factual information or real sources.
  - Use APA citations for the AI-generated content.
- Peer reviews: AI critique is part of the peer review and feedback process. So you can feed your own assignment into a program. DO NOT FEED ANYONE ELSE’S WORK INTO AN AI PROGRAM. The feedback that you give needs to be your own.
- Final draft: Less AI makes sense here. At this point, you might keep some of what the AI gave you in the draft, but most of the work should be yours. Up to 15% of the final draft can include AI and should be colored red.
- Memo: Nope. This should be your writing. You will be reflecting on the AI programs that you used and where they were or were not helpful. But that information will still be written by you.
Discussion
As the semester wore on, I felt like I had more control in the face-to-face courses, as we were able to have many great discussions about what different AI programs could and couldn’t do. Students were engaged in these discussions and seemed to give good thought to the reflection pieces in which they discussed their use of AI. I felt like I had less control in my online course, partly because I don’t usually teach technical writing online and partly because I didn’t get to have those AI show-and-tell moments around their projects. Students in the online course also may not have had the same level of interest in AI as a learning tool because I was less able to share my enthusiasm with them.
One of the nice things about this approach is that I was able to focus on AI literacy and usage guidelines rather than on cheating with AI. We know that detectors don’t work well and that it is very easy for tech-savvy students to fool them (Bauschard, 2023). Terry (2023) notes “that it’s simply impossible to catch students using [AI], and that for them, writing is no longer much of an exercise in thinking.” There was also the unfortunate story of the Texas A&M professor who asked ChatGPT if his students had used it to write their papers…and then he failed the students because he took ChatGPT’s word as truth (Klee, 2023).
We know that, even though there have been improvements on both sides of the “fight” since January 2023, detectors still have a problematic rate of false positives (Tangermann, 2023). Case in point: OpenAI removed its AI detector in July 2023, citing its low level of accuracy (Dreibelbis, 2023). We have also learned that humans are bad at distinguishing between human-written and AI-written text (Jannai et al., 2023).
The idea that instructors can or should focus mainly on the negative aspects of student AI use goes against what I want to practice in the classroom. At the end of the day, I signed up to teach college writing, not to police student choices about how they complete their essays or to decide whether a student really wrote an assignment. Let me also be clear that I am not advocating that we stand back and ignore students who make obviously poor choices about assignment completion. But I think that we can still focus on what we write, why we write, and how to write well while allowing students to use AI programs throughout the process where it makes sense to do so. I also think that process becomes as important as, if not more important than, product as we focus on getting students to produce the desired result in ways that work for our classrooms.
AI Feedback, Reflection, and Activities
Students responded well to the reflection activities, spending time thinking about AI within their assignments. At the beginning of the semester, I received comments likening AI to something out of sci-fi or noting how fast and helpful the AI feedback on their drafts was. Most students noted that the response from ChatGPT was similar to what I or their peers had said, with some noting that ChatGPT gave slightly more detailed feedback on certain parts of their essays, such as wordiness or grammar.
By the end of the semester, students seemed to have become both more comfortable with and possibly a bit bored by the AI feedback. They had, however, grown more comfortable using AI for specific tasks, such as research. They understood that LLMs especially have a habit of giving problematic results (i.e., hallucinating), so they were aware that they needed to double-check any AI output. Students also indicated a good understanding of the difference between having AI augment the work along the way and having AI do the work for them. Some students noted that AI could give them more ideas than they started with, thereby expanding the depth of their paper discussions. I liken that to having a conversation with a friend and realizing a point that you might not have thought of before.
My upper-division students were more likely to note where they felt that generative AI could be a paper-writing timesaver, but there was also an understanding that Googling might be just as effective, since you have to double-check the results from ChatGPT anyway. They also had more ideas about using AI in the workplace and saw that as a probable way of writing in the future.
A few students noted that they weren’t fans of using generative AI for writing tasks, and I can relate, honestly. For tasks such as writing a paper, where I need to know what I think about something, I need to be the one to write the document. I can ask ChatGPT for critique and feedback, but only after I have done the early work. However, if I am writing a business document where I am not necessarily learning anything through the writing (such as a cover letter for a job), having ChatGPT write it first and then editing it myself saves time because I’m not spinning my wheels to get started.
For other students, having an AI helper at the beginning of a writing assignment can take away the stress of coming up with a topic. One of my History of Scientific Rhetoric students wanted to write about a climate scientist but didn’t know which one; she spent weeks working on a topic on her own with little luck. She then used ChatGPT to gather a short list of possible candidates and researched each person to learn 1) if they were real, and 2) if they had done the kinds of writing that she wanted to analyze. For this part of the assignment, she felt that ChatGPT was extremely helpful and saved her hours of Googling and library searches.
My first-year composition students seemed to find generative AI more helpful in the drafting stages than my 4000-level technical writing students did. A few upper-division students noted that ChatGPT (using GPT-3.5) had a bad habit of changing the subject or losing the thread of the conversation, which made it much less helpful for longer, more in-depth papers. First-year students’ papers, by contrast, tended not to be as long or as deep, so ChatGPT was better able to help with drafting. Some of this issue may have been fixed with the update from GPT-3.5 to GPT-3.5-Turbo (Monge, 2023), and I anticipate that it will become less of a problem with future updates.
Note that our student body has many non-traditional and first-generation students. While there are paid versions of many AI programs available, most of our students are likely to stay with the free versions for now because the added cost is prohibitive.
Possible AI Uses for Instructors
Keeping in mind those differences in how students at different course levels might choose to use LLMs, how could AI aid our writing classrooms? How many of us suggest that students go to the tutoring center for help with their assignments? And then, how many of our students actually go? There is a time factor, possibly a travel factor, and still a bit of a stigma in some places around going to the tutoring center. In some cases, institutions don’t have writing centers or tutors available. But if ChatGPT can give students viable feedback on their essays, it could be a helpful tool to supplement instructor feedback on rough drafts. The LLMs can help students who missed peer review or who didn’t get much feedback from their peers. In this way, AI can be a stand-in for peers when something goes amiss in the feedback process. For programs that want to make this kind of feedback easy to reach, the routine can even be scripted, as in the sketch below.
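For instructors or writing centers comfortable with a little scripting, here is a minimal sketch of what routinized draft feedback could look like. It is an illustration only: it assumes the OpenAI Python SDK and an API key, and the model name, prompt wording, and file name are my own inventions rather than anything my students used (they worked in the free ChatGPT interface).

```python
# A minimal sketch of scripted rough-draft feedback, assuming the OpenAI
# Python SDK (pip install openai) and an API key in the OPENAI_API_KEY
# environment variable. Prompt wording and rubric items are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_feedback(draft_text: str) -> str:
    """Ask the model for peer-review-style comments on a rough draft."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the model behind the free ChatGPT tier in spring 2023
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a writing tutor. Give constructive feedback on "
                    "thesis clarity, organization, and wordiness. Ask guiding "
                    "questions; do not rewrite the essay."
                ),
            },
            {"role": "user", "content": draft_text},
        ],
    )
    return response.choices[0].message.content


# Example use:
# print(draft_feedback(open("rough_draft.txt").read()))
```

The design choice worth noting mirrors the guidelines in Figure 1: the prompt asks for critique and guiding questions rather than rewritten text, so the student stays the writer.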
Students can also use ChatGPT to better understand Western bias. For example, think about an employee handbook at your workplace. Think about what clothes and hairstyles are considered acceptable in that handbook; what about behaviors and ways that people are expected to relate to each other? Some of these things are deeply rooted in the culture from which they were written (for example, condemning Black hairstyles or saying that women showing their shoulders at work is unacceptable). To think more deeply about these biases, students could ask one (or more) of the LLMs to write some employee guidelines and then compare them to the ones in their workplace handbook. They could also ask the LLMs to critique their current workplace handbooks to see where biases might appear. Students can reflect on current handbook wording and ChatGPT wording, observing where Western culture and bias might pop up in both expected and unexpected ways.
Staying with biased output for a moment, students can also use visual AI programs such as Midjourney to develop graphics. Students could then look at the output and compare it to reality. In this way, students can start to understand more about how Western culture views certain words, such as “doctor,” or groups of people, such as religious or ethnic groups.
Having students reflect on AI output seems to be valuable and is a piece that I will keep. For stages such as brainstorming and drafting, students can note where the LLMs were helpful and where they were more of a hindrance to finishing the project quickly and well. For feedback, students can compare what the AI says with what their peers and instructors say, and consider which AI programs make sense to use and where. By thinking about what multiple audiences said about their papers, students can more clearly start to observe themes in the feedback. Students may also be more willing to accept negative feedback from an AI than from a human, interpreting that feedback as less emotionally loaded. And as students reflect on which programs helped them and which were flashy but not as helpful, they can gain a better understanding of when it makes sense to deploy certain tools.
Main Takeaways
My main takeaways from the study are as follows:
- AI literacy works, and we all need to be talking about AI, both the good and the bad, in our classes.
- Guidelines for AI use are necessary and helpful to clarify expectations.
- Talking about how LLMs work makes sense. Helping students understand that ChatGPT and other LLMs are, at their core, very large predictive-text models helps them see the power (and the limits) of these programs; a toy demonstration of the idea appears after this list.
- Talking about AI output and critical thinking works. Showing what the LLMs can give us and then using critical thinking skills to dissect the output as a class is a helpful exercise. This should happen regularly throughout the semester to drive this point home, and to allow for updates and shifts in the programs.
- Students in my classes saw AI for what it is and can be, a tool, instead of a way to cheat. In fact, many of them were bothered by the idea of using AI as the sole way to complete an assignment.
- Students appreciated that we took the time to dive into AI and use it in different ways. They were thankful for the knowledge about what AI can and can’t do well.
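To support the predictive-text point above, a small in-class demonstration can make the idea concrete. The sketch below is a toy bigram model over a made-up corpus; real LLMs are enormously larger and trained very differently, but the core move (predicting a plausible next word from the words so far) is the same. The corpus and names here are my own illustration.

```python
# A toy "predictive text" demo for class: a bigram model over a tiny,
# made-up corpus. Real LLMs are vastly larger, but the core idea
# (predict a plausible next word from the words so far) is the same.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Record which words follow each word in the corpus.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)


def generate(seed: str, length: int = 8) -> str:
    """Extend the seed by repeatedly sampling a likely next word."""
    words = [seed]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break  # no recorded continuation; stop early
        words.append(random.choice(options))
    return " ".join(words)


print(generate("the"))  # e.g., "the cat sat on the rug"
```

Running the demo a few times produces different sentences from the same seed, which opens a classroom conversation about why ChatGPT’s answers vary and why fluent output is not the same as factual output.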
Future Changes
Thanks to what I have done so far with my students, the groups I participate in online, and the workshops I have been running for faculty, I have learned more about what the next few semesters might hold for how LLMs could disrupt education. One assignment that I have started using asks students to turn in annotated PDF copies of each source they cite in their papers. These annotations are less formal than an annotated bibliography. Instead, I want students to highlight concepts or quotes that stood out to them for their papers and then write a few sentences about how those concepts influenced their paper’s direction or why they used those specific quotations. This takes us back to good research skills and helps students understand the process of writing. It also allows me to focus on the process as much as, if not more than, the final product.
My hope is that I can find a balance between having students put together papers and helping students recognize where AI can be useful in the process. I also hope to be able to de-center grades even more than I already have. I want students to have room to try things out and even fail along the way without hurting their GPA. When students have room to experiment, I have found that they often will, and this can lead to stronger learning outcomes because students are more willing to engage with the material.
Conclusion
As we think about where we want to take our courses and what we want to teach our students, we must acknowledge that AI is here for now and be willing to meet students where they are with these technologies. I think it’s a good reminder to get back to some of the basics in our classes, keeping in mind the following:
- Adding AI literacy and critical thinking skills. These two go together for me. We show students what AI can and can’t do to help their writing, we ask them to reflect on AI output, and we ask them to critically interrogate that output. If Abe Lincoln is talking about the internet, that should be a red flag. Helping students learn where the AI is likely to have poor output is all part of the process.
- Returning to research skills. There are numerous ways to teach research, but I’m going to advocate for something like the approach described above: requiring annotated PDFs of all cited sources.
- Having clear departmental policies for writing programs and AI use. This may require some discussions to ensure that the needs of most instructors are being met, but it is worth the time to do this.
- Adding training to departmental back-to-school meetings at the beginning of each semester. This allows instructors to learn what has changed since they last engaged deeply with AI and gives them time to work with the AI as they feel comfortable doing so.
By focusing on what instructors and students want and need to know about AI use, programs can find strong pathways forward for AI engagement without giving up on rigor and writing.
Questions to Guide Reflection and Discussion
- How does incorporating AI in writing assignments influence students’ writing processes and their understanding of AI’s capabilities and limitations?
- Discuss the impact of AI on students’ perceptions of writing and research. How do attitudes shift from the beginning to the end of a semester?
- Reflect on the ethical considerations and classroom policies for AI use in academic writing. What guidelines could foster responsible use?
- Explore the differences in AI engagement and its perceived benefits across various levels of writing courses (first-year composition vs. upper-division courses).
- How can AI literacy and critical thinking activities be integrated into writing curricula to enhance students’ analytical skills and ethical awareness?
References
Bauschard, S. (2023, August 20). AI writing detectors are not reliable and often generate discriminatory false positives. Substack. https://stefanbauschard.substack.com/p/ai-writing-detectors-are-not-reliable
Dreibelbis, E. (2023, July 25). OpenAI quietly shuts down AI text-detection tool over inaccuracies. PC Magazine. https://www.pcmag.com/news/openai-quietly-shuts-down-ai-text-detection-tool-over-inaccuracies
Dumin, L. (2023, June 19). AI creative learning project. Medium. https://medium.com/@ldumin157/ai-creative-learning-project-1247a0a79358
Jannai, D., Meron, A., Lenz, B., Levine, Y., & Shoham, Y. (2023). Human or not? A gamified approach to the Turing Test [Preprint]. https://arxiv.org/pdf/2305.20010.pdf
Klee, M. (2023, May 17). Professor flunks all his students after ChatGPT falsely claims it wrote their papers. Rolling Stone. https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601/
Monge, J. C. (2023, June 13). OpenAI GPT-4 and GPT-3.5-Turbo updates: Cheaper and larger context. Medium. https://generativeai.pub/openai-gpt-4-and-gpt-3-5-turbo-updates-cheaper-and-larger-context-2151facf323
Tan, S. (Audio Producer). (2023, June 28). Suspicion, cheating, and bans: A.I. hits America’s schools [Audio podcast episode]. In The Daily. New York Times. https://www.nytimes.com/2023/06/28/podcasts/the-daily/ai-chat-gpt-schools.html
Tangermann, V. (2023, January 9). There’s a problem with that app that detects GPT-written text: It’s not very accurate. Futurism. https://futurism.com/gptzero-accuracy
Terry, O. K. (2023, May 12). I’m a student. You have no idea how much we’re using ChatGPT. The Chronicle of Higher Education. https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt