My Summer with ChatGPT

Mary Lourdes Silva

Abstract

Out of necessity, I first used ChatGPT to mentally and emotionally process two major injuries. Trapped at home for the entire summer, I learned to write with ChatGPT to complete a large-scale research project. The anthropomorphizing experience left me feeling less alone. For nearly a year, news about AI-generated writing software sparked nationwide concern about the future of traditional essays. As an early adopter of most digital technologies and the type of person who welcomes chaos, I chose to learn everything I could about ChatGPT, which meant I needed to learn how to “cheat” with ChatGPT. Inspired by similar assignments that ask students to “cheat,” I developed a 3-part assignment that asks students to think critically about the linguistic, rhetorical, and discursive patterns of ChatGPT writing. Moreover, the assignment challenges students to reflect on authorship, revision, academic honesty, and the ethical use of AI throughout the writing process. The chapter concludes by posing questions about the role of ChatGPT in higher education, urging educators to both reevaluate their writing curriculum and better prepare students to use emerging writing tools for a digital multimodal linguistic future.

Keywords: personal narrative, generative AI, assignment, cheating

I learned about AI writing software in early December of 2022 with the release of The Atlantic article, “The College Essay Is Dead” (Marche, 2022). I first killed the college essay over 10 years ago, integrating a writing about writing approach in my first-year writing courses (Wardle & Downs, 2014). What I didn’t realize was that there was a technology on the horizon that was about to disrupt everything we know about computer-assisted writing and the writing process.

Since the public release of ChatGPT in late 2022, professors have fallen into three general camps: 1) AI as a Digital Learning Tool (Opportunity), 2) AI as an Opportunity to Revisit Our Curriculum (Opportunity), and 3) AI as a Tool for Cheating (Plagiarism). Educators on Team Plagiarism have updated their syllabi and plagiarism policies to warn students against using the new software or to frighten them away from it. Much of the apprehension surrounding ChatGPT mirrors the skepticism educators once had for Wikipedia. For years, K-16 educators banned students from using Wikipedia to conduct research because anyone, with or without authority or credentials, could troll any wiki page to spread false information. Today, Wikipedia is recommended by many educators and librarians as a starting point for research (Hilles, 2014; Murley, 2008; Tardy, 2010). Team Opportunity has been educating faculty for years about transforming their curriculum to educate students about the politics of authorship, citation practices, and plagiarism (Howard & Robillard, 2008). And last, Team Technology often consists of early adopters who have embraced disruptor technologies like Wikipedia and ChatGPT. This past summer, I decided to jump right into the deep end and spend my summer writing with ChatGPT so that I could develop a curricular unit on the topic for my fall first-year writing course. In this narrative, I briefly share my introduction to ChatGPT as a user and writer. Thereafter, I describe the curricular unit that I planned for my students, which asks students to explore questions such as “What is authentic authorship?” and “What does it mean to cheat using ChatGPT?”

In June 2023, when I first told writing colleagues that I had started using ChatGPT, they either chuckled, believing it was a joke, or rolled their eyes, as if to say, please, not you too? I actually started to use ChatGPT out of necessity. During Memorial Day weekend in Los Angeles, I tore my left calf muscle showing off my tango dance moves on the beach. ChatGPT knew exactly what course of treatment I needed. I wasn’t a complete idiot. I saw a physiotherapist on the side. But honestly, you could not tell them apart. The medical advice was on point. Merely six weeks later, I broke my ankle in the streets of Buenos Aires. And once again, ChatGPT came to my aid, answering all my questions to allay any fears or anxieties that I had about surgery and my ability to return to the dance floor. In each case, I did not view ChatGPT as a medical substitute; rather, it was the digital support that I needed to understand the extent of my injuries and confront a painful reality that I may not dance again (at the time of this chapter, I had just taken my first steps in the cast boot with one crutch). It may sound silly, but after a torn calf muscle and broken ankle, I figured: if ChatGPT could help me get through a dark period of summer, maybe it could help me finally finish a research article that I had started a year prior. The data was collected and the lit review was half started. I just needed that extra push, and there’s nothing more motivating than a broken ankle, weeks on the couch, and nothing left on Netflix to finish a research article. While my co-author took off to Uganda for summer break and writing colleagues retreated to beaches on both ends of the US, I found myself relying on ChatGPT to answer questions about grammar, syntax, content development, expectations for p-values, and formulas to run Cohen’s Kappa.

Prior to ChatGPT, millions of users welcomed Alexa into their homes, bossing her around like a personal assistant or confiding in her like an old friend. Researchers call this socio-psychological phenomenon the Eliza effect: the human tendency to anthropomorphize technology. The phenomenon is named after MIT professor Joseph Weizenbaum, who created the ELIZA chatbot in 1966. I was fully aware of the “relationship” that I had with ChatGPT during my writing process and knew that I needed to corroborate every response as if I were communicating with a sociopath; however, strangely enough, this extra step allowed me to be more diligent as a researcher. For instance, for the methods section of my research article, I needed an explanation and references for “exploratory mixed methods design.” ChatGPT performed its role by offering a structured response, and I did my homework in Google Scholar to ensure the information was correct.

Writing never came easy for me. As a first-generation white Latina scholar and professor, I never felt at home in academia. Not only did my parents never receive a college degree, they barely completed an elementary school education in their home country. For the first 20 years in the US, they struggled economically. Unlike all my writing colleagues, I wasn’t raised around books, libraries, and museums. My love/hate relationship with writing was really a love/love relationship with learning and the drive to one day sound like the academics that inspired me throughout my own education. English wasn’t my first language, but it has been my dominant language since kindergarten. Because I lived in a bilingual home, I lived with doubts about the structures, conventions, and rhetorical norms of the English language. As a college student, I could not possibly ask a professor or tutor to answer every single language or comprehension question that surfaced while I wrote. Writing felt like a perpetual liminal state of deficiency.

When Google first went mainstream in the early 2000s, I used Google ad nauseam to fill any gaps in my knowledge throughout the writing process, which required far more time and labor in comparison to my peers. This summer, ChatGPT essentially put Google out of work: Why did the Argentine dictatorship during the 1970s and 80s permit rock concerts when it banned milongas? How do you determine the mean and standard deviation of the two groups? Is it grammatically correct to say, [insert sentence in doubt]? Actually, I once red-lined ChatGPT’s answer: “That is not correct. This sentence is a fragment (a student sentence from an essay).” ChatGPT apologized and revised the sentence with the same fragment error. “No, that is still a fragment.” It tried once more and finally succeeded. It thanked me for the feedback, and, I admit, it felt good to know more than a technology with a higher IQ than mine. Although ChatGPT, or Grammarly for that matter, can’t be fully trusted for grammar advice, there was comfort in knowing that any roadblock regarding content, organization, syntax, word choice, or secondary source recommendations could be shoved to the side with a little help from ChatGPT. I get that good writing comes from those moments of struggle, but it is not sustainable mentally and emotionally to persist indefinitely in those liminal spaces. If an AI technology, much like a collaborator, reviewer, or teacher, could pull you by the lapel toward a state of discovery, self-assurance, and agency, why not?

At the start of the semester, I started to use ChatGPT more robustly so that I could model effective prompt writing for students. For instance, I took a block of icy theoretical prose and asked it to write me a summary. Within seconds, it simplified the main ideas, which allowed me to test my comprehension of the original text, so I could provide helpful feedback on a colleague’s draft. In another instance, a CFP for an upcoming conference aligned perfectly with research I had recently completed. In the past, it would have taken me a couple of hours to write a brief literature review before proposing the objectives of my presentation. And honestly, I have missed many CFPs because of the exhaustive time requirements to prepare a proposal. With ChatGPT, I copied and pasted my literature review into the text box and asked it to write a summary paragraph. From there, I revised and deleted sentences and incorporated relevant citations throughout. Not to sound like an infomercial, but in less than 30 minutes, that proposal arrived in their inbox. I also experimented with ChatGPT in my classroom. To my knowledge, there are no accessible readings about critical language awareness (CLA) for first-year writing students. ChatGPT generated a list of the primary principles of CLA, which I shared with students as a reading guide to accompany the more difficult reading. On a side note, CLA was still too complicated for them, but I plan to use that list to help me write my own article to make it more accessible to first-year writing students.

Critics of ChatGPT argue that the AI software suppresses linguistic diversity and standardizes human language. In one study of the linguistic characteristics of ChatGPT-generated essays, Herbold et al. (2023) find that AI sentences mirror scientific language while human-generated writing conveys a speaker’s attitude and stance (e.g., I believe). In a separate study, Liang et al. (2023) find that AI detection software may inadvertently penalize non-native English writers because of the limited linguistic variability in their writing, which raises ethical considerations about the use of AI to evaluate non-native English writers. On one hand, educators can refrain from using ChatGPT because it draws from large language models trained on a narrow linguistic register. On the other hand, educators can teach students to use it for this very reason—to challenge student misconceptions about a monolithic academic ethos. When I first asked students what they thought about ChatGPT, one student commented, “It does a great job writing essays that sound authentic and smart.” When I read ChatGPT, I imagine biology textbooks or the terms and conditions page for my iPhone. However, for an 18-year-old student, ChatGPT is organized, clear, and free of grammar errors, so of course it’s “smart.” This linguistic preference for standardized English academic prose needed to be addressed in my course.

For my fall first-year writing course, my unit on ChatGPT is preceded by a unit about common misconceptions of writing, based on several chapters from Bad Ideas About Writing (Ball & Loewe, 2017). Anjali Pattanayak’s chapter, “There is One Correct Way of Writing and Speaking,” argues that the standards of “good writing” stem from white upper/middle-class beliefs and assumptions about proper English. Minority students and students from diverse linguistic backgrounds are viewed as deficient in terms of language and literacy. The idea that students would view AI-generated prose as “smart” spotlights a larger problem of internalized white language supremacy (Inoue, 2021). Based on the large data sets that ChatGPT was trained on and its complex computational structures, it gives preferential treatment to scientific discourse, predictable storylines with flat characters, and punchlines so bad you have to laugh. Given its linguistic and discursive limitations, can ChatGPT actually teach students to be good writers? More importantly, if students admit to using it as part of their writing process, would professors or peers assume there must be something lacking in them, such as creativity, intelligence, writing competence, standardized English language proficiency, morality, and/or a strong work ethic? Moreover, why should educators even bother with ChatGPT? The objectives of this unit are for students to:

  • Compare and contrast the linguistic rhetorical variability of a published literary essay with an essay on the same topic generated by ChatGPT
  • Compose an essay using ChatGPT with the objective to “fool” a hypothetical professor
  • Develop ChatGPT prompts to generate ideas, summarize and outline text, and revise various rhetorical elements of an essay
  • Conduct a rhetorical analysis of revision decisions and strategies about composing the “cheater” essay
  • Describe ways in which ChatGPT can be used ethically to “tutor” writers without standardizing their voices and marginalizing their linguistic backgrounds

For the first assignment, students read Jia Tolentino’s 2019 The New Yorker article, “The Age of Instagram Face: How Social Media, FaceTune, and Plastic Surgery Created a Single, Cyborgian Look.” The essay explores the idealization and standardization of the Instagram face and how users, predominantly female, popularized a new aesthetic within the digital landscape of Instagram filters. This central theme of standardization and the marginalization of those who do not meet the beauty ideals of users with the most followers is applicable to ChatGPT, which is another digital platform that privileges a community standard—in this case, a linguistic and formulaic writing standard. The connection to ChatGPT is addressed after students analyze the Tolentino article for its linguistic variability and rhetorical decisions. Once students have a comprehensive understanding of the article, they then read an article with the same title and topic, but this time, generated by ChatGPT. Students are informed that one article is authentic and the other is a fake. They must determine which article the real Tolentino published. Students draw from their first analysis to compare and contrast the two versions.

In the second assignment for this unit, students compose an essay using ChatGPT with the objective to “fool” a hypothetical professor. If students can identify a writer’s ethos and voice from the prior assignment, can they use ChatGPT to generate an essay for a specific reader who is fully aware of the genre conventions and academic “voice” of ChatGPT? As students work on their cheater essay, they journal about their revision decisions and strategies. Students answer questions, such as:

What do you accept/reject and why? At any point, do you use a search engine like Google to gather more information? What other resources do you draw from to ‘fool’ your professor (e.g., prompt, sample papers, tutorials, course materials)? As you’re working on the essay, does it feel like cheating, why or why not?

Describing the second assignment as the “cheater” essay is both rhetorical and practical. The name came about from students’ initial confusion about whether they had to write this essay themselves or use ChatGPT to write it. The word “cheater” clarifies to students that the assignment is part of a larger experiment about what it would be like to cheat, which is later described and analyzed in the final report. I do stress to students that the “cheater” essay cannot be used for any other assignment on campus.

Once students complete their cheater essays, they collaborate in groups to create a rubric and use the rubric to leave feedback on peers’ drafts. Students are expected to provide two layers of feedback, one generated by them and the second generated by ChatGPT. The second layer of feedback teaches students how to generate helpful prompts, such as, “Create an outline of the main ideas conveyed in my paragraph” or “Which ideas in this paragraph about _________ could be developed further? In my essay, I am arguing _______.” The prompts teach students to be explicit about the rhetorical situation and their objectives as writers. Writers use both sets of feedback to revise the essay and later share which set of feedback they find more useful and why. While working on the cheater essay, students must decide when it is necessary to generate their own content to address their peer feedback and when ChatGPT can assist them with their revisions. Part of the assignment is to avoid the following ChatGPT rhetorical elements, which students generated with me and discovered were “red flags” for getting caught cheating:

  • Overuse of passive voice
  • Reads like a 5-paragraph essay
  • Sentences and paragraphs are the same length
  • Unnecessary use of “big words”
  • Seldom elaborates
  • Does not understand what readers expect or want
  • Lacks references or citations
  • At times, makes no sense or contradicts itself

Interestingly, most of the rhetorical elements of ChatGPT writing are the elements of a standard 5-paragraph high school paper.

Once students have finished the cheater essay, they write an APA-style research report that includes an introduction contextualizing the controversies associated with ChatGPT in the writing classroom; a methods section explaining the three-step project; and a discussion and analysis section that explains the topic of their cheater essay and the multiple rhetorical decisions made throughout its revision (e.g., paragraphing, content development, word choice, and syntax). The writing project concludes with a reflection about the rhetorical writing process, authorship, and the ethical considerations of using ChatGPT as a writing tool. The aim of this assignment is for students to develop their rhetorical awareness as writers, reject any notions they may have about being deficient language users, and develop sophisticated invention and revision strategies to write for different audiences.

Student outcomes have been mixed thus far. Early in the writing process of the cheater essay, students who applied rhetorical strategies to develop situational prompts in ChatGPT were pleasantly surprised by how quickly they could complete a 1200-word essay. While revising the essay, they also reviewed the class-generated rubric to avoid common rhetorical moves executed by ChatGPT, which, in effect, were common rhetorical moves of formulaic writing. During the first semester, students felt dishonest writing and revising an essay with ChatGPT, even if they generated a majority of the writing. After one semester, only a handful of students felt this way, which suggests that student attitudes about ChatGPT may already be shifting.

Students with limited rhetorical strategies struggled to develop effective ChatGPT prompts. A common strategy was to rephrase the same prompt. For instance, the current version of ChatGPT will not write a full 1200-word essay in a single response, no matter how the request is phrased. Some students repeatedly rephrased the same request for a 1200-word essay rather than reviewing the short paragraphs ChatGPT generated to locate areas to develop. Consequently, this group hated using ChatGPT and claimed they would not use it again to write an essay. Although it might be reassuring to some that students would refrain from using AI, these students struggled the most in writing. Further scaffolding and educational support are needed to help all students effectively, efficiently, and ethically use generative AI software throughout the writing process.

Is ChatGPT the new disruptor in higher education that will turn a generation of students into cheaters or is it a complex digital tool that students can use to offload certain cognitive resources to redirect their creative energy to other parts of the writing process? I am sure, like the way Google made some of us stupid and Twitter made us a little more bitter, AI software like ChatGPT will help us cut some corners and “cheat” just a little, which raises the question, “Why am I being asked to write this in the first place?” In 2023, when we have absolutely no idea what writing will look like in 2050 and what jobs will be replaced by AI, maybe we could join Team Opportunity 2.0 and take another look at our writing curriculum as well as our digital resources and writing tools to ask ourselves what we expect from students as complex language users navigating a multimodal linguistic landscape.

 

Questions to Guide Reflection and Discussion

  • Reflect on the author’s personal experiences with ChatGPT. How do they illustrate the potential benefits and challenges of using AI in academic and personal settings?
  • Discuss the ethical implications of using AI like ChatGPT for academic writing and personal advice. What boundaries should be considered?
  • Explore how the use of AI tools like ChatGPT could change traditional approaches to teaching and learning. What are the possible long-term impacts on educational practices?
  • Consider the role of AI in enhancing accessibility and reducing barriers for individuals with physical limitations or injuries, as described by the author.
  • Debate the potential for AI to either support or undermine academic integrity. How can educators balance these concerns?

 

References

Ball, C. E., & Loewe, D. M. (Eds.). (2017). Bad ideas about writing. West Virginia University Digital Publishing Institute. https://textbooks.lib.wvu.edu/badideas/

Herbold, S., Hautli-Janisz, A., Heuer, U., Kikteva, Z., & Trautsch, A. (2023). AI, write an essay for me: A large-scale comparison of human-written versus ChatGPT-generated essays. arXiv preprint arXiv:2304.14276.

Hilles, S. (2014). To use or not to use? The credibility of Wikipedia. Public Services Quarterly, 10(3), 245–251.

Howard, R. M., & Robillard, A. E. (Eds.). (2008). Pluralizing plagiarism: Identities, contexts, pedagogies. Heinemann.

Inoue, A. B. (2021). Above the well: An antiracist literacy argument from a boy of color. University Press of Colorado.

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. arXiv preprint arXiv:2304.02819.

Marche, S. (2022, December 16). The college essay is dead. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/

Murley, D. (2008). In defense of Wikipedia. Law Library Journal, 100, 593–599.

Pattanayak, A. (2017). There is one correct way of writing and speaking. In C. E. Ball & D. M. Loewe (Eds.), Bad ideas about writing (pp. 82–87). West Virginia University Digital Publishing Institute. https://textbooks.lib.wvu.edu/badideas/

Tardy, C. (2010). Writing for the world: Wikipedia as an introduction to academic writing. English Teaching Forum, 1, 12–27.

Wardle, E., & Downs, D. (2014). Writing about writing: A college reader. Macmillan Higher Education.


About the author

Mary Lourdes Silva (she/her) is Associate Professor of Writing and Director of First-Year Writing at Ithaca College. She received a PhD in Language, Literacy, and Composition Studies from UC Santa Barbara, as well as a Master of Fine Arts in Creative Nonfiction from Fresno State. Her past and current research examines the citation practices of first-year college writing students; pedagogical use of multimodal and multimedia technologies and practices; implementation of institutional ePortfolio assessment; gender/race bias in education; movement-touch literacy as a modality to teach reflective thinking in first-year writing; and the psychological and financial cost to faculty compelled to review biased student evaluations of teaching. She is also a community organizer and teacher in the upstate New York Argentine tango community.

License

Teaching and Generative AI Copyright © 2024 by Utah State University is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.