20 Fit to Resist in Post-Product Space: Underserved Student Populations and Generative AI’s Writing Norms

John Paul Tassoni

Abstract

This essay describes ways the specter of AI altered my interactions with students in a first-year composition course and how my response speaks to the broader impact of generative AI on underserved populations–especially in the sense that large language models and writing instruction can both affirm center/dominant discourses, values, and practices. The essay argues that the emergence of AI should shift writing instruction for “at-risk” and other marginalized student populations into post-product space. In such space, course work emphasizes students’ negotiations with the expectations of higher education’s hidden curriculum alongside the finished products that AI can now always already provide.

Keywords: post-product, generative AI, basic writing, accelerated learning programming, first-year composition

 

Introduction: Now in the Future

Various aspects of higher education position first-year composition (FYC) courses as sites in which underserved populations gather. As Ritter (2023) explains in Beyond fitting in, FYC is likely to include students who lack prior college credits, which leaves FYC “stand[ing] to be perceived as the de facto remedial course simply through exclusion of more privileged students who no longer must take the requirement” (p. 11). In her “Foreword” to the same volume, Gere (2023) highlights the implications of this standing in relation to literacy education: “[L]iteracy skills mark individuals,” she writes, “and the marking can cleave groups into those who feel competent and included and those who do not” (p. x). It stands to reason, then, that the ways FYC teachers respond to the emergence of generative AI can (re)shape not only the literacies of underserved student populations but also how those literacies are marked in and beyond FYC. This situation offers FYC teachers a key role in exploring the implications of AI for such students and preparing them to interact effectively with the technology.

Although this collection’s readers will not necessarily be teachers of composition, I think I am on firm ground when I assert that generative AI programs will impact any course that employs writing as a form of assessment and that FYC pedagogies responsive to the emergence of text generators can (re)shape underserved students’ approaches to writing assignments at all levels of the academy. As I write this paper, countless FYC teachers are for the first time conducting courses that encourage students’ explorations regarding possibilities for AI’s use in college writing. While I describe in this chapter ways that I am incorporating generative AI into my current FYC course, I do so only briefly and share just a few of these ideas. Keeping my eye on recent scholarship focused on AI, I’m finding my assignment ideas are not novel; I’m also aware that, given the exponential advances we are currently witnessing, any exercise I describe, not to mention any claims I might make about large language models themselves, might well be obsolete by the time this paper reaches publication. What I do want to emphasize, though, are ways AI’s emergence could impact “at-risk” and other marginalized student populations; I want to call out issues and pedagogical approaches relevant to these populations that faculty across the curriculum might consider as higher education moves further into what could be a post-product era in college writing, one in which a good deal of instruction and students’ development as writers occur in the aftermath of a “final draft.”

In the section that follows, I narrate my first encounter with AI through discussions surrounding a paper submitted in a FYC class. The section illustrates how generative AI can impact views on writing pedagogy, especially in terms of ways AI’s products and writing norms might displace (rather than creatively alter) the rhetorical negotiations that students and teachers undertake as part of a recursive writing process. Then, in the section titled “This Fit,” I describe specific issues that my encounter with the FYC paper surfaced in regard to “at-risk” and other marginalized student populations, especially the ways AI ushers in a “fit” for students whose literacies might otherwise be dismissed, if not denigrated, in higher education. “Something Now to Teach,” the succeeding section, then summarizes changes to writing instruction that I am undertaking now in conjunction with generative AI. This section underscores the discussions/reflections such changes seek to generate and how these post-product discussions/reflections address underserved students. The “Conclusion” imagines ways a post-product approach, one that centers rather than restricts students’ collaborations with large language models, might alter (or not) teachers’ interactions with these student populations.

Finding Fit? 

My first awareness of AI-generated texts grew out of discussions surrounding a paper submitted in one of my asynchronous accelerated learning program (ALP) courses. At my open-access regional campus, students determined to be “at risk” (through ACT scores, writing samples, previous GPAs) are encouraged to enroll in FYC along with a corequisite, two-credit ALP section (taught by the same instructor) that provides additional support as students complete the general education requirement. Students in my ALP sections, then, beyond even those literacy markers that typically attend students in FYC, exhibit markers that my institution deems even further outside the “fit” of what Ostroff (2001) would call the “mythical average [student] norm” (p. 1.9).

While ALP students can prove difficult to engage outside of course exercises, there are still class members who reach out to me often as they draft their papers. The draft work for the paper I mention above, for instance, involved several Zoom meetings and nearly 100 email messages. The content of such interchanges frequently underscores for me the degree to which our ALP students grapple not only with the demands of higher education but also with emotional and developmental disabilities, the daily demands of a job, and frequent domestic disturbances. These difficulties often manifest in students’ writing, and we might see such manifestations as signs of students’ struggles in regard to their “fitting in,” their struggle to meet expectations that guide assessment in FYC and in classes to come. These struggles mark more than students’ failures to demonstrate certain skills; the struggles also reflect ways an institution’s expectations, anchored to notions of a “mythical average norm” (Ostroff, 2001, p. 1.9), fail to account for the range of circumstances that shape students’ educational experiences. FYC courses, ALP classes in particular, stand at the center of this mutual struggle.

Then, seemingly overnight, one paper submitted to me for a grade exhibited no such struggle. I received the final draft of a research-based argument, the course’s most heavily weighted assignment, just two days after my most recent conference with the writer. In that Zoom call, I had labored still to find the thread of the paper’s argument, and I puzzled over possible ways the author might (re)organize their information effectively. I had also lamented (out loud) how little time there was now (in early December) for the student to conduct the in-depth research I’d encouraged all term, let alone time for them to learn how to document sources appropriately in MLA style and reduce the number of run-on sentences and sentence fragments pervading the prose. Two days later, the draft submitted for a grade cited scholarly sources (actual sources at that), none of which had appeared in any earlier iteration of the paper. Citations, all in APA style (which our course did not teach), were formatted accurately (for the most part). Sentences were varied in length, clear, and free from error; paragraphs were logically arranged; counter-arguments were now considered. The only elements missing in this version were anecdotes that expressed the writer’s own connection to the community issue their paper described and sought to resolve. These had been stories I’d returned to time and time again in our discussions, stories I had tried to stir into significance. Now those anecdotes were gone, and but for the topic itself, this latest draft in no way resembled the ones that preceded it.

After my Google searches turned up no matches, I posted a description of the paper and pasted a passage to a department listserv, the one we have reserved for discussions of FYC and cases of possible plagiarism. Other faculty members emailed back to say that they did not recognize the paper and/or could not find it after searching Google themselves or consulting Turnitin. One faculty member, however, one among several others in our department who’ve influenced my thinking on this topic, indicated that the posted passage appeared to be something that might have been written by AI. He sent me an article about ChatGPT, as well as a link to a platform that claimed to detect AI-generated texts, a platform he warned was unreliable. The situation, I knew then, was bigger than this one FYC paper.

In a subsequent Zoom meeting where I underscored for the student the vast differences in their two most recent drafts, I received no acknowledgement of involvement with AI, and subsequently, my department chair concluded we could not pursue a case of academic dishonesty (see Nolan, 2023). Given that sources listed in the submitted draft were all actual sources and given the strong claims made in that essay (neither of these traits, at this time, being a typical feature of AI-generated texts), I cannot positively say AI wrote the paper. I’ve since even wondered if the paper represented an altered AI product, one the author worked with in dialogue to convey their own ideas, but given the stark differences between drafts and the short time span, I doubt such collaboration occurred. I only know that, now that I was aware of large language models, their specter radically reshaped the ways I would read student writing to come. The emergence of AI served as a wakeup call: I’ve become more proactive in my responses to even the most minor of assignments when I sense something non-human about them. I worry that, given my relatively heavy, 4/4 course load (predominantly FYC), I’d begun to accept the algorithmic as the norm; I worry that my approach to FYC students had come to accept the appearance of academic discourse (i.e., fit) over substance.

Regardless of the suspicions that attended my first sense of AI text generators, I nonetheless want to emphasize that the academic integrity statements on my syllabi that followed did not sidestep the promise of human/AI collaboration. I’ve been fortunate enough to work with colleagues who have researched digital rhetorics for decades, as well as several emerging scholars who, through our center for writing excellence, have conducted multiple workshops on the topic. These teacher-scholars have from the time of ChatGPT’s emergence in fall 2022 encouraged us all to focus on the possibilities of this technology. The conversations they’ve fostered inform what I write below. The colleagues I credit here work at our university’s selective main campus; my goal is to articulate the implications of AI/human collaborations for FYC at our open-access regional campus, where underserved student populations (particularly working-class, working-poor, and first-generation students, as well as students with documented and undocumented learning disabilities) are more likely to enroll.

This Fit 

Without teachers’/prompters’ critical interventions, text generators like ChatGPT can reinforce FYC’s function as a site that imposes white, Western, male, and affluent ways of thinking and speaking on all college students regardless of demographic (Barnett, 2000; Bloom, 1996; Committee, 1975; DeRuiter, 1996; Kennedy et al., 2005; Young et al., 2014). Describing the ritualistic nature of FYC courses aligned with such expectations, Owens (2009) suggests FYC can be discerned as a place where students:

must “make certain gestures” as writers, accommodat[e] and appropriate specific forms of writing that in their totality comprise a password that purifies them, granting them release into the rest of the curriculum. Here the compulsory writing course, with its required research paper or grammar drills or portfolios (depending on the pedagogical orientation of the instructor), is filled with its own rites aimed at purifying the student’s discourse, cultivating a degree of rhetorical methodological hygiene. (p. 224)

So prompted, large language models represent just this sort of hygiene, one that perpetuates the exclusion of discourses unaligned with traditional standards (“the unwashed” in need of remedy), standards (plural singular) that align with the rhetorical practices of dominant/center groups (“the washed”). Generative AI, after all, draws on information developed by human beings and, as a result, reflects biases/hierarchies that shape human life (Cheuk, 2021; Ferrara, 2023). While attempts are being made to reduce generative AI’s toxic tendencies (OpenAI, 2022), studies have shown that AI’s products can display bias against women, African Americans, Muslims, senior citizens, and people with disabilities (Bianchi et al., 2023; Biddle, 2022; Cheuk, 2021; D’Agostino, 2023B; Ferrara, 2023; Rock Content Writer, 2023; Wellner & Rothman, 2020). Without (and sometimes even with) critical prompting, current text generators produce texts that reflect a sort of de facto (racist, sexist, classist) conventional wisdom (Merci, 2023; Wright & Kaus, 2023), an exclusionary worldview constructed by virtue of AI’s predictions as to what words are likely to follow one another in a given context (Wellner & Rothman, 2020).

Dependent upon common linguistic probability distributions, generative AI’s word choices and sentencing also reflect a language bias default that further normalizes the patterns practiced by dominant/center populations (MLA–CCCC, 2023). As D’Agostino (2023B) points out, these patterns evidence that “large language models are trained online, where data sets are often in standard American English. For this reason, AI outputs are often not representative of the depth and breadth of many students’ multicultural and multilingual experiences.” Without training in the use of AI, students who speak Appalachian English or an African American vernacular, for example, could uncritically adopt/submit the generic AI-generated text over voices they might otherwise consult and construct, a development that, Dumin points out, could diminish the language’s “diversity of writing and of sound” (quoted in D’Agostino, 2023B). Generative AI can instantly produce writing for students that replicates the algorithmic “gestures” to which Owens (2009) refers; these gestures are those that have come to signal students’ belonging, or fit, within higher education.

Through position statements like “Students’ Right to Their Own Language” (Committee, 1975) and through curricula that invite code switching and code meshing (Young et al., 2014), compositionists have sought ways to open the singular plural view of standards to one that is truly plural (see Fox, 1999). Nevertheless, traditional Western, white, male, affluent standards (e.g., a privileged dialect, the valuation of objectivity over the emotive, linear patterns, concepts of correctness that override language variations, documentation that signals ownership of ideas, concision, etc.) still undergird assessments of college writing. Students on the margins can feel the need to follow along, to make those gestures that signal their “purification” and, subsequently, their fit (D’Agostino, 2023B). Whatever the source of my FYC student’s final draft, the negotiations with rhetoric we’d undertaken to that point and the relevance of the topic to their own life gave way, in my view, to a voice they had come to believe was more valid than their own (see D’Agostino, 2023B). I’m spending time on this rather dark view of generative AI because I believe the students referred to my ALP courses, especially, arrive feeling urged to find their fit among predictive patterns, rather than feeling encouraged to challenge/transform standards (in terms of sentencing, organization, content, perspective) through an exercise of the students’ own diverse literacies (D’Agostino, 2023B).

Such desires for fit are not without worth: underserved student populations should very much have opportunities to develop skills relevant to the language of power (Delpit, 2006; MLA–CCCC, 2023), and alignment with accepted norms often benefits elements of my FYC students’ writing. I also believe that AI can become a means to actually get students there–to help them articulate their ideas in a manner through which they (finally) get heard in the academy. I’m not seeking a static preservation of anyone’s background or utter dismissal of existing standards but rather a curriculum that brings each into dialogue. I’m concerned that underserved student populations, desiring to appear competent and feel included, are likely to feel an attraction to the products on tap (rather than the dialogues) that generative AI presents for them, products that could come to displace the sort of grappling with meaning that my students and I often undertake prior to their submission of final drafts. At the same time, though, I have begun to consider ways that AI might help move those grapplings along, generate an even more effective arena for them than those situations that might leave underserved students groping for forms and perspectives with which center/dominant students are already well familiar.

The sort of AI-generated texts I describe above (which are not, I understand, texts that AI necessarily needs to produce) do serve, at the very least, to show underserved students what we might call “givens” of post-secondary education. “Givens” in this sense refers to the “hidden curriculum” (Giroux & Penna, 1979), that array of practices, values, and literacies that the more prepared, often more affluent, students bring to college classrooms, as well as those practices, values, and literacies that teachers often expect of their students, at least in their introductory courses. Although my FYC course considers sample papers aligned with its various assignments, many ALP students, especially, consistently struggle to materialize effective structures for their writing–at the sentence level, in regard to the overall rhetorical pattern, and at the level of content; they often do not transfer the lessons of the FYC course’s scaffolding exercises designed to help students compose finished products. A text generator, though, can in seconds show the ALP students what an argument based on their very topic and our assignment guidelines might look like, and with knowledgeable prompt-engineering, students can have a hand in developing a mentor text directly related to their project goals (Mogavi et al., 2023).

AI can manifest for underserved students a sense of “given” that the more prepared/center students already have imbibed. While an uncritical and in some cases unethical reliance on AI could further marginalize underserved students’ own literacies, gestures, interests, and concerns, large language models nevertheless help make evident (perhaps in ways a unit’s previously published, sample paper cannot) for these students rhetorical forms that academia assumes as givens. While AI’s current products are far from perfect, they represent, as Graham (2023) writes, “the kind of immediate mediocre quality and unoriginal text that’s ideal for revision . . .” (p. 167). The trick is to help underserved students not to see generative AI as a chance to impersonate a fit with these givens but rather as a way to explore students’ own positionalities and choices in regard to gestures that typically signal a fit–as a way to, Ritter (2023) might say, move beyond the fit.

Something Now to Teach 

My fall 2023 FYC class is in a computer classroom; every student has access to ChatGPT and other text generators as we conduct exercises involving AI. As I opened up ChatGPT for the first time in class, projecting everything onto the screen in front of the room, I mentioned that I had registered with a personal email address and a password I use for no other function. I told class members that I was not requiring anyone to open an account of their own and that, if they did, they should be sure not to upload any personal information they did not want out on the web, should be aware that the program would record the conversations they posted, and should know that their writing would be used by AI to enlarge its database, to learn. So far, several students have asked me to use my own account to have AI compose essays on the students’ topics and to have ChatGPT offer feedback and advice on their drafts in progress. From what I can tell, though, most of my students now have their own accounts, and class discussions and written reflections indicate to me that my students are finding ChatGPT to be a helpful resource and that they discern the difference between using it as an aid and using it to author their work for them. I’m hoping to use insights from this current term to consider how I might also revise my asynchronous version of the course in ways that situate AI as a collaborator.

As it stands now, some key questions I’m addressing and activities I have conducted in my face-to-face FYC course this semester include:

How can we effectively collaborate with AI?: Students produced a first “final draft” of their research-based argument and then had ChatGPT also write a paper for them on the same topic. In class, we explored effective ways to prompt the generator, and then students wrote a reflection on the differences/similarities they see between the paper they produced and the one produced by AI. As part of this reflection, students are projecting how their next draft might go on to be similar to or different from the one ChatGPT produced, commenting on aspects such as organization, veracity of sources, indications of bias, and rhetorical appeal. I’m hoping that these activities encourage students to see beyond any “misleading impression of greatness” (Rudolph et al., 2023, p. 350) that AI-generated texts might exhibit and be more alert to vague claims, bogus sources, and weak anecdotes that are, at this stage in AI’s development, just as likely to appear. In short, I want this exercise to enhance students’ resistance to bullshit in any writing they encounter and, hopefully, in any they might produce (Marcus & Davis, 2020). Students’ “final drafts”–the ones actually due at the end of the term–will note aspects of the text produced by AI and include a discussion of the rhetorical choices the students made in terms of what to and what not to borrow from AI. In other words, the submission of a “final draft” will also include a description of students’ collaboration with the text generator. This description is intended to, in a sense, manifest the writer who collaborates with AI–rather than the one who might delegitimize their own voice in order to signal fit.
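For readers who would rather script this comparison exercise than use the ChatGPT web interface my students used (to prepare comparison drafts ahead of an asynchronous section, for instance), a minimal sketch with OpenAI’s Python library might look like the following. The model name, guidelines, and topic are illustrative assumptions, not pieces of my actual course:

    # A minimal sketch, assuming the openai Python library (v1+) is installed
    # and OPENAI_API_KEY is set in the environment; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical stand-ins for the assignment guidelines and a student's topic.
    guidelines = ("Write a 1,500-word research-based argument that cites scholarly "
                  "sources and addresses counter-arguments.")
    topic = "expanding late-night bus service in a small midwestern city"

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You draft sample college essays for classroom comparison."},
            {"role": "user",
             "content": f"Assignment guidelines: {guidelines}\n\nTopic: {topic}\n\nWrite the essay."},
        ],
    )
    print(response.choices[0].message.content)  # the AI draft students set beside their own

Any comparable interface would do; the point is only that the AI draft arrives as a named, inspectable object a student can set beside their own.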

What can AI tell us about our own writing and the nature of feedback?: As students have been revising their papers based on the comparisons they’ve made between their initial “final drafts” and the ChatGPT productions, I’ve encouraged students to ask AI for feedback and advice on their new drafts, feedback/advice based on our grading criteria. I also suggested to students that they ask for blunt feedback and then also ask for AI to provide feedback in a gentler tone. Not only will I ask students to write reflections on the degree to which they find AI’s feedback helpful; I will also invite them to discuss/write about the rhetorical dimensions of that feedback and what they might learn from the blunt versus gentler approaches that they can use to shape feedback they provide to other class members. Students will later extend this reflection as they compare my feedback and the feedback they receive from classmates to that provided by AI, focusing once again on the content/usefulness of the feedback as well as the language through which it is conveyed. AI offers a low-stakes way for neurodivergent students who might struggle in social settings to experiment with the feedback process (D’Agostino, 2023A). At the same time, this activity’s engagement with feedback should help underserved students critically view AI’s responses rather than uncritically accept the ways of thinking, knowing, languaging that AI’s training has taught it to privilege (Cheuk, 2021).
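Scripted, the blunt/gentle contrast amounts to a single prompt variable. Here is a minimal sketch under the same assumptions as the sketch above; the grading criteria, file name, and model are hypothetical stand-ins:

    # Request the same feedback in two tones so students can compare the rhetoric
    # of the response, not just its content.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY in the environment

    criteria = "a clear thesis, logical organization, credible sources, varied sentences"
    draft = open("student_draft.txt").read()  # hypothetical file holding a student's draft

    for tone in ("blunt and direct", "gentle and encouraging"):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system",
                 "content": f"Give {tone} feedback on this first-year essay, judged against: {criteria}."},
                {"role": "user", "content": draft},
            ],
        )
        print(f"--- {tone} ---\n{response.choices[0].message.content}\n")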

What can our collaborations with AI teach us about language choices?: As the class nears submission of their finalest “final drafts,” I’m thinking about asking students to select a paragraph from that draft and have ChatGPT produce for them a new version of that paragraph, prompting AI to provide more sophisticated language, grammar, and style. Class members will then discuss/write about how AI’s sentence/language choices impact meaning and (re)shape rhetorical purposes. I’m thinking that an exercise such as this can encourage resistance to any tendency “not to think too deeply about our words” that predictive technology might instill in writers (Baron, 2023) and resistance to any tendency, as Selinger (2015) writes, to “give others more algorithm and less of ourselves.” While compositionists have sought ways to trouble the dominance of standard American English, the availability of generative AI helps make that troubling now even more intentional: We can engineer prompts in such a way as to produce various linguistic registers. We can ask for language variations (not just givens) in regard to different audiences/purposes and weigh our own word choices against whatever AI might produce; marginalized student populations, especially, can move further from viewing their own variations as deficits and grow more attuned to them as something they can bring with them and/or draw on intentionally.
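The register exercise follows the same pattern. A minimal sketch, with the registers themselves only illustrative of the variations a class might request:

    # Rewrite one paragraph across several registers so the class can weigh
    # how word and sentence choices (re)shape meaning and rhetorical purpose.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY in the environment

    paragraph = "..."  # a paragraph pasted from a student's draft

    registers = ("formal academic English", "conversational English", "concise journalistic prose")
    for register in registers:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{
                "role": "user",
                "content": f"Rewrite the following paragraph in {register}, preserving its meaning:\n\n{paragraph}",
            }],
        )
        print(f"--- {register} ---\n{response.choices[0].message.content}\n")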

Perhaps above all, these exercises indicate a shift in writing pedagogy from a post-process emphasis to one that might also be called post-product. The exercises with AI that I describe above augment post-process approaches, which highlight the recursive nature (as opposed to a linear model) of the writing process. As Graham (2023) would say, collaborations with AI reflected in these exercises add “multiple dimensions of recursion where prompt-engineering, output curation, fact-checking, and revision become an orthogonal dimension to traditional writing and learning processes” (p. 166). However, the products that AI (always already) provides now move writing instruction even further, into a post-product space, one where “final drafts” (produced by students in dialogue with AI) lead to further recursion. Even more now, “final drafts” become part of a process that leads toward more repeating, more revisiting, and more reflection, all of which exercise the writer who collaborates with AI.

Post-product activities at once centralize the role of AI as a new given and acknowledge the degree to which students now always already have access to, as I call them above, “products on tap.” The exercises draw students into practices of de-composition in ways that privilege FYC students’ encounters with academia’s givens. As McRuer (2004) writes, this practice of de-composition “[c]ontinually resist[s] a pedagogy focused on finished products” (p. 59); it resists the corporate insistence on efficiency, disrupts construction of the normative writer that text generators like ChatGPT anticipate. Such a de-composing practice can come to represent a first line of belonging for marginalized student populations: While post-process pedagogy could and should persist, what’s foregrounded in post-product pedagogy is students’ dialogic relationship with the givens, not a purification process they must undergo to achieve fit. What students generate in such post-product space manifests more as narrative accounts of their negotiations with standards and expectations. Teachers in and also beyond FYC might look for ways to develop writing assignments that foreground these negotiations, assignments that highlight students’ dynamic engagement with course content rather than only ask them to display results of that engagement in given forms now always already “on tap” (see Hanstedt, 2020; see also Macrorie, 1988).

As I make room in FYC courses for post-product space (space readily available in ALP versions, especially), my grading must shift to reflect the value of these post-product dialogues. A post-product pedagogy provides a rationale for labor-based grading. As Inoue (2022) describes this assessment practice, the quality of student writing is still at the heart of the classroom and feedback, but it no longer has bearing on final grades. Weighing heavily the efforts students display in post-product space rather than the refinement/“hygiene” of their writing products, we begin to, as Inoue (2022) would say, “cultivat[e] with our students an ecology, a place where every student, no matter where they come from or how they speak or write, can have access to the entire range of final course grades possible” (p. 3). At the same time, though, I wonder how such grading might account for the increased (but not always visible) labor that neurodivergent students undertake and how social class might determine just how much time overall a student can devote to generating reflection upon reflection in the wake of their “final” products (Carillo, 2021).

I have indeed arranged for students to compose multiple reflections on their engagement with AI, on what that engagement reveals about their writing processes, about standards, about students’ own rhetorical decision-making, and about possibilities for current and future papers. I’ve also arranged, though, for these reflections to be weighted nearly as heavily as those products traditionally valued in college classrooms (see Zerwin, 2020). In this post-product space, the time that students might otherwise devote to grappling with scaffolding exercises (in the case of many of my ALP students, with little success or, for that matter, expediency) shifts more to de-composing, to exploring the relevance of givens to what students want to say and how they want to say it, the veracity of evidence, the strength of claims. With the help of AI, post-product space moves aspects of the hidden curriculum to center stage; the traits that had been the source of exclusion for underserved students become forces with which to overtly wrangle and collaborate.

Conclusion: What Might (Not) Lie Ahead? 

Rather than devote so much time struggling through exercises that might little benefit them, underserved students can devote more of their time to reflecting on ways AI might alter their prose: They now have an available picture of what their own sentences could look like and can be exposed to a level of variety and a notion of correctness that their drafts on the same topic might not yet have reached (see Rudolph et al., 2023; MLA–CCCC, 2023). Underserved students can also have a more immediate view of the ways their essays can be organized as well as aspects of their topics to consider. The students can also note the claims and evidence in the AI version and enhance their own research skills as they investigate its veracity, all the while enriching their own understanding of the topic. Perhaps above all, students can take notice of any lifeless language and vague anecdotes that AI currently produces. The students might note these approximations of human life and discern ways their own life circumstances and representations of those circumstances can speak more vividly and convincingly to the urgency of the issue they are writing about. In all of these imagined engagements, I see students as writers who question and collaborate with AI, not people dependent upon it (see Groves, 2022), i.e., not people who submit finished products, products that fit, regardless of students’ investment in them and their relevance to students’ own lives. Future research will need to study the degree to which students’ collaborations with AI actualize these possibilities and roles for underserved students enrolled in FYC and courses that follow.

In the midst of these changes, I never want to overlook life circumstances and their weight on a student’s wellbeing, a weight certainly greater than any I might assign a grade in a FYC course. While I can open up post-product spaces, I cannot always ensure they remain open in spite of circumstances students encounter outside of school. Whatever background knowledge my FYC students can or cannot bring to our course’s writing tasks, struggles beyond school deplete the time they can devote to their education and to evidencing the labor they do invest. On top of a curriculum that weighted heavily the fit my students could present in terms of a finished product, the exigencies at least one student faced evidently increased a demand for expediency, a demand met through submission of a product as detached from their learning as their previous writings had been from the typical, expected outcomes of a college writing course. Prior to the submission of that finished product, my student had found time to produce drafts and complete exercises designed to strengthen those drafts. A collaboration with AI might shift such students’ time commitment into more productive space, help them spend less time groping to find their fit among hidden givens and more time wrestling with those givens in ways that fit students’ rhetorical needs. At this point in the term, I still cannot predict what kinds of writing the above exercises with AI will in the end produce, but considering the students who enroll in my FYC courses, especially those referred to the ALP version, I’m going to embrace the unpredictability. It’s unpredictability that these students bring with them, and it’s this unpredictability we need to value.

Questions to Guide Reflection and Discussion

  • How does generative AI influence the inclusion or exclusion of diverse linguistic and cultural identities in academic writing?
  • Reflect on the potential for generative AI to both challenge and reinforce traditional academic writing standards. How might this impact students from diverse backgrounds?
  • Discuss strategies that could help students critically engage with AI-generated texts and discern their utility and limitations in the writing process.
  • How can educators balance the use of AI in teaching writing skills with the need to preserve and encourage students’ unique voices and perspectives?
  • Explore the ethical implications of using generative AI in writing assignments, particularly for students who might already feel marginalized by standard academic expectations.

 

References

Barnett, T. (2000). Reading “whiteness” in English studies. College English, 63(1), 9-37.

Baron, N. S. (2023, January 19). How ChatGPT robs students of motivation to write and think for themselves. The Conversation. https://theconversation.com/how-chatgpt-robs-students-of-motivation-to-write-and-think-for-themselves-197875

Bianchi, F., Kalluri, P., Durmus, E., Ladhak, F., Cheng, M., Nozza, D., … & Caliskan, A. (2023, June). Easily accessible text-to-image generation amplifies demographic stereotypes at large scale. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 1493-1504).

Biddle, S. (2022, December 8). Anti-Muslim: The internet’s new favorite AI proposes torturing Iranians and surveilling mosques. The Intercept. https://theintercept.com/2022/12/08/openai-chatgpt-ai-bias-ethics/?utm_medium=email&utm_source=The%20Intercept%20Newsletter

Bloom, L. Z. (1996). Freshman composition as a middle-class enterprise. College English, 58(6), 654-675.

Carillo, E. C. (2021). The hidden inequities in labor-based contract grading. University Press of Colorado.

Cheuk, T. (2021). Can AI be racist? Color‐evasiveness in the application of machine learning to science assessments. Science Education, 105(5), 825–836. https://doi.org/10.1002/sce.21671

Committee on CCC Language Statement. (1975). Students’ right to their own language. College English, 36(6), 709-726. https://doi.org/10.2307/374965

D’Agostino, S. (2023A, January 20). AI writing detection: A losing battle worth fighting. Inside Higher Ed. https://www.insidehighered.com/news/2023/01/20/academics-work-detect-chatgpt-and-other-ai-writing

D’Agostino, S. (2023B, June 5). How AI tools both help and hinder equity in higher education. Inside Higher Ed. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/06/05/how-ai-tools-both-help-and-hinder-equity

Delpit, L. (2006). Other people’s children: Cultural conflict in the classroom. The New Press. (Original work published 1995)

DeRuiter, C. (1996). Gender issues in college composition. Teaching English in the Two-Year College, 23(1), 48-56.

Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks of bias in large language models. arXiv preprint arXiv:2304.03738. https://doi.org/10.48550/arXiv.2304.03738

Fox, T. (1999). Defending access: A critique of standards in higher education. Heinemann.

Gere, A. R. (2023). Foreword. In K. Ritter (Ed.), Beyond fitting in: Rethinking first-generation writing and literacy education (pp. ix-xi). Modern Language Association of America.

Giroux, H. A., & Penna, A. N. (1979). Social education in the classroom: The dynamics of the hidden curriculum. Theory & Research in Social Education, 7(1), 21-42.

Graham, S. S. (2023). Post-process but not post writing: Large language models and a future for composition pedagogy. Composition Studies, 51(1), 162-168. https://compositionstudiesjournal.files.wordpress.com/2023/06/graham.pdf

Groves, M. (2022, December 16). If you can’t beat GPT3, join it. Times Higher Education. https://www.timeshighereducation.com/blog/if-you-cant-beat-gpt3-join-it?utm_source=newsletter&utm_medium=email&utm_campaign=editorial-daily&spMailingID=23277094&spUserID=MTAxNzczMjY4MjM4MwS2&spJobID=2134142895&spReportId=MjEzNDE0Mjg5NQS

Hanstedt, P. (2020). What matters? InSight, 15, 9-13.

Inoue, A. B. (2022). Labor-based grading contracts: Building equity and inclusion in the compassionate writing classroom (2nd Edition). WAC Clearinghouse/University Press of Colorado. https://doi.org/10.37514/PER-B.2022.1824

Kennedy, T. M., Middleton, J. I., & Ratcliffe, K. (2005). The matter of whiteness: Or, why whiteness studies is important to rhetoric and composition studies. Rhetoric Review, 24(4), 359-373.

Macrorie, K. (1988). The I-search paper: Revised edition of searching writing. Heinemann.

Marcus, G., & Davis, E. (2020, August 22). GPT-3, bloviator: OpenAI’s language generator has no idea what it’s talking about. MIT Technology Review. https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/?utm_medium=tr_social&utm_campaign=site_visitor.unpaid.engagement&utm_source=Twitter#Echobox=1598658773

McRuer, R. (2004). Composing bodies; or, de-composition: Queer theory, disability studies, and alternative corporealities. JAC, 24(1), 47-78.

Merci, J. (2023, March 9). Why AI is an opportunity for creative writers: An optimistic outlook for the future of new AI tools like ChatGPT. Medium. https://jakemerci.medium.com/ai-is-a-conventional-wisdom-machine-this-is-an-opportunity-for-creative-writers-5cfd3e80c20e

MLA–CCCC Joint Task Force on Writing and AI. (2023). MLA–CCCC Joint Task Force on writing and AI working paper: Overview of the issues, statement of principles, and recommendations. Modern Language Association of America/Conference on College Composition and Communication.

Mogavi, R. H., et al. (2023). Exploring user perspectives on ChatGPT: Applications, perceptions, and implications for AI-integrated education. https://www.researchgate.net/publication/370948138_Exploring_User_Perspectives_on_ChatGPT_Applications_Perceptions_and_Implications_for_AI-Integrated_Education

Nolan, B. (2023, January 14). Two professors who say they caught students cheating on essays with ChatGPT explain why AI plagiarism can be hard to prove. Business Insider. https://www.businessinsider.com/chatgpt-essays-college-cheating-professors-caught-students-ai-plagiarism-2023-1?utm_source=reddit.com

OpenAI. (2022). Aligning language models to follow instructions. https://openai.com/blog/instruction-following/

Ostroff, E. (2001). Universal design: An evolving paradigm. In W. Preiser and E. Ostroff (Eds.), Universal design handbook (pp. 1.3-1.12). McGraw Hill.

Owens, D. (2009). Multitopia: Composing at the edge of the map. In D. Reichert Powell & J. P. Tassoni (Eds.), Composing other spaces (pp. 219-236). Hampton Press.

Ritter, K. (2023). Introduction: On the precipice. In K. Ritter (Ed.), Beyond fitting in: Rethinking first-generation writing and literacy education (pp. 1-23). Modern Language Association of America.

Rock Content Writer. (2023, April 19). Everything you need to know about ChatGPT bias. Rock Content. https://rockcontent.com/blog/chatgpt-bias/

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1). https://doi.org/10.37074/jalt.2023.6.1.9

Selinger, E. (2015, January 15). Will autocomplete make you too predictable? BBC Future. https://www.bbc.com/future/article/20150115-is-autocorrect-making-you-boring

Wellner, G., & Rothman, T. (2020). Feminist AI: Can we expect our AI systems to become feminist? Philosophy & Technology, 33, 191-205.

Wright, R., & Kaus, M. (2023, January 27). War machine learning [Video]. YouTube. https://www.youtube.com/watch?v=ptg92h253k8

Young, V. A., Barrett, E., Rivera, Y. Y., & Lovejoy, K. B. (2014). Other people’s English: Code-meshing, code-switching, and African American literacy. Teachers College Press.

Zerwin, S. M. (2020). Point-less: An English teacher’s guide to more meaningful grading. Heinemann.


About the author

John Paul Tassoni (he/him/his) is a professor of English at Miami University. At Miami, he has served as Director of College Composition, University Director of Liberal Education, and Co-Coordinator of the Regionals Center for Teaching and Learning. He is currently the founding editor of Journal on Centers for Teaching and Learning. His own work has appeared in journals such as College Composition and Communication, Journal of Basic Writing, Pedagogy, WPA: Writing Program Administration, and Teaching English in the Two-Year College.

License


Teaching and Generative AI Copyright © 2024 by Utah State University is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.