2 Some Ethical Considerations for Teaching and Generative AI in Higher Education

Lydia Wilkes

Abstract

The chapter discusses some ethical considerations for teaching and generative AI in higher education with a focus on large language models like ChatGPT. These considerations include generative AI’s effects on the environment and labor, its reliance on data and tendency to violate privacy, the biases encoded in it, and the digital divide. As more sophisticated generative AI is integrated into word processors, these ethical concerns will become more difficult to perceive and the ethics of using AI more difficult to parse. This chapter concludes with a note on power and justice.

Keywords: ethics, bias, environment, labor, justice

 

On warm days, OpenAI’s ChatGPT-4 gets thirsty. The supercomputer that powers it, located amid cornfields near Des Moines, Iowa, needs water to avoid overheating (O’Brien, Fingerhut, & the Associated Press, 2023). In 2023, Microsoft, which backs OpenAI, reported a 34% increase in its global water consumption between 2021 and 2022, much of it presumably due to cooling the supercomputers that power large language models (LLMs) like ChatGPT. One research team estimates that ChatGPT consumes about 16 ounces of water for every 5-50 prompts it processes (O’Brien, Fingerhut, & the Associated Press, 2023). Meanwhile, millions of people in the US, many of them Black, Latinx, or Indigenous, do not have adequate access to drinking water and water for sanitation (O’Neill, 2023), to say nothing of global water scarcity. Water consumption is but one ethical consideration that teachers, staff, and administrators in higher education must foreground as we prepare ourselves and our students to use generative AI responsibly.

Known for its secrecy, the tech industry excels at obscuring AI’s built-in biases and environmental and human impacts, a move that prevents users from developing an awareness of these problems and potentially questioning their own reliance on digital technologies, or at least using them more judiciously. The usually smooth, imperceptible operation of generative AI language models in search engines, email, word processors, and other everyday tools was disrupted by ChatGPT-3.5’s public release in November 2022, itself a matter of ethical concern for many at OpenAI (Hao & Warzel, 2023). Hart-Davidson (White, 2023) noted that when ChatGPT appeared, everyone who used it knew they were interacting with a machine. Shortly after this chapter is published in 2024, the next generation of LLMs will be embedded in word processors, and the smooth, imperceptible operation of tech will continue apace. Laquintano, Schnitzler, and Vee (2023) noted: “The environmental impact of AI, the potential for it to induce extensive job loss, the potential for it to remove thought and care from human work, will not be altogether apparent to the average user of a Google doc who clicks a ‘Help me write’ button and has the tone of their paragraph changed” (n.p.). People tend not to notice AI when it’s embedded in familiar technologies.

Since our collective awareness of writing with AI will fade quickly as new generations of generative AI replace the writing assistants already built into digital scenes of writing, professionals in higher education have a fleeting moment to take advantage of widespread public awareness of generative AI as a machine. Seizing this moment is especially important because most people harbor the false notion, promulgated by the tech industry, that machines are neutral and objective and therefore more trustworthy than flawed, subjective humans (Coeckelbergh, 2020; Crawford, 2021; Noble, 2018). Further, the tech industry has so far avoided legal liability for harm caused by its generative AI products (Center for Humane Technology, 2023). Teachers in higher education have an opportunity to promote a critical understanding of these machines as human creations that replicate or amplify biases in their algorithms and training data, on top of the environmental and human impacts of the technology itself. Promoting this critical understanding is crucial because if people continue to see writing machines as neutral and objective, the biases encoded in those machines will persist or worsen through human-machine writing.

Finally, the burden of ethical decision-making falls more on individual users now than it did with the digital technologies that have transformed society over the past fifty years (White, 2023). Whereas the internet, the world wide web, and mobile technologies were developed with government funding and regulation, AI has been developed by private companies in a regulatory vacuum. Governments have had to scramble to catch up, as evidenced by the Biden administration’s 150-page executive order proposing new safety and security standards for AI in late 2023. The length and breadth of the order reveal just how much catching up the US government had to do. Higher education in 2023 had to scramble to catch up, too, as ChatGPT surfaced longstanding concerns about academic honesty. Yet academic honesty is but the tip of the ethics iceberg, an iceberg that, unlike many real ones rapidly melting in warm polar waters, remains massive and obscured, much like the black-boxed operation of AI itself and tech companies’ evasion of responsibility for a product the Center for Humane Technology (2023) compares to nuclear weapons. As we in higher education respond to this new era in digital technology, we must center ethical considerations as part of teaching the critical AI and algorithmic literacies a college-educated person will need (White, 2023). Although frameworks for AI ethics abound, “Unlike medicine or law, AI has no formal professional governance structure or norms—no […] standard protocols for enforcing ethical practice” (Crawford, 2021, p. 224). With government regulation lagging well behind industry, the onus of ethical use is on individual users, and the onus of educating some of those individual users is on professionals in higher education.

To guide pedagogical decisions and help teachers, staff members, and administrators navigate an AI era, this chapter outlines some ethical considerations that should inform critical use of AI: use that questions AI based on knowledge about what it’s made of, how it works, who benefits from it, who suffers as a result of it, who is liable for the damage it causes, and so on. Specifically, this chapter addresses ethical considerations related to AI and the environment, labor, data, privacy, bias, and the digital divide. It concludes with a note on power and justice. Readers should note that additional ethical concerns about AI’s role in society are outside the scope of this chapter and covered elsewhere (e.g., Blackman, 2022; Coeckelbergh, 2023; Crawford, 2021).

Environment

As the beginning of this chapter showed, AI has significant environmental costs that should lead teachers to think carefully about right and wrong ways to use it as part of the everyday work of higher education. As teachers consider when, how, and to what extent they will incorporate content generators like ChatGPT (text) and DALL-E (images) into their classrooms, what boundaries they will place on student use, and what boundaries they will place on their own use of emerging tools (such as for grading student writing), environmental impacts must remain front and center because tech companies so often hide them.

Noting that tech corporations refuse to disclose exactly how much energy AI models consume, Crawford (2021) stated that “the data economy is premised on maintaining environmental ignorance” (p. 43). This environmental ignorance includes the massive scale of energy, mineral, and water consumption required to build, transport, network, and power electronic devices and the data centers and supercomputers they rely on. Crawford and Joler’s (2018) anatomy of an AI system illustrated the sweeping environmental and human impacts of the Amazon Echo personal assistant, from the rare earth elements in its components to its electricity consumption to the submarine cable infrastructure along which its data and internet travel. The airy language associated with the cloud conceals its materiality: this “backbone of the artificial intelligence industry […is] made of rocks and lithium brine and crude oil” (Crawford, 2021, p. 31). Rechargeable lithium-ion batteries draw power from electricity grids, themselves still largely powered by fossil fuels, and in turn power the laptops and mobile devices now considered essential to teaching and learning in college and in most other aspects of life in the United States, even for people who can’t afford them. As the half-dozen or so tech companies that dominate “large-scale planetary computation” (p. 20) race to train and release new language models, large and small, they accelerate the consumption of natural resources in ways often imperceptible to users. We already see this in the planned obsolescence of devices like smartphones designed to become electronic waste shipped to developing countries after only a few years of use.

Once someone has recognized the environmental harms of AI as part of a larger computational infrastructure, acting on that recognition can be quite difficult. For one, it can be tempting to dismiss the strain on the planet added by specific areas of AI (such as the natural language processing and machine learning behind generative AI like ChatGPT) as but a drop in the bucket compared to other areas of AI (such as smart weapons developed for military use) and so not that concerning. Given the efficiency imperative propelling most work in the U.S., the individual costs of not using generative AI to assist with brainstorming, drafting, reviewing, and revising texts (and images, sound, video, etc.) may seem to outweigh the environmental concerns, especially when AI is embedded in existing technologies. And although “OpenAI estimated that since 2012, the amount of compute used to train a single AI model has increased by a factor of ten every year” (Crawford, 2021, p. 43), this environmental impact may seem to be OpenAI’s problem and not one individual users should necessarily consider in their everyday use of AI. Further, it’s tempting to heed the techno-optimist argument that AI will recognize patterns in massive sets of environmental data that humans miss and hence produce better solutions for the many environmental problems all inhabitants of the Earth must endure. For someone who believes this narrative, the current scale of resource extraction and energy use may seem justified if it eventually mitigates climate change, biodiversity loss, novel chemical impacts, ocean acidification, and so on.

Teachers can guide students to develop their thinking and practice about the environmental impacts of AI from the perspectives of their disciplines, which could include engaging important ongoing debates central to those disciplines. Concrete details like those that opened this chapter on AI platforms’ water consumption can ground those discussions in contemporary evidence. That said, while a focus on the natural resource and electricity consumption of emerging platforms like ChatGPT is helpful now because of public awareness of it as a machine, this focus may have limited effects on ethical thinking going forward as LLMs are further integrated into everyday technologies. For those in higher education, this reality means that we must keep up with yet another aspect of the world that may or may not be closely related to the content we teach. It imposes yet another labor burden.

Labor

While techno-optimists marvel over the efficiency gains in the workplace made possible by AI (e.g., Microsoft, 2023), skeptics question how it will affect the very knowledge workers college produces, and critics point to the pervasive exploitation of workers across sectors in the making of AI and the worlds made by AI (Crawford, 2021). Ethical questions about labor, like those about the environment, concern the manufacture, production, training, and use of AI. And just as AI’s environmental impacts remain continuous with past environmental damage, AI’s labor impacts continue established practices of extracting the maximum value from workers by managing their time on task, almost always at the workers’ expense. For example, AI is already integrated into time management systems that use predictive algorithms to tailor shifts to customer demand in fast food, intensifying practices of labor exploitation developed in the factories of the late 19th and early 20th centuries. At the time, the “integration of workers’ bodies with machines was sufficiently thorough that early industrialists could view their employees as a raw material to be managed and controlled like any other resource” (Crawford, 2021, p. 60). This view persists today and underlies the brutal pace of labor in Amazon’s distribution warehouses, Foxconn’s iPhone assembly lines, and so on, though now employers can monitor and police employees’ micromovements with ease. Indeed, extracting the maximum value from workers has entailed surveilling them since the time of plantation overseers and Bentham’s panopticon, a task now done primarily by trained machines.

Although the automation of labor has long been associated with factory workers rather than well-educated knowledge workers, the productivity gains from automating knowledge work suggest that machines will become more central to knowledge and creative work very soon. Early research on LLMs suggested “direct performance increases from using AI, especially for writing tasks and programming, as well as for ideation and creative work. As a result, the effects of AI are expected to be higher on the most creative, highly paid, and highly educated workers” (Dell’Acqua et al., 2023, p. 3). One study of consultants at a global management consulting firm found that those who performed complex tasks with ChatGPT-4 experienced significant productivity and quality gains over those working without AI assistance, though only on tasks within the AI’s capability (Dell’Acqua et al., 2023). With AI capabilities expanding rapidly, the automation of creative, knowledge-based tasks may occur even more quickly than the automation of industrial tasks did, displacing human workers in the process and exploiting those who remain by extracting yet more labor from them in less time.

These labor issues directly affect everyone in higher education and suggest that all but the wealthiest institutions will have to transform as the shape of knowledge work changes. Text-generative AI has already led students to forgo practice in writing, which they often see as tedious and pointless. Students do this in part because, like most people, they harbor many bad ideas about writing, such as the idea that it’s possible to learn to write in general (Wardle, 2017). But as Writing Studies scholar Elizabeth Wardle notes, “There is no such thing as writing in general. Writing is always in particular” because writing always responds to the dynamics of the rhetorical situation, such as audience, purpose, and context (2017, p. 30). Since most students do not think of writing as responding to a specific rhetorical situation and often focus on sentence-level concerns like grammar and punctuation, missing opportunities to practice writing does not seem like a loss to them. Further, students are sensitive to the efficiency imperative that drives most workplaces, and for many students, though certainly not all, that means looking for efficiency gains in their studying and writing. Why toil away learning to do something new and difficult, students may increasingly wonder, whether it’s coding in a complex programming language, grappling with a sophisticated mathematical concept, or sustaining a nuanced policy argument, when automated assistants eliminate the need to train one’s mind in ways that may soon seem esoteric? And why do so when the world of work for which higher education prepares them relies on machines for writing, idea generation, and other kinds of creative work? Teachers, staff, and administrators will need compelling answers and updated practices as generative AI grows more sophisticated.

One compelling answer to the question about learning something new and difficult, at least where writing is concerned, comes from Alexander (2023), who argued that students have “a right to write in an AI era.” For Alexander, writing means more than the effective communication of accurate information; it is also “one of our most powerful and widespread tools for enabling complex thinking and one of the most powerful—and embodied—sites of interconnection between self and other, the individual and the community” (n.p.). The right to write includes “the right to learn about, explore, develop, and experience a wide range of [students’] own human, critical, and creative capacities to write.” This answer refuses the efficiency imperative behind the narrow idea of writing as merely communicating information in favor of a broader, humanistic notion of writing. Although many students, heeding the culture in which they live, may prioritize efficiency gains over a right to creative, connected, critical, and reflective exploration of language in an environment at least somewhat shielded from the urgency of industry, Alexander’s research suggests otherwise: many alumni say they miss “the opportunities they had to sit with a piece of writing, even a long research project, and use their writing to reflect and explore” in college. As teachers across the curriculum decide how they want students to use writing, Alexander’s right to write provides a broader way of thinking about the value of the writing students do in higher education and a way of expressing that value to students, who may need time to appreciate it.

Finally, given the many misunderstandings about teaching, the low social value assigned to it, and the ongoing attacks on higher education, it is reasonable to expect many aspects of teaching to be automated once colleges and universities believe the efficiency gains are worthwhile and the risks manageable, continuing the trend of disinvesting in teaching even at teaching-focused institutions. While both institutions and teachers may embrace more automation in grading, ethical issues persist there, particularly with grading writing, a process that already reinforces racial and ableist biases (e.g., Dolmage, 2017; Inoue, 2015) and interferes with learning (Kohn, 2013). But if machines can be trusted to impart content knowledge and grade student work, then those who understand teaching to be about only these two activities, and wish it to be confined to them, may support the automation of teaching with a few poorly paid adjuncts left to supervise the machines and field student complaints. This nightmare scenario would seem more distant if not for the cuts at West Virginia University caused by its own president’s financial mismanagement; widespread attacks on academic freedom; and the wholesale takeover of Florida’s New College, once a bastion of freethinking, by right-wing ideologues (Associated Press, 2023; Quinn, 2023).

Data & Privacy

Despite copyright protections for intellectual property, data available on the open internet is often treated as free for anyone to use for any purpose, an idea dating to the internet’s early decades. Crawford (2021) noted: “There are gigantic datasets full of people’s selfies, of hand gestures, of people driving cars, of babies crying, […] all to improve algorithms that perform such functions as facial recognition, language prediction, and object detection” (p. 16). In addition to scraping data from the internet, OpenAI and other generative AI companies have allegedly copied tens of thousands of copyrighted images and books without permission or payment and used them to train ChatGPT and other generative AI models (Brittain, 2023; David, 2023; Vincent, 2023). Higher education is not exempt from such practices: faculty at Duke University and the University of Colorado – Colorado Springs placed cameras on campus and harvested photos and video of unwitting students, staff, and faculty to train facial recognition systems (Crawford, 2021). These practices rest on the belief that data is no longer personal or individually owned but abstract and nonhuman, a resource for extraction, as seen in the industry expression “data is the new oil” (p. 113). From there, “those who have the right data signals gain advantages like discounted insurance and higher standing across markets,” while the poorest experience “the most harmful forms of data surveillance and extraction,” thereby worsening existing economic divides (p. 114).

The tech industry may not question the ethics of its data mining practices or the premise that data can be taken without user knowledge or consent, but teachers must remain keenly aware that students own the texts they produce, including their drafts. Asking students to input these texts into AI like ChatGPT for help with grammar and proofreading, for example, means asking them to train AI with their intellectual property, since ChatGPT trains on user input unless the user opts out. Since most students have never heard of intellectual property and may be only vaguely aware of all the data routinely collected from them by their devices, teachers bear a heightened ethical responsibility both to help students understand these concepts and to build in safeguards for students whose understanding isn’t yet developed.

Although opting out of training ChatGPT takes only a few clicks, as text-generative AI is further integrated into word processors, opting out may become more difficult or impossible. And as language models become personalized, able to mimic the style of the human who routinely interacts with them, opting out of training may become undesirable, since training the machine to mimic the user is the goal of some language models. Such a personalized model might avoid some of the issues models like ChatGPT face, which reproduce the Standard Edited American English of their training data, but it would pose new questions about the hybridization of human-machine writing, as well as academic honesty and intellectual property. As this integration happens, teachers will need to reconsider their beliefs about writing, including the notion that teaching a subject like biology does not mean teaching students how to write like a biologist. Teachers have always needed to teach the kinds of writing and writerly moves they want student writers to make, and this remains true when students use LLMs as part of their writing process.

Further, teachers must take care with their own practice, recognizing that even people with good intentions make ethically dubious decisions. For example, student materials like résumés can be fed into LLMs to generate a letter of recommendation in response to a late request or when faculty have many students to recommend. Seeking efficiency with a task like this, a task intended to help the student, breaches the student’s right to privacy by sharing their data with a company that depends on a steady stream of data to train its machines.

Finally, AI in the form of facial recognition poses a tremendous ethical dilemma around privacy as it is used to surveil students while they take tests, a practice accelerated by the pandemic. While academic misconduct is certainly a problem, teachers and administrators must consider the harms of surveillance and false accusations of cheating as well as how these harms damage the student-teacher relationship and hence the endeavor of education. First, academic institutions and individual teachers using this surveillance technology position students as criminals violating the academic honor code whether they cheat or not and “bring surveillance methods used by law enforcement, employers and domestic abusers into an academic setting” (Hill, 2022). Second, facial recognition’s roots in eugenics and its benchmarking on mugshots make it ethically dubious as a technology (Crawford, 2021), and the facial recognition software some proctoring companies use, Amazon’s Rekognition, has been accused of bias against Black women (Hill, 2022). Third, false accusations have lasting effects on students, especially BIPOC students, leading them to discipline their faces and eyes to avoid future accusations. This example illustrates the reality that “any push for creating AI solutions cannot help but also be a push to get more data about more people, thereby encouraging invasions of privacy” (Blackman, 2022, p. 12). It surfaces the ethical question: What kind of society do we want to live in? More specifically, what kind of educational system do we want to have?

Bias

A well-known problem in machine learning, bias can exist throughout the training process and appear without any intention on the part of those doing the training (Coeckelbergh, 2020). In fact, bias can occur even when scientists and engineers attempt to avoid it (Blackman, 2022; Coeckelbergh, 2020). Bias can enter at any stage of training an AI: in the selection of the datasets used for training, in the data within those datasets, in the algorithm itself, in the dataset given to the trained algorithm, in the people who make the algorithm, and in society (Coeckelbergh, 2020). In general, any biases that exist in training data will be replicated or amplified by technologies built on that data. For example, when Amazon tried to train an algorithm to choose job candidates using the résumés of existing employees, the fact that those employees were overwhelmingly men led the algorithm to recommend men and downgrade women, a pattern that continued even after explicit references to gender were removed; Amazon scrapped the algorithm (Crawford, 2021). More harmful still, a ProPublica investigation found that software used to assess the risk of a criminal defendant committing another crime was not only unreliable at predicting reoffending but clearly racist: it falsely identified Black defendants as future criminals at nearly twice the rate of white defendants and inaccurately assigned low risk more often to white defendants than to Black defendants (Angwin, Larson, Mattu, & Kirchner, 2016). Judges use these scores in their sentencing, sometimes throwing out plea deals in favor of algorithmic judgment. Returning to education, anti-Blackness is so prevalent in school systems that one prominent scholar, Tiera Tanksley, examines anti-Blackness as “‘the default setting’ of technological hardware, software, and infrastructure” (Online Experiences & Engagement Task Force, 2023). When bias comes to light, companies often refine the process of machine learning to produce results with more statistical parity, but they do not question the underlying structures and logics of classification on which machines operate (Coeckelbergh, 2020; Crawford, 2021). This happens because bias is perceived as “a bug to be fixed rather than a feature of classification itself” (Crawford, 2021, p. 130).
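To make the proxy problem behind the Amazon example concrete, the short Python sketch below trains a toy classifier on synthetic, deliberately biased hiring data. The data, feature names, and model choice are illustrative assumptions for demonstration only, not a reconstruction of Amazon’s system; the point is simply that removing the gender column does not remove the bias when a correlated proxy feature remains.

```python
# A minimal, synthetic sketch of proxy bias in a hiring classifier.
# The data, feature names, and model choice are illustrative assumptions,
# not a reconstruction of any company's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hidden attribute: 0 = man, 1 = woman (never shown to the model).
gender = rng.integers(0, 2, n)

# Proxy feature correlated with gender (e.g., membership in a
# historically men-dominated club listed on a résumé).
proxy = (rng.random(n) < np.where(gender == 0, 0.7, 0.1)).astype(int)

# A genuinely job-relevant skill score, independent of gender.
skill = rng.normal(0, 1, n)

# Historical hiring labels favored men regardless of skill, so the
# "ground truth" the model learns from is already biased.
hired = ((skill > 0) & (gender == 0)) | ((skill > 1.5) & (gender == 1))

# Train on features only -- gender has been "removed".
X = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X, hired)

scores = model.predict_proba(X)[:, 1]
print("mean predicted hire probability, men:  ", scores[gender == 0].mean())
print("mean predicted hire probability, women:", scores[gender == 1].mean())
# The gap persists because the proxy feature lets the model reconstruct
# the biased historical pattern without ever seeing gender.
```

Running the sketch shows a large gap in predicted hiring probability between men and women even though gender never appears in the training features, which is the pattern Crawford describes.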

In machine learning, classifiers are algorithms that sort input into classes, such as spam or not spam in email. The harm occurs when humans are classified and given or denied access to opportunities, wealth, and knowledge by computational processes that encode existing biases. Illustrating the politics inherent in classification, Crawford (2021) showed how classifiers encode bias in a dataset widely used for facial recognition: “Gender is a forced binary: either zero for male or one for female. Second, race is categorized into five classes: White, Black, Asian, Indian, and Others” (p. 144). Despite the harmful politics behind these classifications, they are “widely used across many human-classifying training sets and have been part of the AI production pipelines for years” (p. 145). Thus, bias becomes embedded in the deep structure of machine learning, and machines replicate or exacerbate those biases. Further, despite the current consensus that gender, race, ability, and other aspects of identity are socially constructed rather than biologically fixed, classification in AI assumes that they are “natural, fixed, and detectable biological categories” (p. 144). Perhaps most damaging of all, the myth that machines are objective and neutral means that experts like judges and doctors may heed machine recommendations over those of other people, even over their own judgment, without recognizing that machines are biased (Coeckelbergh, 2020). Teachers fall into this trap, too. As institutions invest more in AI-assisted grading, especially LLMs for grading writing, teachers face the same dilemma: deferring to machine-generated grades saves time, while reviewing each one erodes the promised efficiency gains.
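To see what a “forced binary” looks like in practice, here is a minimal, hypothetical Python sketch of the kind of rigid label schema Crawford describes. The category names follow her account of that facial recognition dataset; the encoding function and its behavior are illustrative assumptions, not that dataset’s actual tooling.

```python
# A hypothetical sketch of a rigid classification schema like the one
# Crawford describes; the encoding function is illustrative only.
GENDER_LABELS = {"male": 0, "female": 1}                 # forced binary
RACE_LABELS = {"White": 0, "Black": 1, "Asian": 2,
               "Indian": 3, "Others": 4}                 # five fixed classes

def encode_person(gender: str, race: str) -> tuple[int, int]:
    """Map a person onto the schema's grid.

    Anyone whose gender falls outside the binary raises an error, and
    any racial identity not in the list collapses into "Others" -- the
    schema literally cannot represent them otherwise.
    """
    gender_id = GENDER_LABELS[gender]        # KeyError for anyone else
    race_id = RACE_LABELS.get(race, RACE_LABELS["Others"])
    return gender_id, race_id

print(encode_person("female", "Asian"))              # (1, 2)
print(encode_person("female", "Pacific Islander"))   # (1, 4): filed under "Others"
```

Whatever a downstream model learns, it learns within this grid; identities the schema cannot express are erased or misfiled before training even begins.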

Turning to writing, text generators like ChatGPT were trained on a narrow range of language that reflects the demographics of the people whose writing makes up their training datasets. Today’s LLMs rest on a substrate of natural language processing, including speech recognition systems trained on large corpora of text since the 1980s (Crawford, 2021). Classic datasets come from a 1969 federal antitrust lawsuit against IBM and the fraud investigations into Enron, both of which reflect the demographics of employees at those companies (largely white, wealthy, able, heterosexual, and cisgender men from the United States) and their socially and culturally conditioned patterns of language use: Standard Edited American English. More recent datasets layer on top of those classic datasets and draw from the same demographics. Generative AI has been trained on Wikipedia articles for years because of the abundance of factual information that is easy to scrape, so much so that, as one observer put it, “Without Wikipedia, generative A.I. wouldn’t exist” (Gertner, 2023). As a result, much of ChatGPT’s output resembles writing on Wikipedia, where stylistic values like clarity, precision, and concision take precedence. More to the point, the linguistic preferences of Wikipedia’s authors and editors, who skew overwhelmingly male and tend to be in their mid-20s or retired (Wikipedia, 2023), have been trained into generative AI, thereby perpetuating only that variety of English as the linguistic standard.

Two professional organizations representing teachers of writing, language, and literature noted that “Students may face increased linguistic injustice because LLMs promote an uncritical normative reproduction of standardized English usage that aligns with dominant racial and economic power structures” (MLA-CCCC, 2023, p. 7). This linguistic injustice occurs for many reasons, including the fact that language remains one of the last aspects of a person’s identity that can be openly used as a basis for discrimination (Baker-Bell, 2020). Although a professional organization of English teachers declared in 1974 that students have a right to their own language, that right has rarely been respected, as people who speak nonstandard varieties of English and languages other than English continue to be punished for their language. Many teachers have welcomed the correct Standard Edited American English that ChatGPT outputs despite the “flattening of distinctive linguistic, literary, and analytical approaches” it entails (MLA-CCCC, 2023, p. 6). Yet teachers could also take this opportunity to question their own bias toward Standard English and the harms reproduced by that bias regardless of teacher intent, an appropriate initial response to the demand for Black linguistic justice (Baker-Bell, 2020; Conference on College Composition and Communication, 2020).

Digital Divide

Generative AI adds a new layer to the digital divide, an equity issue concerning who has access to digital technologies and the skills to use them. The pandemic revealed the lingering digital divide in higher education as students lost access to campus computer labs and high-speed internet, which some could not get at home. For example, one in five students at two Big Ten schools lacked access to the digital technology needed to participate in online learning, with Black, Latinx, and rural students disproportionately affected; the cost of broadband access was the main barrier (Wood, 2021). This lack of access is compounded by gaps in digital skills in K-12 tied to a school’s socioeconomic status (Ruecker, 2022), which influences who goes to which colleges and who doesn’t go at all. Schools serving minoritized students of lower socioeconomic status tend to focus students’ technology use on remedial drills, while schools serving privileged students of higher socioeconomic status typically “position students as creative users of technology” (p. 3), thereby cultivating the multiliteracies students need for success in a digital age (Selber, 2004). Teacher attitudes in K-12 shape this gap as well: in a comparison of urban, suburban, and rural schools, teachers in urban schools were the most skeptical of the effectiveness of digital technology, such as Google Docs, and the least likely to use it (Ruecker, 2022). Despite gains in access to digital technologies, skills gaps have persisted over the past two decades and affect who goes to college and how prepared they are to use the many digital tools required in their classes. As LLMs add another tool to the list, these patterns of disadvantage seem likely to continue or worsen.

Additionally, although some LLMs are available to the public without cost, access to more advanced generative AI models already requires payment. Hence, “[s]tudents may have unequal access to the most elite tools since some students and institutions will be able to purchase more sophisticated versions of the technologies, which may replicate societal inequalities” (MLA-CCCC, 2023, p. 7). Considering how significantly teachers’ attitudes toward technology’s effectiveness affect their willingness to use it in the classroom, and hence their students’ access to digital skills, discussions about LLMs among colleagues and within programs, departments, other academic units, and professional organizations should address the digital divide, with an emphasis on skills. And while undergraduates are most often the focus, graduate students experience the digital divide, too; their supervisors should consider how to prepare those with skill gaps, whether by directing them to existing support or by collaborating to create new support.

Conclusion: Power and Justice

As much of this chapter suggests, ethical concerns about AI adjoin broader issues of power and justice. Presently, the priorities of tech companies, themselves beholden to shareholders and venture capitalists, center on profit and technical progress and rely on perpetuating myths about AI’s neutrality and efficiency gains. Another myth these companies rely on is the idea that AI is inevitable. Crawford (2021) called for a “politics of refusal” of this supposed inevitability, one that those in higher education should reflect on: “Refusal requires rejecting the idea that the same tools that serve capital, militaries, and police are also fit to transform schools, hospitals, cities, and ecologies” (p. 227). Teachers can make this idea available to students as one response to the proliferation of generative AI. Many students, sensitive to the incessant demands of screens and the toxicity of some online cultures, may welcome the notion of refusal and come to consider the critique of power that undergirds it. The ethical considerations in this chapter are theirs as much as they are teachers’.

At the very least, justice for marginalized people and for the environment offers a frame for deciding when and how AI should be used (Crawford, 2021). People have rejected oppressive systems for millennia in favor of more just societies, though that rejection is more difficult when we are collectively stuck in a single global system of power (Graeber & Wengrow, 2021). Crucially, “calls for data protection, labor rights, climate justice, and racial equity should be heard together. When these interconnected movements for justice inform how we understand artificial intelligence, different conceptions of planetary politics become possible” (Crawford, 2021, p. 18). Higher education remains a place to cultivate thought and action against oppressive systems, and carefully considering the ethical dimensions of generative AI, and of AI more broadly, can be one conduit toward demanding justice.

 

Questions to Guide Reflection and Discussion

  • Discuss the ethical implications of AI’s environmental impact, particularly its water usage, in the context of global resource scarcity.
  • How can educators navigate the challenges and responsibilities of using AI tools in teaching, considering their hidden biases and potential for privacy violations?
  • Reflect on the role of academic institutions in fostering an understanding of AI’s labor implications, especially in terms of job displacement and exploitation.
  • Consider the ethical considerations of data privacy and intellectual property when students and educators use AI in academic settings.
  • How should higher education address the digital divide in access to AI technologies and literacy?

 

References

Alexander, J. (2023, Nov. 22). Students’ right to write. Inside Higher Ed. https://www.insidehighered.com/opinion/views/2023/11/22/students-have-right-write-ai-era-opinion

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Associated Press. (2023, Mar. 30). Anatomy of a political takeover at Florida public college. https://apnews.com/article/desantis-new-college-florida-woke-timeline-5a5bcd78230ddd2a1adb8021fea8a755

Baker-Bell, A. (2020). Linguistic justice: Black language, literacy, identity, and pedagogy. Routledge.

Blackman, R. (2022). Ethical machines: Your concise guide to totally unbiased, transparent, respectful AI. Harvard Business Review Press.

Brittain, B. (2023, Nov. 21). OpenAI, Microsoft hit with new author copyright lawsuit over AI training. Reuters. https://www.reuters.com/legal/openai-microsoft-hit-with-new-author-copyright-lawsuit-over-ai-training-2023-11-21/

The Center for Humane Technology. (2023, Mar. 9). The AI dilemma. https://www.youtube.com/watch?v=xoVJKj8lcNQ

Coeckelbergh, M. (2020). AI ethics. Essential knowledge series. MIT Press.

Conference on College Composition and Communication. (2020). This ain’t another statement! This is a DEMAND for Black linguistic justice! https://cccc.ncte.org/cccc/demand-for-black-linguistic-justice

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Crawford, K., & Joler, V. (2018). Anatomy of an AI system. https://anatomyof.ai/

David, E. (2023, Sept. 20). George R.R. Martin and other authors sue OpenAI for copyright infringement. The Verge. https://www.theverge.com/2023/9/20/23882140/george-r-r-martin-lawsuit-openai-copyright-infringement

Dell’Acqua, F., et al. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321

Dolmage, J.T. (2017). Academic ableism: Disability and higher education. University of Michigan Press.

Gertner, J. (2023, July 18). Wikipedia’s moment of truth. New York Times Magazine. https://www.nytimes.com/2023/07/18/magazine/wikipedia-ai-chatgpt.html

Graeber, D., & Wengrow, D. (2021). The dawn of everything: A new history of humanity. Macmillan.

Hao, K., & Warzel, C. (2023, Nov. 19). Inside the chaos at OpenAI. The Atlantic. https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/

Hill, K. (2022, May 27). Accused of cheating by an algorithm, and a professor she had never met. The New York Times. https://www.nytimes.com/2022/05/27/technology/college-students-cheating-software-honorlock.html

Inoue, A.B. (2015). Antiracist writing assessment ecologies: Teaching and assessing writing for a socially just future. The WAC Clearinghouse; Parlor Press. https://doi.org/10.37514/PER-B.2015.0698

Kohn, A. (2013). The case against grades. Counterpoints, 451, 143-153. https://www.jstor.org/stable/42982088

Laquintano, T., Schnitzler, C. & Vee, A. (2023). Introduction to teaching with text generation technologies. In A. Vee, T. Laquintano, & C. Schnitzler (Eds.), TextGenEd: Teaching with Text Generation Technologies. The WAC Clearinghouse. https://wac.colostate.edu/repository/collections/textgened/

Microsoft. (2023). What can Copilot’s earliest users teach us about generative AI at work? https://www.microsoft.com/en-us/worklab/work-trend-index/copilots-earliest-users-teach-us-about-generative-ai-at-work

MLA-CCCC Joint Task Force on Writing and AI. (2023). MLA-CCCC joint task force on writing and AI working paper: Overview of the issues, statement of principles, and recommendations. https://hcommons.org/app/uploads/sites/1003160/2023/07/MLA-CCCC-Joint-Task-Force-on-Writing-and-AI-Working-Paper-1.pdf

Noble, S.U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

O’Brien, M., Fingerhut, H., & the Associated Press. (2023, Sept. 9). A.I. tools fueled a 34% spike in Microsoft’s water consumption, and one city with its data centers is concerned about the effect on residential supply. Fortune. https://fortune.com/2023/09/09/ai-chatgpt-usage-fuels-spike-in-microsoft-water-consumption/

O’Neill, R. (2023, Mar. 22). Addressing a growing water crisis in the U.S. CDC Foundation. https://www.cdcfoundation.org/blog/addressing-growing-water-crisis-us

Online Experiences & Engagement Task Force. (2023, Mar. 6). OEE scholar interview series: Dr. Tiera Tanksley. https://myacpa.org/oee-scholar-interview-series-dr-tiera-tanksley/

Quinn, R. (2023, Sept. 15). Despite national pushback, West Virginia will cut faculty, programs. Inside Higher Ed. https://www.insidehighered.com/news/faculty-issues/shared-governance/2023/09/15/despite-national-pushback-wvu-will-cut-faculty

Ruecker, T. (2022). Digital divides in access and use in literacy instruction in rural high schools. Computers and Composition, 64. https://doi.org/10.1016/j.compcom.2022.102709

Selber, S.A. (2004). Multiliteracies for a digital age. Southern Illinois University Press.

Vincent, J. (2023, Jan. 17). Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content. The Verge. https://www.theverge.com/2023/1/17/23558516/ai-art-copyright-stable-diffusion-getty-images-lawsuit

Wardle, E. (2017). You can learn to write in general. In C.E. Ball & D.M. Loewe (Eds.), Bad Ideas about Writing. West Virginia University Libraries/Digital Publishing Institute.

White, R. (2023, Nov. 16). A conversation on artificial intelligence and how it’s impacting our lives. In MSU Today. WKAR Public Media. https://www.wkar.org/show/msu-today-with-russ-white/2023-11-16/a-conversation-on-artificial-intelligence-and-how-its-impacting-our-lives

Wikipedia:Wikipedians. (2023, Nov. 11). In Wikipedia. https://en.wikipedia.org/wiki/Wikipedia:Wikipedians

Wood, S. (2021, Nov. 10). How colleges are bridging the digital divide. US News and World Report. https://www.usnews.com/education/best-colleges/articles/how-colleges-are-bridging-the-digital-divide


About the author

Lydia Wilkes (she/they) is an assistant professor and writing program administrator at Auburn University. Her publications include the coedited collections Rhetoric and Guns and Toward More Sustainable Metaphors of Writing Program Administration. Among many other things, she coedits the Proceedings of the Annual Computers and Writing Conference, where she sometimes sees submissions on AI.

License


Teaching and Generative AI Copyright © 2024 by Utah State University is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.