Using Generative AI in the Music History Classroom

Reba Wissner

Abstract

Generative AI presents students and college teachers with both challenges and opportunities, though faculty tend to focus on the challenges: most discussion centers on modifying assignments to prevent plagiarism and concentrates on text-generating AI. However, there is an entire world of AI tools that are not limited to text. These tools can create non-text-based outputs, support information recall, and help students demonstrate the transferability of their knowledge, which makes them well suited to the arts. One discipline in which these tools can be used is music, specifically music history. AI generation tools not created specifically for music, such as chatbots, can be used in multidisciplinary contexts, including music classes, and there are also generative AI tools designed for music generation. In this chapter, I discuss how generative AI can be used in music history classrooms for information recall and for helping students learn about AI’s potentialities and limitations: students closely examine whether music-generation tools produce music in the proper musical style and whether chatbots generate accurate biographical information.

Keywords: music, listening, interaction, style, history

 

Anyone who has been paying attention in education circles is aware of the challenges and opportunities that generative AI affords students and college teaching generally. Most of the discussion on this topic has centered on assignment modification to prevent plagiarism and has concentrated on AI that generates text. However, there is an entire world of AI tools, not limited to text, that can be used in different disciplines to create non-text-based outputs. These tools are well suited to the arts, and students can use them for information recall and to demonstrate the transferability of their knowledge. One discipline in which these tools can be used is music, specifically music history.

Since the COVID-19 pandemic emerged, more and more disciplines and fields have begun to explore generative AI and its potential uses, especially in healthcare (Kucukbenli, 2022). But as musicians, we have been slow to look at its applications in the music classroom. AI generation tools that were not specifically created for music, such as chatbots, can be used in multidisciplinary contexts, including in music classes. There are also generative AI tools that can be used for music generation. Because music history courses aim to provide students with specific knowledge, students can use chatbots in class to check their knowledge in real time; they can also evaluate the accuracy of the information the chatbot provides. Additionally, they can use music-generation tools to test their knowledge of musical style. In this study in progress, I am researching how students can use generative AI in music history classrooms for information recall, and how students learn about AI’s potentialities and limitations by closely examining whether generative music AI produces music in the proper musical style and whether AI chatbots generate accurate biographical information.

The material in college music history courses, whether for majors or non-majors (the latter courses are often called Music Appreciation or Introduction to Music), focuses on two main elements: a history of music and, within that, a history of musical style. Students learn to differentiate music by the sound of a composition, by the stylistic traits of (and information about) the composers and musicians who wrote during a given era, and by the genres and instruments that were in use. Given that these are the main learning outcomes for such courses, and ones that are indicated on the syllabus, generative AI can provide students with ripe material for testing their knowledge of style and for fact-checking biographical and contextual information.

In this chapter, I discuss four generative AI tools that can be used in the music history classroom: Music FX (formerly Music LM) and Music Gen, two free online tools that produce AI-generated music from a verbal prompt; and Historical Figures and Hello History AI, two free AI chatbot apps (with optional in-app purchases) that allow a user to have a conversation with a historical figure. Both kinds of tools are well suited for helping students learn about the limitations of AI through close examination: students evaluate how Music FX produces music according to proper musical style and how Historical Figures and Hello History AI generate biographical information. I will also discuss how other disciplines, such as art and poetry, can use similar tools to achieve the same pedagogical aims.

Music and Generative AI

While the use of generative AI has only recently come to the fore—in music, at least—discussions of the possibilities of AI have been present since at least the late 1970s (Roads, 1980). Since then, discussions have abounded of how AI could be used to identify specific aspects of musical style and to render attributes common to particular composers and genres (Roads, 1980). Such AI tools were used for tasks like writing music that imitates a particular composer, or identifying the frequently used musical attributes of a particular composer, genre, or style faster and more accurately than humans could (Kaliakatsos-Papakostas et al., 2020).

However, with the advent of generative AI in the 2010s, the possibilities for AI and music expanded. There are many different options for AI-generated music, but many of them, such as Soundraw and Boomy, are not truly generative in that they store a set of short pieces of music that are mixed and matched based on the user’s selection of attributes such as tempo or instruments. Others, such as AIVA, are limited to a stored corpus in specific, pre-defined musical styles.

When we think of music and generative AI, we think of sound. But even AI generation tools that were not specifically created for music, such as chatbots, can be used in multidisciplinary contexts, including in music classes. Because music history courses aim to provide students with specific knowledge, students can use chatbots to check both their own knowledge and the validity of the information that the chatbot provides.

Music FX and Music Gen

In January 2023, Google announced a new generative AI tool for music called Music LM in a post on GitHub (Agostinelli et al., 2023). At some point between August 2023 and March 2024, it was renamed Music FX. Music FX builds on Music Information Retrieval (MIR), an area of AI research that works with music as either sound or notation, and it aims to create new music. The code had not yet been released as of January 2023, but readers could see on the GitHub post the different kinds of prompts that allow the tool to generate music, along with the resulting audio examples. These prompt types include audio generation from rich captions; long generation from a genre description; a story mode that lets the user enter descriptions and time parameters to generate a single piece that moves through different styles; text and melody conditioning; painting caption conditioning, in which the caption or description of a painting is used to generate descriptive music; and ten-second audio generation from short text prompts naming instruments, genres, places, musician experience levels, and eras.

On May 12, 2023, Google opened up its “test kitchen” for Music FX, which allows anyone who signs up to beta test it. Users can input a prompt, and Music FX will generate two versions based on that prompt. Users can then award a trophy to the better of the two generated versions, thereby helping to continue training the tool and improving the algorithm for future generations. This feedback is meant to ensure that the versions produced become more accurate for users’ purposes. As of March 2024, a disclaimer on the Music FX site read: “AI outputs may sometimes be offensive or inaccurate.” Another tool, Music Gen, is very similar to Music FX, though Music Gen is used primarily by those in the popular music industry (Zhang et al., 2023).

For the purposes of a music history class, which teaches students to identify musical style, some prompt types are more useful than others. In the music history classroom, students can generate music in the styles they are studying and determine whether or not the generated music is appropriate to that style, including identifying which elements are missing or are inappropriate to the genre; for this, audio generation from rich captions and long generation from a genre description are the most useful. When entering the portal for the Music FX AI Test Kitchen, the user is guided on how to create a successful prompt: “be very descriptive. Electronic or classical instrument [sic] sounds best”; “mention the vibe, mood or emotion you want to create”; and “certain queries that mention specific artists or include vocals will not be generated.”

This last point is especially helpful when having students identify components of musical style: in the earlier version of Music FX (when it was Music LM), you could not specify in the prompt that it generate a madrigal in the style of Claudio Monteverdi. Similarly, language requests for vocal music did not work; you might request, for example, that the generated music have sung text in Italian, but AI-generated lyrics as of this writing are poorly generated and border on gibberish. This could, however, be circumvented by using the composer’s name as an adjective (for example, Mozartean, which lends itself to a great discussion of what makes a piece of music characteristic of that composer’s style). As of March 2024, Music FX does generate music from prompts that name a composer or request music sung in a particular language (though it does not always actually generate vocal music with these requests, and when it does, the text is still usually gibberish). The limit is that it will only generate music for classical and jazz composers; popular singer-songwriters’ names do not work. This output allows students to lean on their knowledge and identify specific musical motives, harmonies, instruments, and other elements that are or are not present. If one is teaching a popular music class, the same exercise can be done with Music Gen to generate a song based on a popular artist or time period.

A helpful way to conduct an activity using these tools in class is to have students crowdsource a list of characteristics present in a particular type of music. This activity can be done as a review of material at the end of a class session, a review of assigned reading, an exam review, or an exercise in how to research musical style. It can also be used for information recall at the beginning of the class session after the material was taught. The crowdsourcing can be done using a site like Padlet or Google Jamboard, or simply by listing the attributes on the board as students call them out. We can ask students, for example: in the late eighteenth century, what tells us that we are listening to the first movement of a symphony written in Austria? How might that differ if we specify that the first movement was written in Italy during the same period? Having students make these identifications lets them know exactly what they are listening for, what might be missing, and what might be inaccurate for the style, rather than relying on a nebulous “it just sounds right.” Ideally, they should listen for form, phrase structure, cadence structure, instrumentation, performance style (such as the addition of rubato), harmony, melody, and anything else pertinent to that particular style and genre. For non-major classes, this is especially useful for having them identify instrument sounds (Music FX is particularly bad at this, as my students have noticed; for example, if you specify that the piece must be played by an oboe, it usually generates a flute, and if you request guitar, it usually generates a harp). This allows you to check students’ understanding of tone color: you can ask them whether the instrument you requested is being played and, if not, which one is sounding. This shows students that they cannot take what Music FX generates at face value and must constantly evaluate what they are hearing.
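For instructors who want to prepare example clips before class rather than generating them live in the web interface, the underlying MusicGen model can also be scripted. The sketch below is a minimal, hypothetical example that assumes Meta’s open-source audiocraft library and the facebook/musicgen-small checkpoint; the prompt text, clip length, and filenames are illustrative choices, not a prescribed workflow.

```python
# Minimal sketch (assumed setup): generating a short, style-specific clip
# with Meta's open-source MusicGen model via the audiocraft library.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a small pretrained checkpoint (illustrative choice; larger ones exist).
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=10)  # ten-second clip, as in the classroom exercise

# A style prompt built from the crowdsourced list of attributes (illustrative).
prompts = [
    "First movement of a late eighteenth-century Austrian symphony: "
    "strings and oboes, clear periodic phrases, sonata-form exposition"
]

# generate() returns one audio tensor per prompt at the model's sample rate.
wav = model.generate(prompts)
for i, clip in enumerate(wav):
    # Writes clip_0.wav (filename is illustrative), loudness-normalized.
    audio_write(f"clip_{i}", clip.cpu(), model.sample_rate, strategy="loudness")
```

Students would then evaluate the resulting clip against their crowdsourced list, exactly as they would with output generated live from the web tools.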

Though Music FX is limited to the generation of music, related text-to-image apps such as DALL-E, Dream, and Midjourney, which function similarly to Music FX and Music Gen, could be used in a similar manner for art history courses (Hutson & Lang, 2023). Students can follow the steps above to identify which characteristics of a style are present or missing in a generated image. Another option is for students to identify which artist the AI generator was intended to emulate based on characteristics in the image and what might be missing or inaccurate. Other disciplines can also benefit from similar tools like ChatGPT: students in English classes, for example, can identify the defining components that are present or lacking in a poem or short story that is meant to be in a particular style or by a particular writer (Hutson & Schnellmann, 2023).

Historical Figures and Hello History AI

Two of the greatest challenges of generative AI chatbots are the inaccuracy of their information—including fabricated citations—and the ethics concerning plagiarism (Kooli, 2023). While we can certainly ask students to vet the information that the AI chatbot provides when generating a response, we can and should also ask them to recall their own knowledge when doing so. ChatGPT tends to be the default when seeking historical information, but there are other, more interactive options that can better engage students. Historical Figures (released in January 2023) and Hello History AI (released in March 2023) are both iOS and Android apps that allow users to have conversations with historical-figure chatbots as if they were corresponding by text message. While the inclusion of some of the AI figures in the Historical Figures app—and the ways they have responded to users’ questions—has been controversial, these apps nonetheless allow students to enjoy learning about people from history (Nwanji, 2023). However, some figures are less “knowledgeable” about their pasts than others.

There are a few good ways to use these apps in music history classes—and by extension in any other discipline for which an important figure is available in the app. First, ask students to choose a person, such as a composer or musician (or anyone important in a discipline), from the time and place they are studying. Students can write a short biography of the historical figure that includes specific items; alternatively, students can have the chatbot give them information about particular pieces, genres, or historical context related to the person’s life. Teachers will want to keep the prompts short enough that students can synthesize their research and the information provided by the chatbot into a single paragraph.

After students choose their person and decide what they want to know, the first—and most important—step is that they research the chosen topic using scholarly materials or a course text. If students work from primary sources to create historical summaries, this research would likewise be the first step (Wissner, 2018). They can then pose the same questions they used to write their biographies to the app and write a second biography from the information the figure provides. Upon comparing the two biographies, students can immediately observe the strengths and limitations of AI in music research.
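If the apps are unavailable on students’ devices, a general-purpose chat model can approximate the same exercise. The sketch below is a minimal, hypothetical example assuming the OpenAI Python client and an API key; the persona prompt, model name, and questions are illustrative choices and do not reflect how Historical Figures or Hello History AI actually work.

```python
# Minimal sketch (assumed setup): role-playing a historical figure with a
# general-purpose chat model via the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Persona and questions are illustrative; students would supply their own.
persona = (
    "You are role-playing the composer Claudio Monteverdi. Answer in the first "
    "person and say so plainly when you are unsure of a fact."
)
questions = [
    "Where were you born and trained as a musician?",
    "What distinguishes your seconda pratica from the older style?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    # Students compare these answers against their researched biography.
    print(question)
    print(response.choices[0].message.content, "\n")
```

As with the apps, the point is the comparison: students check the generated answers against the biography they wrote from scholarly sources.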

Conclusion

These activities are highly replicable and can be used in any music history classroom, but they can also be used outside of music history. Specifically, Music FX and Music Gen can be used in the theory classroom to gauge student understanding of a particular musical style or chord progression. Apps similar to Music FX, such as Jukebox AI, can also generate music in specific styles and in the manner of specific performers, which makes them useful in applied-lesson contexts for testing student knowledge of a performer or of performance style and practice. These activities can also be easily used in online, hybrid, and remote learning contexts. I am currently conducting a research study in my music history classes using rubrics and surveys to gauge student learning and recall in these areas. Given what I have seen when using these tools in my classes, I expect that continued use of these AI tools will promote student recall of information on both musical style and biographical information, and that student engagement will be higher than in a typical lecture-based course.

While instructors have been grappling with AI’s existence since the spring semester of 2023, I am optimistic about its possibilities, especially in music history pedagogy. What students—and many faculty—do not realize is that some of the above activities are being used in the “real world.” For instance, some companies, academics, and programmers are having musicians judge the accuracy of AI-generated music based on their stylistic knowledge in order to improve the algorithm, thereby improving the generated products (Dervakos et al., 2021). This, then, becomes more than a classroom exercise: it is a way for students to simulate what is happening now in the field of generative AI music.

AI-generated music opens up a larger conversation with students about whether AI is endangering the music profession, a conversation I have already had with several of my classes of both music majors and non-majors. Should we fear that AI will supplant composers, songwriters, and performers, or are they “safe”? This is a complicated question, since some industries, such as video games, are already using AI-generated music in their soundtracks (Plut & Pasquier, 2020). Further, at least one literature review on the use of generative AI in the arts has demonstrated that AI- and human-generated art often cannot be differentiated (Oksanen et al., 2023). These facts often lead to subjective conversations, such as whether AI-generated music is aesthetically good, but we should favor objective conversations about whether the music meets the criteria for its style (i.e., follows the rules of music theory and genre conventions), whether classical, jazz, pop, or any other genre.

As with tools such as ChatGPT, there are, at least currently, many problems with the corpus of knowledge to which these programs have access, and this extends to music in the college classroom. Fears of plagiarism in music composition are unwarranted: chances are that any student who uses generative AI to write a composition in the style of a particular composer, with attributes such as proper voice leading and characteristic musical gestures, will not be able to do so given AI’s limitations (Kaliakatsos-Papakostas et al., 2020). Further, students could only download a MIDI file of the piece, not the notated score they would be asked to submit, and would have to transcribe the material themselves. Unlike ChatGPT, which is notorious for inventing citations, tools such as Historical Figures and Hello History cannot generate citations at all, at least not at the time of this writing (though some figures in Hello History do indicate where their information came from, often Wikipedia). Some faculty may also be hesitant to teach students to use generative AI apps for fear that, despite these exercises, students may come to rely on them as sources of fact, especially if the tools prove reliable, or mostly reliable, during the exercises.

As generative AI tools become more sophisticated, there will undoubtedly be both more and fewer pedagogical opportunities to use them in our classrooms. In music, at least, music education researchers are already finding ways to teach different aspects of music using AI (Yu et al., 2023). The thought of using AI frequently in the classroom might scare many educators because it will force them, regardless of discipline, to adapt their courses and assessments further than they think they can. For now, though, generative AI’s limitations are a strength in that they actually afford us more pedagogical advantages. And that is music to my ears.

 

Questions to Guide Reflection and Discussion

  • How can generative AI tools enhance the teaching of music history, particularly in understanding different musical styles and compositions?
  • Discuss the pedagogical benefits and limitations of using AI-generated music as a teaching tool in music history courses.
  • Reflect on the potential for AI to facilitate interactive learning experiences in music education. What specific features of AI tools are most beneficial?
  • Consider the ethical implications of using AI in music education. How should educators address concerns about authenticity and creativity?
  • How might the use of generative AI change students’ perceptions of music history and their approach to learning about music?

 

References

Agostinelli, A., Denk, T. I., Borsos, Z., Engel, J., Verzetti, M., Caillon, A., Huang, Q., Jansen, A., Roberts, A., Tagliasacchi, M., Sharifi, M., Zeghidour, N., & Frank, C. (2023). Music LM: Generating music from text. Google Github. https://google-research.github.io/seanet/musiclm/examples/

AI Test Kitchen with Google. Music FX. https://aitestkitchen.withgoogle.com/tools/music-fx

Dervakos, E., Filandrianos, G., & Stamou, G. (2021). Heuristics for evaluation of AI generated music. In 2020 25th International Conference on Pattern Recognition (ICPR) (pp. 9164-9171). https://doi.org/10.1109/ICPR48806.2021.9413310

Hutson, J., & Lang, M. (2023). Content creation or interpolation: AI generative digital art in the classroom. Metaverse, 4(1), 1-13.

Hutson, J., & Schnellmann, A. (2023). The poetry of prompts: The collaborative role of generative artificial intelligence in the creation of poetry and the anxiety of machine influence. Global Journal of Computer Science and Technology, 23(1), 1-14.

Kaliakatsos-Papakostas, M., Floros, A., & Vrahatis, M. N. (2020). Artificial intelligence methods for music generation: A review and future perspectives. In X.-S. Yang (Ed.), Nature-inspired computation and swarm intelligence: Algorithms, theory and applications (pp. 217-245). Academic Press.

Kooli, C. (2023). Chatbots in education and research: A critical examination of ethical implications and solutions. Sustainability, 15(7), 5614. https://doi.org/10.3390/su15075614

Kucukbenli, E. (2022, January 26). AI technology and its role during COVID-19. Insights@Questrom (Boston University). https://insights.bu.edu/ai-technology-and-its-role-during-covid-19/

Nwanji, N. (2023, January 27). AI app allows users to text ‘historical figures’ that range from the Notorious B.I.G to Hitler. Yahoo News. https://www.yahoo.com/now/ai-app-allows-users-text-175218536.html

Oksanen, A., Cvetkovic, A., Akin, N., Latikka, R., Bergdahl, J., Chen, Y., & Sevela, N. (2023). Artificial intelligence in fine arts: A systematic review of empirical research. Computers in Human Behavior: Artificial Humans, 1(2), 100004. https://doi.org/10.1016/j.chbah.2023.100004

Plut, C., & Pasquier, P. (2020). Generative music in video games: State of the art, challenges, and prospects. Entertainment Computing, 33, article 100337.

Roads, C. (1980). Artificial intelligence and music. Computer Music Journal, 4(2), 13-25.

Wissner, R. (2018). Using gallery walks for engagement in the music history classroom. Engaging Students: Essays in Music Pedagogy, 6, http://flipcamp.org/engagingstudents6/essays/wissner.html

Yu, X., Ma, N., Zheng, L., Wang, L., & Wang, K. (2023). Developments and applications of artificial intelligence in music education. Technologies, 11(2), 42. https://doi.org/10.3390/technologies11020042

Zhang, N., Yan, J., & Briot, J.-P. (2023). Artificial intelligence techniques for pop music creation: A real music production perspective. Information Fusion (preprint). https://dx.doi.org/10.2139/ssrn.4490102


About the author

Reba Wissner is assistant professor of musicology at Columbus State University. She has published and presented on seventeenth-century Venetian opera, Italian American immigrant musical theater, and film, video game, and television music. Dr. Wissner is committed to accessibility and helping others be better pedagogues. She holds Level 1, 2 and 3 Credentials in Universal Design for Learning from UDL-IRN. She has published on pedagogy in the Journal of Music History Pedagogy, Engaging Students: Essays in Music Pedagogy, College Music Symposium, Teaching History: A Journal of Methods and several edited collections. She was the 2022 recipient of Columbus State University’s Scholarship of Teaching and Learning Award and named a Governor’s Teaching Fellow at the Louise McBee Institute of Higher Education at the University of Georgia in 2022.

License


Teaching and Generative AI Copyright © 2024 by Utah State University is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.