14 Indigenous Futures in Generative Artificial Intelligence: The Paradox of Participation

Rogelio E. Cardona-Rivera; J. Kaleo Alladin; Breanne K. Litts; and Melissa Tehee

Abstract

As we work toward expanding and diversifying accurate representations of Indigenous peoples in classrooms, we must also consider the role of Native people in the construction of these technologies. Indigenous communities face the following paradox in the Generative AI space: in seeking representation within Generative AI by sharing data representations of their ways of knowing and being, they lose the agency to exert their rhetorical, technological, and data sovereignty over whatever is shared into the Generative AI system. Alternatively, not participating in the Generative AI space continues to perpetuate the Western-centric biases and systemic racism built into existing algorithms.

Keywords: tribal technological sovereignty, generative artificial intelligence, culturally sustaining/revitalizing pedagogy, critical technical practice, Indigenous representation

 

How Indigenous knowledge and narratives are portrayed and by whom matters for culturally sustaining and revitalizing representations. In our work with teachers and Tribal Knowledge Holders to bring Indigenous perspectives into classrooms, we spend a lot of time considering the ethics of representation. Innovating with emerging AI technologies—themselves representational artifacts that indiscriminately model the phenomena they encode (Agre, 1997; Martens & Smith, 2020)—requires acknowledging how these technologies are themselves instantiations of cultural systems that privilege particular ways of being and knowing in the world. While Indigenous identity is not a homogeneous concept, a substantial shared value across Tribal Nations is communal self-determination, enacted through sovereignty. In particular, we call attention to rhetorical sovereignty, “the inherent right and ability of peoples to determine their own communicative needs and desires in this pursuit” (Lyons, 2000, p. 449). To maintain this sovereignty in a sociotechnical context, we must also consider “technological self-determination and sovereignty,” or the control Indigenous peoples have over the technologies they use (Winter & Boudreau, 2018, p. 46). These issues of sovereignty are critical considerations in our use of Generative AI for teaching and learning.

Artificial intelligence (AI) technology requires a precise description of three things: (1) its input (data), (2) its output (data), and (3) its internal behavior (algorithm); together, these are the AI’s knowledge representation. Generative AI is a kind of AI system that is created by identifying a class of user-desired outputs (for example, English natural language sentences) and performing statistical analyses over the outputs to identify what inputs could lead to them (for example, particular English language prompts); these analyses are anthropomorphized as “machine learning” within the AI community. With enough data, these statistical analyses become the basis of the AI’s internal behavior, so that when new and unique inputs are provided to the system, the AI computes the corresponding outputs that (statistically speaking) ought to follow from the inputs. What this architecture ultimately entails is that the internal behavior of a Generative AI is bound by the scope of the data furnished to drive its human construction; i.e., to drive the statistical analyses on which the AI’s internal behavior is based.
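To make this data-bounded architecture concrete, consider the following minimal sketch: a toy bigram text generator, written using only the Python standard library, whose “internal behavior” is nothing more than a table of transition statistics computed from its input corpus. The corpus and all names here are invented for illustration; production Generative AI systems are vastly larger, but they share the same fundamental dependence on furnished data.

```python
# A minimal sketch (a toy, not any production system) of the pipeline
# described above: the model's "algorithm" is literally statistics
# computed over the input data, and nothing more.
import random
from collections import Counter, defaultdict

def train(corpus):
    """Count word-to-next-word transitions; these counts ARE the internal behavior."""
    transitions = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for current_word, next_word in zip(words, words[1:]):
            transitions[current_word][next_word] += 1
    return transitions

def generate(transitions, prompt, max_words=10):
    """Sample statistically likely continuations; the model can only recombine what it saw."""
    output = [prompt]
    while len(output) < max_words and output[-1] in transitions:
        candidates = transitions[output[-1]]
        words, counts = zip(*candidates.items())
        output.append(random.choices(words, weights=counts)[0])
    return " ".join(output)

# An invented two-sentence corpus: the model's entire "world."
corpus = [
    "the model repeats the data",
    "the data bounds the model",
]
model = train(corpus)
print(generate(model, "the"))           # recombines fragments of the corpus
print(generate(model, "sovereignty"))   # a word absent from the corpus: generation halts immediately
```

The second call illustrates the entailment in the paragraph above: a concept never furnished as input data simply does not exist inside the system, no matter what a user asks for.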

Due, in part, to the fact that these technologies are bound by the scope of the datasets used to build them, they perpetuate systemic racism, a problem that even industry leaders cannot yet resolve (Eubanks, 2018; Mitchell et al., 2020; Noble, 2018; O’Neil, 2016). In this context, the question of Indigenous Futures in Generative Artificial Intelligence faces a critical paradox: what we term a paradox of participation. Based on how Generative AI has manifested today, and given that its construction and capability fully depend on its input data (cf. Agre, 1997), Indigenous communities interested in participating in the Generative AI space face the following paradox: in wanting to be represented by the Generative AI by furnishing input data concerning their existence (e.g., their ways of knowing and being, their values, their beliefs), they lose the agency to exert their rhetorical, technological, and data sovereignty over whatever is furnished. Alternatively, not participating in the Generative AI space perpetuates Western-centric biases of construction (Noble, 2018): the digitally-encoded systemic racism that causes deep harm to marginalized—and in particular, Indigenous—communities.
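The same data-dependence underlies the bias side of the paradox. The following hedged sketch, using an invented and deliberately skewed two-label toy corpus, shows how corpus composition propagates directly into generation frequencies: perspectives underrepresented in the input data surface proportionally rarely, and perspectives absent from it cannot surface at all.

```python
# A hedged illustration with invented toy numbers: here the "model" is
# just the empirical distribution over its training corpus, so output
# frequencies mirror corpus composition exactly.
import random
from collections import Counter

corpus = ["Western-centric text"] * 95 + ["Indigenous-authored text"] * 5
counts = Counter(corpus)
total = sum(counts.values())

# Sample 1,000 "generations" in proportion to the corpus statistics.
samples = random.choices(list(counts), weights=list(counts.values()), k=1000)
for label, n in Counter(samples).most_common():
    print(f"{label}: {n}/1000 generations (corpus share {counts[label] / total:.0%})")
```

Under this toy distribution, roughly 95% of generations reproduce the dominant perspective; and if the minority share were zero, the corresponding perspective could never appear in any output at all. This is the mechanical form of the choice the paradox forces: contribute data and cede control over it, or withhold data and be rendered invisible by the statistics.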

As we work toward expanding and diversifying accurate representations of Indigenous peoples in classrooms, we must also consider the role of Native people in the construction of these technologies, whether AI-based or not. Without this consideration, technologies continue to advance without input from Indigenous ways of being or knowing, thus further perpetuating the stereotype of Native peoples as existing only in the past. We view this as an invitation to educators to critically consider not only their use of technology but also their design of learning environments in their classrooms and beyond.

 

Questions to Guide Reflection and Discussion

  • Discuss the concept of “rhetorical sovereignty” and its importance in the context of AI development. How can Indigenous communities assert this sovereignty in the digital sphere?
  • Explore how AI can be developed to support and respect Indigenous ways of knowing without perpetuating cultural appropriation or misrepresentation. What are the risks and benefits for Indigenous communities when engaging with AI technologies?
  • How can the design of AI and its applications be informed by Indigenous epistemologies to foster more inclusive and culturally sensitive technologies?

 

References

Agre, P. E. (1997). Toward a critical technical practice: Lessons learned in trying to reform AI. In G. C. Bowker, S. L. Star, W. Turner, & L. Gasser (Eds.), Social science, technical systems and cooperative work: Beyond the great divide. Erlbaum.

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Publishing Group.

Lyons, S. R. (2000). Rhetorical sovereignty: What do American Indians want from writing? College Composition and Communication, 51(3), 447–468. https://doi.org/10.2307/358744

Martens, C., & Smith, G. (2020). Towards a critical technical practice of narrative intelligence. Proceedings of the 12th Intelligent Narrative Technologies Workshop at the 16th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment.

Mitchell, M., Baker, D., Moorosi, N., Denton, E., Hutchinson, B., Hanna, A., Gebru, T., & Morgenstern, J. (2020). Diversity and inclusion metrics in subset selection. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 117–123.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

Winter, J., & Boudreau, J. (2018). Supporting self-determined Indigenous innovations: Rethinking the digital divide in Canada. Technology Innovation Management Review, 8(2), 38–48.


About the authors

Dr. Rogelio E. Cardona-Rivera is an Assistant Professor in the Division of Games, and Adjunct Assistant Professor of Computing and Psychology at the University of Utah. At Utah, they direct the Laboratory for Quantitative Experience Design, a community of scholars who are establishing cognitive principles and developing artificial intelligence technologies to support the design of playful artifacts that help convey stories. Cardona-Rivera is the recipient of an NSF CAREER award, and has served as a Department of Energy Computational Science Graduate Fellow and National GEM Fellow. In 2017, they were recognized as a “New and Future AI Educator” by the Association for the Advancement of Artificial Intelligence (AAAI). Rogelio received the M.Sc. and Ph.D. in Computer Science with a minor in Cognitive Science at North Carolina State University, and the B.Sc. in Computer Engineering from the University of Puerto Rico at Mayagüez.

J. Kaleo Alladin is a haumana (student) to Kahu M. Kalani Souza. His work focuses on building community capacity, cohesiveness, and resilience around projects that intersect food, nature, technology, and Indigenous knowledge systems, and on spreading aloha through experiential learning.

Dr. Melissa Tehee, J.D., Ph.D., is a citizen of the Cherokee Nation. She is an associate professor at Utah State University in the Department of Psychology, Director of the American Indian Support Project, and Assistant Director of the Mentoring and Encouraging Student Academic Success program for Native American students. She earned dual degrees in Clinical Psychology, Policy, and Law (Ph.D./J.D.) with a certificate in Indigenous Peoples Law and Policy at the University of Arizona. Much of her scholarship and teaching is focused on cultural competence development and Indigenous visibility, representation, belonging, and wellness.

License


Teaching and Generative AI Copyright © 2024 by Rogelio E. Cardona-Rivera; J. Kaleo Alladin; Breanne K. Litts; and Melissa Tehee is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.