The Uncertainty & AI Group: A Year in Review, and What’s Ahead

From ChatGPT to Google Gemini, artificial intelligence tools and chatbots have become a part of everyday life. Indeed, as academics in faculty positions are well aware, AI is significantly changing the landscape of learning and teaching. Given the speed with which this technology has progressed, the future not only of education but also of science, medicine, the arts, and other industries can at times feel startlingly uncertain. It is this theme—"uncertainty"—that has guided the work of the Uncertainty & AI Group (Un-AI) for the past year.

Born out of a conversation in the spring of 2024 between Associate Dean Grant Nelsestuen and a group of researchers in the humanities at UW–Madison, the aim of Un-AI is to tackle some of the most complex social concerns about the implications of AI from a humanistic perspective. Funded by the College of Letters & Science and IRH seed program “The Humanities Respond to Global Challenges,” Un-AI consists of five faculty across the humanities and social sciences: Devin Kennedy (co-lead, Assistant Professor of History), Annette Zimmermann (co-lead, Assistant Professor of Philosophy), Clinton Castro (Assistant Professor, The Information School), Jeremy Morris (Professor of Media and Cultural Studies), and Alan Rubel (Professor, The Information School).

Prior to the formation of Un-AI, these faculty—many of whom had not even met each other—were already thinking about AI and related issues, such as big data's consequences for the sciences and society, and the rise of algorithmic reasoning systems. However, they were doing so from the limited perspective of their own disciplines. Looking to expand their own methodologies and scope of research, these faculty decided to create a formal group, one that is open to all campus members, to foster interdisciplinary dialogue about artificial intelligence.

Due to the sheer quantity of research that has been, and is being, done on AI, the Un-AI group needed a way to pare down the interventions they hope to make in this field. For them, investigating the uncertainty surrounding AI was the perfect way to do so. “Discussions about AI tend to presume that its future and trajectory are, to a large degree, inevitable. But history shows us that technology—much like society, culture, politics—is a highly contingent thing, molded by a variety of forces largely uncertain in their time,” Devin and Jeremy explained. “We want to emphasize this sense of uncertainty productively, to question, study, and analyze the real historic and potential human dimensions of these technologies.”

The implications of Un-AI’s endeavors extend far beyond this theme of uncertainty. One message that the group hopes to get across is the immense value that the humanities contribute to AI research. As Devin told us, “Technical fields tend to emphasize ‘how’ AI works and ‘what’ AI can do. The humanities play a crucial role grounding AI in the ‘whether, how, why’ questions of human value. We have so much to offer. Our fields elucidate the meaning of creativity, intelligence, reasoning, and social value that are at play in the design and use of computing systems.” In particular, researchers in the humanities are uniquely positioned to provide ethical analyses of AI, especially those that concern the biases and inequalities that new technologies can perpetuate, and even create.

An overarching goal for Un-AI from the start has been to foster interdisciplinary conversation about AI not just among faculty, but among the UW–Madison community as a whole. To this end, Un-AI played a crucial role in developing a data ethics certificate for undergraduate students. Those enrolled in this certificate track will complete an impressive range of courses on topics in philosophy, computing, statistics, policy, and history. This spring, the group also co-hosted Wendy Chun of Simon Fraser University, who gave the McKay Lecture at the Center for the Humanities. Her presentation “My Mother was a KeyPunch Operator (But She Never Learned to Drive)” combined the interpretative traditions of the arts and humanities (here, personal reflection) with critical work in the data sciences to imagine new engagements with (and resistances to) our data-filled world. As part of Chun’s visit, Un-AI convened a seminar with graduate students on issues in contemporary technology and discussed strategies for building community for interdisciplinary research on AI.

Un-AI also welcomed a project assistant this year: Kenneth Diao (Graduate Student, Data Science and Human Behavior). As PA, Kenneth put together a massive repository of sources on the history of AI from various disciplinary perspectives and digitized hundreds of pages of unique archival materials from the history of computer science at UW–Madison. From these sources, Un-AI hopes to gain insights into the field’s development at the University.

As it enters its second year, the Un-AI Group plans to shift its thematic focus slightly to “responsibility and AI,” and ultimately produce a co-authored paper. This project will draw on perspectives from philosophy, history of science, and social history to explore how computer scientists and technologists have thought about their responsibility for the things they build, and how they ought to.

The Un-AI Group is eager to continue building community across scholars of varying disciplines. If you are interested in joining the group, please contact Devin Kennedy (dbkennedy@wisc.edu).