The Uncertainty and AI Group (Un-AI) is an interdisciplinary research team at UW-Madison studying artificial intelligence with the tools of the humanities and social sciences. The group was launched in 2024 through funding from the College of Letters & Science and the Institute for Research in the Humanities seed program, “The Humanities Respond to Global Challenges.”
Activities
Un-AI holds weekly meetings to discuss works-in-progress and new research across the humanities and descriptive social sciences. For information on the group, contact Annette Zimmermann and/or Devin Kennedy.
About the Group
Addressing the rise of AI is among the most critical challenges facing contemporary societies. Among other things, AI systems threaten to exacerbate social biases; replace and deskill workers; and create deceptive duplicates of human speech, writing, and visual culture. At the same time, new generative AI products open the possibility of expanding the tools, reach, and capabilities of people, supplying, like the calculator or the word processor before them, new means for the refinement and extension of human reasoning and creativity.
Building from emerging cross-disciplinary collaborations on campus, this project endeavors to establish lasting institutional and intellectual linkages to grapple with AI holistically. Drawing from philosophy, media studies, history, the information school, and law, this team approaches the issues AI raises by starting from its most disquieting feature: the sense of uncertainty surrounding its future trajectory, present reality, and historical contingencies. Born out of a sector that puts a premium on disruption, AI is closely connected to the concept of uncertainty. ‘Moving fast and breaking things’, the motto that continues to drive much of the contemporary technology sector, has encouraged a public perception of AI that oscillates between awe at unforeseen technological capabilities and widespread concern about their potentially harmful and unpredictable social impacts. On the one hand, uncertainty linked to AI innovation seems to have straightforwardly negative repercussions: ongoing AI design and deployment do in fact place humanity’s future on a more uncertain trajectory, and there is widespread and intractable disagreement about how best to regulate AI and to mitigate its potential harms and distribute its potential benefits. On the other hand, a deliberate appreciation of uncertainty’s positive potential for productive critique can help combat a misguided, overly constricting sense of AI-induced certainty, and thus usefully highlight previously uncharted paths for innovative humanistic inquiry.
Interrogating the tensions created by the interplay of AI and uncertainty opens a scholarly approach that anticipates, rather than merely reacts to, the rapidly evolving harms and risks linked to the AI innovation du jour. To do so, our group builds synergies across the humanities and descriptive social sciences, where the study of AI has tended to be balkanized into disciplinary silos.
As the technology sector races to develop and refine AI systems for deployment across a range of fields and industries, we desperately need humanistic research, informed by insights from fields like history, philosophy, information studies, law, and media and communication studies, to unpack the uncertainty unleashed by AI technologies.
Group Members
Clinton Castro
Clinton Castro is an assistant professor in the Information School and an affiliate professor in the Department of Philosophy. He specializes in information ethics and fair machine learning. His recent open access book—Kantian Ethics and the Attention Economy (co-authored with Timothy Aylsworth)—argues that we have moral duties, both to ourselves and to others, to protect our autonomy from the threat posed by digital distraction. He is currently working on a series of essays on the foundations of fair machine learning and is excited to be putting these ideas into practice through NIH-funded work with a team of addiction researchers on a project that sets out to understand bias in algorithms used to treat opioid use disorder.
Devin Kennedy (co-lead)
Devin Kennedy is Assistant Professor of History and the Evelyn and Herbert Howe Bascom Professor of Integrated Liberal Studies. His research centers on the history of computer science and digital technology. His first book, Coding Capital: Computing Power in the Postwar US Economy, situates the history of computer science within developments in capitalism in the US, tracing how the manufacturing and financial industries molded technology and scientific research towards their needs, and how, in turn, computing supported the emergence of a financialized economy. Kennedy’s next projects concern aspects of the history of academic computer science, including a history of computational complexity theory and a study of concepts of time in computer design.
Jeremy Morris
Jeremy Morris is a Professor in the Department of Communication Arts at the University of Wisconsin-Madison. His research focuses on how emerging technologies like software, apps, and artificial intelligence are shaping creative and media industries like music and podcasting. He is the author of two monographs: Podcasting (2024) and Selling Digital Music, Formatting Culture (2015). He has also co-edited two collections on digital media: Saving New Sounds: Podcast Preservation and Historiography with Eric Hoyt and Appified: Culture in the Age of Apps with Sarah Murray. His research is published in journals such as New Media and Society, Social Media + Society, Popular Communication, and others. He is also the founder of PodcastRE.org, a large researchable database for studying and preserving podcasting cultures.
Alan Rubel
Alan Rubel is professor and current director of the UW Information School and former director of the Center for Law, Society & Justice. He is a member of the department of Medical History and Bioethics and a faculty affiliate of the UW law school and Philosophy Department. Professor Rubel’s research interests include information ethics, law, and policy; privacy and surveillance; and bioethics.
Annette Zimmermann (co-lead)
Annette Zimmermann’s research interests cover a range of topics within the philosophy of AI and machine learning, political philosophy, moral philosophy, social and moral epistemology, philosophy of law, and philosophy of science. At UW-Madison, Professor Zimmermann is a member of the University’s interdisciplinary cluster in the ethics of computing, data, and information. In addition, Professor Zimmermann was a 2020-2023 Technology and Human Rights Fellow at the Carr Center for Human Rights Policy at Harvard University.