Uncertainty and AI Group

The Uncertainty and AI Group (Un-AI) is an interdisciplinary research team at UW-Madison studying artificial intelligence with the tools of the humanities and social sciences. The group was launched in 2024 through funding from the College of Letters & Science and the Institute for Research in the Humanities seed program, “The Humanities Respond to Global Challenges.”

In the summer of 2025, Un-AI was awarded a competitive grant from the National Endowment for the Humanities to open the new Center for Humanistic Inquiry into AI and Uncertainty, an autonomous research center administratively housed within the Institute for Research in the Humanities. The Center is pleased to announce its first annual competition for up to four two-semester Faculty Fellowships for UW–Madison faculty, and one academic-year fellowship for an external scholar, each of whom is working on a project that explores the theme of “AI and Knowledge” from a humanistic (or descriptive social-scientific) perspective. Fellows will undertake full-time research, participate in the Center’s intellectual community (including weekly meetings with the other fellows), collaborate on the publication and dissemination of the Center’s annual whitepaper, and contribute to the Center’s other activities.

For more information and details on how to apply to CHIAIU fellowships, please see below. All correspondence relating to the new Center should be directed to the Center’s Director, Jeremy Morris, at jwmorris2@wisc.edu. 

Activities

Un-AI holds weekly meetings to discuss works-in-progress and new research across the humanities and descriptive social sciences. For information on the group, contact Annette Zimmermann or Devin Kennedy.

Un-AI Events

There are no upcoming events.

CHIAIU Fellowships

The Center for Humanistic Inquiry into AI and Uncertainty offers up to four fellowships for UW–Madison faculty and one fellowship for external faculty annually. These fellowships are generously funded by a grant from the National Endowment for the Humanities along with funding from campus partners.


UW-Madison Faculty Fellowships in AI and Knowledge

The Center for Humanistic Inquiry into AI and Uncertainty is pleased to announce its first annual competition for up to four two-semester Faculty Fellowships for UW–Madison faculty who are working on a project that explores the theme of “AI and Knowledge” from a humanistic (or descriptive social-scientific) perspective. These fellowships will be granted for two semesters: spring 2026 and fall 2026. Thanks to an NEH grant and the funding it provides toward buy-out of teaching, Faculty Fellows receive their regular salary and $2,000 in flexible research funding and are released from teaching duties for spring 2026 and, pending confirmation of UW–Madison’s teaching policies, fall 2026 so that they may undertake full-time research, participate in the Center’s intellectual community (including weekly meetings with the other fellows), collaborate on the publication and dissemination of the Center’s annual whitepaper, and contribute to the Center’s other activities. Please note that the fellowship may not be deferred for any reason.

Thanks to the generous support of the National Endowment for the Humanities alongside robust institutional support from the College of Letters & Science, the IRH, the Office of the Vice Chancellor for Research, the Graduate School, the Data Science Institute, the Center for the Humanities, and the Vice Provost for Teaching and Learning, these fellowships are prestigious appointments and represent one of the ways the University of Wisconsin–Madison seeks to strengthen opportunities for concentrated humanities research on the questions of AI and its uncertain future.

Individual faculty members who are on the tenure track or tenured at UW–Madison are encouraged to apply. The Center seeks the most exciting research projects from candidates who can contribute to and benefit from the Center’s mission.


The application cycle for 2026 fellowships is now closed. Applications were due on Friday, October 17, 2025. The final notification of the awards will be sent by the week of November 1, 2025.


Applications must be submitted through the “application form” link to Interfolio above. For help using Interfolio, please refer to the link below.

AI and Knowledge Fellowship

The Center for Humanistic Inquiry into AI and Uncertainty is pleased to announce its first annual competition for one external fellowship for a scholar who is working on a project that explores the theme of “AI and Knowledge” from a humanistic (or descriptive social-scientific) perspective. Joining a team of three to four internal fellows studying AI and uncertainty at UW-Madison, the external fellow will undertake full-time research, participate in the Center’s intellectual community (including weekly meetings with the other fellows), collaborate on the publication and dissemination of the Center’s annual whitepaper, and contribute to the Center’s other activities.

Thanks to an NEH AI Humanities research grant, external fellows will be in residence for the full academic year and will receive a $60,000 stipend, plus health benefits (if accepting the award through UW–Madison payroll), office space, and access to university facilities, including libraries. Fellows may extend their residency through the following summer on a non-stipendiary basis. However, the fellowship may not be deferred for any reason.


The application for 2026–2027 fellowships is now closed. Applications were due on Saturday, January 17, 2026. The final notification of the awards will be sent by mid-March.


Applications must be submitted through the “application form” link to Interfolio above. For help using Interfolio, or with issues with letters of recommendation or other files, please refer to the documents below.

Graduate Fellowship in Uncertainty and AI

The Center for Humanistic Inquiry into AI and Uncertainty is pleased to announce its first annual competition for one two-semester Graduate Student Fellowship for UW–Madison graduate students who are working on a project that explores the idea of “Uncertainty and AI” from a humanistic (or descriptive social-scientific) perspective. This fellowship will be granted for the 2026–2027 academic year. The Graduate Fellow will receive their regular salary, in place of their usual TA/PA/GA position, so that they may undertake full-time research, participate in the Center’s intellectual community (including weekly meetings with the other fellows), collaborate on the publication and dissemination of the Center’s annual whitepaper, and contribute to the Center’s other activities.

Thanks to the generous support of the National Endowment for the Humanities and institutional support from the College of Letters & Science, the Institute for Research in the Humanities, the Office of the Vice Chancellor for Research, the Graduate School, the Data Science Institute, the Center for the Humanities, and the Vice Provost for Teaching and Learning, this fellowship provides a two-semester stipend ($30,614), plus tuition remission, segregated fees, and graduate student health insurance benefits. Shared office space will be provided. Graduate Fellows are expected to participate fully in the intellectual life of the Center by attending the weekly meetings and by working from their office space during the semester(s) that they are funded.


The application for 2026–2027 fellowships is now open. Applications are due on Friday, March 27, 2026. The final notification of the awards will be sent by April 10.

About the Group

Addressing the rise of AI is among the most critical challenges facing contemporary societies. Among other things, AI systems threaten to exacerbate social biases; replace and deskill workers; and create deceptive duplicates of human speech, writing, and visual culture. At the same time, new generative AI products open the possibility of expanding the tools, reach, and capabilities of people, supplying, like the calculator or the word processor before them, new means for the refinement and extension of human reasoning and creativity.

Building from emerging cross-disciplinary collaborations on campus, this project endeavors to establish lasting institutional and intellectual linkages to grapple with AI holistically. Drawing from philosophy, media studies, history, the information school, and law, the team approaches the issues AI raises by starting from its most disquieting feature: the sense of uncertainty surrounding its future trajectory, present reality, and historical contingencies. Born out of a sector that puts a premium on disruption, AI is closely connected to the concept of uncertainty. “Move fast and break things,” the motto that continues to drive much of the contemporary technology sector, has encouraged a public perception of AI that oscillates between awe at unforeseen technological capabilities and widespread concern about their potentially harmful and unpredictable social impacts. On the one hand, uncertainty linked to AI innovation seems to have straightforwardly negative repercussions: ongoing AI design and deployment do in fact place humanity’s future on a more uncertain trajectory, and there is widespread and intractable disagreement about how best to regulate AI and mitigate its potential harms. On the other hand, a deliberate appreciation of uncertainty’s positive potential for productive critique can help combat a misguided, overly constricting sense of AI-induced certainty, and thus usefully highlight previously uncharted paths for innovative humanistic inquiry.

Interrogating the tensions created by the interplay of AI and uncertainty opens a scholarly approach that anticipates rather than merely reacts to the rapidly evolving harms and risks linked to the AI innovation du jour. To do so, our group builds necessary synergies across the humanities and descriptive social sciences, where the study of AI has tended to be siloed.

As the technology sector races to develop and refine AI systems for deployment across a range of fields and industries, we desperately need humanistic research informed by insights from fields like history, philosophy, information studies, law, and media and communication studies to unpack the uncertainty unleashed by AI technologies.

Group Members

Clinton Castro

Clinton Castro is an assistant professor in the Information School and an affiliate professor in the Department of Philosophy. He specializes in information ethics and fair machine learning. His recent open access book—Kantian Ethics and the Attention Economy (co-authored with Timothy Aylsworth)—argues that we have moral duties, both to ourselves and to others, to protect our autonomy from the threat posed by digital distraction. He is currently working on a series of essays on the foundations of fair machine learning and is excited to be putting these ideas into practice through NIH-funded work with a team of addiction researchers on a project that sets out to understand bias in algorithms used to treat opioid use disorder.

Devin Kennedy (co-lead)

Devin Kennedy is Assistant Professor of History and the Evelyn and Herbert Howe Bascom Professor of Integrated Liberal Studies. His research centers on the history of computer science and digital technology. His first book, Coding Capital: Computing Power in the Postwar US Economy, situates the history of computer science within developments in capitalism in the US, tracing how the manufacturing and financial industries molded technology and scientific research toward their needs, and how, in turn, computing supported the emergence of a financialized economy. Kennedy’s next projects concern aspects of the history of academic computer science, including a history of computational complexity theory and a study of concepts of time in computer design.

Jeremy Morris

Jeremy Morris is a Professor in the Department of Communication Arts at the University of Wisconsin-Madison. His research focuses on how emerging technologies like software, apps, and artificial intelligence are shaping creative and media industries like music and podcasting. He is the author of two monographs: Podcasting (2024) and Selling Digital Music, Formatting Culture (2015). He has also co-edited two collections on digital media: Saving New Sounds: Podcast Preservation and Historiography with Eric Hoyt and Appified: Culture in the Age of Apps with Sarah Murray. His research is published in journals such as New Media and Society, Social Media + Society, and Popular Communication. He is also the founder of PodcastRE.org, a large researchable database for studying and preserving podcasting cultures.

Alan Rubel

Alan Rubel is professor and current director of the UW Information School and former director of the Center for Law, Society & Justice. He is a member of the department of Medical History and Bioethics and a faculty affiliate of the UW law school and Philosophy Department. Professor Rubel’s research interests include information ethics, law, and policy; privacy and surveillance; and bioethics.

Annette Zimmermann (co-lead)

Annette Zimmermann’s research interests cover a range of topics within the philosophy of AI and machine learning, political philosophy, moral philosophy, social and moral epistemology, philosophy of law, and philosophy of science. At UW-Madison, Professor Zimmermann is a member of the University’s interdisciplinary cluster in the ethics of computing, data, and information. In addition, Professor Zimmermann was a 2020–2023 Technology and Human Rights Fellow at the Carr Center for Human Rights Policy at Harvard University.
