Thursday, April 25, 2024, 4:00pm
Peabody Hall, Room 115

Clinton Castro
Department of Philosophy, University of Wisconsin-Madison

Special Information: To view via Zoom, contact piers@uga.edu

We argue that students have moral reasons to refrain from using chatbots such as ChatGPT to write certain papers. We begin by showing why many putative reasons to refrain from using chatbots fail to generate compelling arguments against their use in the construction of these papers. Many of these reasons rest on implausible principles, hollowed-out conceptions of education, or impoverished conceptions of human agency. They also overextend to cases where it is permissible to rely on a machine for something that once required human cognition. We then give our account: you have a moral obligation to respect your own humanity (i.e., your capacity to set and pursue your own ends), and the process of writing a humanities paper is important for the cultivation of your humanity. We conclude by considering objections and offering replies. In the end, we argue that the moral reasons students have to refrain from letting chatbots do their writing hinge on instructors' ability to make writing assignments worthwhile. This in turn relies on instructors having the right kind of institutional support, which sheds light on the implications this duty has for administrators, legislators, and the general public.

Clinton Castro is an assistant professor in the Information School at the University of Wisconsin-Madison. He specializes in information ethics and fair machine learning. His recent philosophical projects include Kantian Ethics and the Attention Economy, co-authored with Timothy Aylsworth, which argues that we have moral duties, both to ourselves and to others, to protect our autonomy from the threat posed by digital distraction. He is also working on a series of essays on the foundations of fair machine learning and is excited to be putting these ideas into practice by working with a team of addiction researchers on an NIH-funded project that sets out to understand bias in algorithms used in treating opioid use disorder.