Thursday, March 23, 2023, 4pm
Peabody Hall, Room 115

John Basl
Philosophy, Northeastern University

Special Information: An online link to this talk is available upon request; contact piers@uga.edu for more information.

In this paper, we defend what we call the Interpretability Thesis, which states that, in many contexts, decision-makers are morally obligated to avoid basing their decisions about how to treat decision-subjects on the outputs of non-interpretable ("black box") algorithmic decision systems. Others have defended this thesis, typically by arguing that we have duties of transparency to decision-subjects which require us to make certain information available to them. This approach, however, has been met with skepticism: skeptics worry about the grounds of these duties of transparency and are concerned that they hold algorithmic decision systems to higher standards than human decision systems, which also fail to meet duties of transparency. We provide an alternative defense of the Interpretability Thesis grounded in a different set of duties to decision-subjects. We argue that decision-makers have duties of due consideration to decision-subjects: duties governing how decision-makers form beliefs about decision-subjects, which decision rules they may permissibly deploy, and which capacities they must exercise in making decisions. After articulating and defending these duties of due consideration, we argue that black box systems often serve as an obstacle to satisfying them, and that the skeptical responses are unjustified.

John Basl is an associate professor of philosophy at Northeastern University and Associate Director of Northeastern's Ethics Institute, where he leads AI and data ethics initiatives. He works primarily in moral philosophy and applied ethics. He has written on various issues in the ethics of AI, environmental ethics, and moral status. His work includes the book "The Death of the Ethic of Life."