
Online Event: Tina Eliassi-Rad, "Just Machine Learning"

Tina Eliassi-Rad
online via Zoom
Special Information:
The public can access this event at https://zoom.us/j/96760966159

The Department of Philosophy & Institute for Artificial Intelligence present an online lecture by Tina Eliassi-Rad, Associate Professor of Computer Science at Northeastern University in Boston, speaking on "Just Machine Learning" on Thursday, April 23, at 4 p.m.

Tina's work has been applied to personalized search on the World Wide Web, statistical indices of large-scale scientific simulation data, fraud detection, mobile ad targeting, cyber situational awareness, and ethics in machine learning. Her algorithms have been incorporated into systems used by government and industry. She currently serves as program co-chair for the International Conference on Computational Social Science (IC2S2), the premier conference on computational social science. Tina received an Outstanding Mentor Award from the Office of Science at the US Department of Energy in 2010 and became a Fellow of the ISI Foundation in Turin, Italy, in 2019.

Abstract: Tom Mitchell, in his 1997 Machine Learning textbook, defined the well-posed learning problem as follows: “A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.” In this talk, I will discuss current tasks, experiences, and performance measures as they pertain to the use of machine learning in life-altering situations such as pretrial dispositions and loan approvals.

The most popular task thus far has been risk assessment. For example, Jack’s risk of defaulting on a loan is 8, Jill’s is 2; Ed’s risk of recidivism is 9, Peter’s is 1. We know this task definition comes with impossibility results (e.g., see Kleinberg et al. 2016, Chouldechova 2016). I will highlight new findings in terms of these impossibility results. In addition, most human decision-makers seem to use risk estimates for efficiency purposes and not to make “fairer” decisions. The task of risk assessment seems to enable efficiency instead of normative qualities such as fairness. I will present an alternative task definition whose goal is to provide more context to the human decision-maker.

The problems surrounding experience have received the most attention. Joy Buolamwini (MIT Media Lab) refers to this as the “under-sampled majority” problem. The majority of the population is non-white, non-male; however, white males are overrepresented in the training data. Not being properly represented in the training data comes at a cost to the under-sampled majority when machine learning algorithms are used to aid human decision-makers. There are many well-documented incidents here; for example, facial recognition systems perform poorly on dark-skinned people.

In terms of performance measures, there is a variety of definitions, from group fairness to individual fairness, and from anti-classification to classification parity to calibration. I will discuss our null model for fairness and demonstrate how to use deviations from this null model to measure favoritism and prejudice.
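For readers who want to see the impossibility results in miniature, below is a minimal sketch (not from the talk; the data, the two-group split, and the 0.5 decision threshold are all hypothetical). It constructs a risk score that is calibrated by design for two groups with different base rates, then shows that the same score yields unequal false positive and false negative rates across the groups, the tension formalized in Kleinberg et al. 2016 and Chouldechova 2016.

import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

# Hypothetical population: two groups whose true base rates differ.
# The impossibility results bite exactly when base rates differ.
group = rng.integers(0, 2, size=n)            # group label: 0 or 1
base_rate = np.where(group == 0, 0.2, 0.4)    # P(outcome) by group

# A risk score that is calibrated by construction: each person's score
# is their true outcome probability (noise around the group base rate).
score = np.clip(rng.normal(loc=base_rate, scale=0.15), 0.01, 0.99)
outcome = rng.random(n) < score               # realized binary outcome

predicted = score >= 0.5                      # hypothetical decision threshold

for g in (0, 1):
    m = group == g
    # Calibration-style check: positive predictive value within the group.
    ppv = outcome[m & predicted].mean()
    # Classification-parity checks: error rates within the group.
    fpr = predicted[m & ~outcome].mean()      # false positive rate
    fnr = (~predicted)[m & outcome].mean()    # false negative rate
    print(f"group {g}: base rate={outcome[m].mean():.2f}  "
          f"PPV={ppv:.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")

Running this toy example prints roughly comparable positive predictive values for the two groups but a noticeably higher false positive rate (and lower false negative rate) for the higher-base-rate group; forcing the error rates to match would in turn break calibration. That is the trade-off in toy form, and it concerns only the published impossibility results, not the null-model approach the talk itself presents.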

Support Philosophy at UGA

The Department of Philosophy appreciates your financial support. Your gift is important to us and helps support critical opportunities for students and faculty alike, including lectures, travel support, and any number of educational events that augment the classroom experience. Click here to learn more.

EVERY DOLLAR CONTRIBUTED TO THE DEPARTMENT HAS A DIRECT IMPACT ON OUR STUDENTS AND FACULTY.