As budding data scientists and analytics professionals, we’ve immersed ourselves in the IAA curriculum to develop both our technical and soft skills. We’ve learned that conducting analytics is meaningless without a way to gain insights or inform decisions. In practice, analytics requires a blend of technical understanding and accurate communication of results. Cathy O’Neil’s Weapons of Math Destruction provides a great introduction to the ethical considerations of analytic practice, and highlights how many analytic implementations neglect ethics altogether.
O’Neil, a data scientist by training, focuses on several algorithms that have been implemented to inform or automate decisions, and explains why some of these implementations have severe negative consequences. O’Neil dubs these dangerous algorithms “weapons of math destruction” (WMDs), and they are marked by three traits: they are opaque, so those affected cannot see how the model produced its output; they have pernicious feedback loops, so they reinforce any discriminatory attributes; and they scale, so they can affect large numbers of people. While I believe there are gaps in O’Neil’s consideration of the ethical impact of data science, overall she provides a useful framework for reflecting on the everyday implications of our work.
The book is accessible to a wide audience, from the highly technical to the layperson. It can surely have an impact on anyone who wants to understand which algorithms influence their life, and it introduces a healthy dose of skepticism toward those model implementations. The book is especially important for those of us who will be creating the models, deciding what inputs go into them, and understanding how the outputs are used. Much as we have been practicing our ability to translate business problems into analytics and model results into actionable insights, this book provides a framework for thinking about the ethics of implementation.
An example that O’Neil touches on multiple times throughout the book is a recidivism model used to predict whether an incarcerated individual will end up back in prison after release. The model takes as input several factors that serve as proxies for race and class, and judges routinely use its results in sentencing. Effectively, this model helps decide how long people will be sentenced based on their race and class: a complete WMD. O’Neil goes on to discuss how the feedback reinforces the prejudice built into the original algorithm. As data scientists, part of our role is understanding the inputs to a model and the decisions that model informs. We should also be aware of the feedback the model uses to learn and whether, as in the recidivism model, that feedback reinforces prejudice.
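The pernicious feedback loop is easier to see with a toy simulation. The sketch below is my own invented illustration, not an example from the book: two groups have the identical underlying rate of some behavior, but the model is trained on *recorded* outcomes, and what gets recorded depends on where the model directs scrutiny. The model ends up confirming its own training data.

```python
# Toy sketch of a pernicious feedback loop (all numbers invented).
# Two neighborhoods have the SAME true rate of offenses, but offenses
# are only recorded where enforcement looks -- and the model itself
# decides where enforcement looks.

TRUE_RATE = 0.3                   # identical in both neighborhoods
patrols = {"A": 0.5, "B": 1.5}    # historical bias: B is watched 3x as closely

for _ in range(5):
    # You only record what you observe.
    recorded = {g: TRUE_RATE * p for g, p in patrols.items()}
    # The model retrains on the recorded data, then reallocates
    # scrutiny in proportion to each group's recorded rate.
    total = sum(recorded.values())
    patrols = {g: 2.0 * r / total for g, r in recorded.items()}

print(recorded)
# Group B shows 3x the recorded rate of group A in every round, even
# though the true rates are identical: the model's output shapes the
# very data it will be retrained on.
```

The loop never corrects itself because the model has no access to the true rate, only to records its own decisions helped generate. That is the opacity-plus-feedback combination O’Neil warns about.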
There are two places where I think O’Neil misses the mark in classifying WMDs. The first is her categorization of pernicious feedback loops. She points to credit-scoring models that lenders use to offer credit, which disproportionately deny credit to poor and minority individuals. I contend that although the model’s output may not directly feed back into future training, the financial feedback the lending institution receives is very useful. If a lender incorrectly denies credit to minority applicants, it loses revenue to lenders with more accurate models. Models that offer loans without racial prejudice will serve additional customers and realize additional revenue, so the financial feedback rewards the non-prejudiced model. O’Neil may object to the model regardless of its accuracy, simply because lending institutions fail to offer credit to minorities. Other policy tools can encourage lending to minorities, but less accurate algorithms are not an effective answer.
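The market argument above can be made concrete with back-of-the-envelope arithmetic. The numbers below are entirely invented for illustration: a pool of creditworthy applicants who repay at the same rate regardless of group, one lender whose biased model denies most of them, and one whose accurate model approves them.

```python
# Invented numbers, purely illustrative: the financial cost of a model
# that wrongly denies creditworthy applicants.

def expected_profit(approved, repay_rate,
                    profit_on_repay=200, loss_on_default=1000):
    """Expected profit from a pool of approved loans."""
    return approved * (repay_rate * profit_on_repay
                       - (1 - repay_rate) * loss_on_default)

# 1,000 creditworthy minority applicants, 90% of whom repay.
applicants, repay_rate = 1000, 0.9

# The biased model denies 60% of them; the accurate model denies 10%.
biased_lender = expected_profit(approved=400, repay_rate=repay_rate)
accurate_lender = expected_profit(approved=900, repay_rate=repay_rate)

print(biased_lender, accurate_lender)
# The accurate lender earns more than twice as much from the same pool,
# so the market itself penalizes the prejudiced model.
```

This is the sense in which financial results act as a feedback signal even when model predictions are never compared against loan outcomes directly.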
The second qualm I have is an error of omission: every model and implementation outlined in the book also has benefits. More accurate credit-risk models allow banks to offer lower borrowing rates to creditworthy applicants, so more people can borrow and put the credit to productive use. O’Neil criticizes teacher evaluation models, and rightfully so, but does not mention any of their benefits: chiefly, that teachers are held accountable, and that poor teachers are more likely to be fired or dissuaded from entering the profession altogether. Our role as data scientists shouldn’t be to dismiss any model with potential negative impacts, but to understand what those impacts are and inform decision makers how they relate to the benefits.
Weapons of Math Destruction is a great primer for us as upcoming analytics professionals. Cathy O’Neil gives us a framework for thinking about the real people who feel the negative effects of predictive models, which I think is as important as the business impact on which we tend to focus. I challenge readers to question whether each of the models actually fits the WMD label. I also implore readers to consider what benefits a model can impart, and to thoughtfully weigh them against any destructive effects. These broad considerations, along with the technical and communication skills gained at the IAA, will enable us to become proficient and responsible data science practitioners.
Columnist: Evan Wimpey