
On Ingesting Kate Crawford’s “The Trouble with Bias”

Domino | 2018-05-03 | 11 min read


Kate Crawford discussed bias at a recent San Francisco City Arts and Lectures talk, and a recording of the discussion will be broadcast on May 6th on KQED and local affiliates. Members of Domino were in the live audience for the City Arts talk. This Domino Data Science Field Note provides insights excerpted from Crawford’s City Arts talk, and from her NIPS keynote, for additional breadth, depth, and context for our data science blog readers. The post covers Crawford’s framing of bias as a socio-technical challenge, the implications of systems that are trained on and ingest biased data, model interpretability, and her recommendations for addressing bias.

Kate Crawford's "The Trouble with Bias"

Last year, Domino’s CEO posted a link to Kate Crawford’s “The Trouble with Bias” NIPS keynote in a company-wide industry news discussion thread. Dominoes across the entire company discuss industry movements daily, and some team members also attended Kate Crawford’s AI discussion with Indre Viskontas at City Arts and Lectures in March. This blog post covers key highlights from that talk, including the implications when systems are trained on and ingest biased data, bias as a socio-technical challenge, model interpretability (or the lack thereof), and why recognizing that data is not neutral is a key step in addressing bias. The post also references Kate Crawford’s NIPS keynote for additional context, along with three further recommendations for addressing bias: using fairness forensics, working with people outside of machine learning to create interdisciplinary systems, and thinking “harder about the ethics of classification”.

Systems Training On and Ingesting Past Biases: Implications

Early in the City Arts talk, Indre Viskontas asked Kate Crawford to provide additional context on how algorithms have impacted people. Crawford offered redlining as an example of what happens when a system is trained on and ingests data based on past biases. She discussed how redlining in the US mortgage industry in the 1930s and 1940s denied loans to people who lived in predominantly African American neighborhoods, and how

“we see an enormous transfer of wealth away from black populations to white populations…very profound…and it was a key part of the civil rights movement. And ended up in the Fair Housing Act”.

However, when systems are trained on data sets that still carry those past biases, it is possible to see echoes of them in the present day. Crawford also referenced the Bloomberg report on Amazon’s same-day delivery service, which indicates how “low-income black neighborhoods, even in the center of cities, they weren’t getting same day deliveries…it was almost just like this replication of those old red-line maps”. Crawford noted that she used to live in Boston, and in her neighborhood at the time there were plenty of services, but

“if you're on the Rock Street side, then no service at all. And it was one of those extraordinary moments where you realize that these long histories of racial discrimination become patterns of data that have been ingested by our algorithmic systems, which then become the decision makers of now and the future. So it's a way that we start to see these essential forms of discrimination become these ghosts that inhabit the infrastructures of AI today.”

This prompted the discussion to shift to whether data can ever be neutral.

Neutral Data? A Socio-Technical Problem.

Viskontas noted that the argument “a computer can’t be biased. It’s objective.” still exists. She also referenced Twitter bots that had learned to be racist from their Twitter environment and asked Crawford, what do “we have to think about in terms of training these algorithms?” Crawford responded that “what to do” is being debated at conferences like FATML, where people are discussing “how are we starting to see these patterns of bias? And how might you actually correct them”. Yet Crawford also pointed out that even defining what is “neutral” is not easy, particularly when there are “decades, if not centuries” of history behind the biases: “what are you resetting to? where is the baseline?” Crawford advocates thinking about these biases and systems as socio-technical. She also discusses bias as a socio-technical issue in her “The Trouble with Bias” keynote, where she frames bias as “more of a consistent issue with classification”.

In "The Trouble with Bias" keynote, she also provides examples of how classification is tied to social norms of a specific time or culture. Crawford cites how a 17th century thinker indicated that there were eleven genders that included women, men, gods, goddesses, inanimate objects, etc. which may seem “arbitrary and funny today”. However,

“the work of classification is always a reflection of culture and therefore it's always going to be slightly arbitrary and of its time and modern machine making decisions that fundamentally divide the world into parts and the choices that we make about where those divisions go are going to have consequences”.

As a modern-day comparison for gender classifications, Crawford also noted in the NIPS keynote that Facebook referenced fifty-six genders in 2017, compared to two genders four years earlier. Yet, in the City Arts talk, Crawford indicates that recognizing that data is not neutral is a big step in addressing bias.

Model Interpretability or Lack Thereof

In the City Arts talk, Viskontas asked follow-up questions about the potentially costly mistakes associated with algorithms trained on biased data and whether there were opportunities to rethink the data being fed to algorithms. Crawford responded by discussing model interpretability, citing a Rich Caruana study in which the researchers built two systems, one that was not interpretable and one that was more interpretable. The more interpretable system allowed the research team to see how the model had reached its conclusions. For example, Crawford indicated that they were able to find out why the algorithm was sending home a high-risk group that should have been immediately triaged into intensive care. Because the team had an interpretable system, they were able to see that, historically, this extremely high-risk group was always “triaged into intensive [care], therefore there was no data for [the algorithm] to be learning from” and hence the algorithm indicated these high-risk people should be sent home.

According to the Rich Caruana et al. paper, “Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission”:

“In machine learning often a tradeoff must be made between accuracy and intelligibility. More accurate models such as boosted trees, random forests, and neural nets usually are not intelligible, but more intelligible models such as logistic regression, naive-Bayes, and single decision trees often have significantly worse accuracy. This tradeoff sometimes limits the accuracy of models that can be applied in mission-critical applications such as healthcare where being able to understand, validate, edit, and trust a learned model is important….In the pneumonia risk prediction case study, the intelligible model uncovers surprising patterns in the data that previously had prevented complex learned models from being fielded in this domain, but because it is intelligible and modular allows these patterns to be recognized and removed.”

Crawford cited Caruana's research as a reason why people should be “more critical of what data represents” and noted that it is a “really big step…to realize that data is never neutral. Data always comes from a human history.”
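The tradeoff Caruana et al. describe, and the “high-risk group looks safe” pattern Crawford recounts, can be sketched in a few lines of Python. The example below is purely illustrative: the data is synthetic, and the feature names, effect sizes, and model choices are assumptions made for this post rather than anything taken from the paper’s actual pipeline. The point is simply that an intelligible model (here, a logistic regression) exposes a suspicious “protective” coefficient that an opaque boosted model would hide.

```python
# Illustrative sketch only: synthetic data where an "asthma" subgroup is truly
# high risk, but historical practice always admitted them to intensive care,
# so their *observed* mortality in the training data is artificially low.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

age = rng.normal(65, 12, n)              # hypothetical risk factor
asthma = rng.binomial(1, 0.15, n)        # hypothetical high-risk subgroup
# True underlying risk rises with asthma...
true_risk = 1 / (1 + np.exp(-(-6 + 0.07 * age + 1.5 * asthma)))
# ...but aggressive historical treatment suppressed observed deaths in that group.
observed_risk = np.where(asthma == 1, true_risk * 0.2, true_risk)
died = rng.binomial(1, observed_risk)

X = np.column_stack([age, asthma])
X_train, X_test, y_train, y_test = train_test_split(X, died, random_state=0)

# Opaque, often more accurate model: hard to inspect directly.
gbm = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Intelligible model: its coefficients can be read and questioned.
logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("Boosted trees test accuracy:", gbm.score(X_test, y_test))
print("Logistic reg. test accuracy:", logit.score(X_test, y_test))
print("Logistic reg. coefficients [age, asthma]:", logit.coef_[0])
# The asthma coefficient comes out negative: the model has "learned" that the
# historically treated subgroup is low risk. An intelligible model makes this
# dangerous pattern visible so it can be recognized and removed.
```

The intelligible model is not what catches the error on its own; it is what lets a human reviewer notice that the learned pattern contradicts domain knowledge, which is the point both Crawford and the paper make.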

Three Recommendations for Addressing Bias: “The Trouble with Bias” NIPS Keynote

While Crawford discussed how data is never neutral in both the City Arts and NIPS talks, it is in the NIPS talk that she puts forth three key recommendations for people working with machine learning: use fairness forensics, create interdisciplinary systems, and “think harder about the ethics of classification”. Crawford describes fairness forensics as testing our systems, “from building pre-release trials where you can see how a system is working across different populations”, as well as considering “how do we track the lifecycle of training data to actually know who built it and what the demographic skews might be.” Crawford also discusses the need to work with domain experts outside of the machine learning industry in order to create interdisciplinary systems that test and evaluate high-stakes decision-making systems. The third recommendation is to “think harder about the ethics of classification”: researchers should consider who is asking for human beings to be classified into specific groups, and why they are asking for those classifications. Crawford cites how people throughout history, like René Carmille, have faced these questions. Carmille decided to sabotage a census system during World War II, a choice that saved people from death camps.
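As a rough illustration of what a pre-release “fairness forensics” check might look like in practice, the sketch below breaks a model’s evaluation metrics out by population subgroup instead of reporting a single aggregate number. The column names, threshold, and chosen metrics are assumptions made for this example; this is not a tool Crawford prescribes, only one concrete way to “see how a system is working across different populations” before release.

```python
# Minimal sketch of a per-subgroup pre-release check. Assumes a DataFrame with
# hypothetical columns: "group" (demographic subgroup), "label" (true outcome),
# and "score" (the model's predicted probability).
import pandas as pd

def subgroup_report(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Per-group selection rate, true-positive rate, and false-positive rate."""
    df = df.assign(pred=(df["score"] >= threshold).astype(int))
    rows = []
    for group, g in df.groupby("group"):
        positives = g[g["label"] == 1]
        negatives = g[g["label"] == 0]
        rows.append({
            "group": group,
            "n": len(g),
            "selection_rate": g["pred"].mean(),
            "tpr": positives["pred"].mean() if len(positives) else float("nan"),
            "fpr": negatives["pred"].mean() if len(negatives) else float("nan"),
        })
    return pd.DataFrame(rows)

# Usage: large gaps between rows of the report (e.g., very different selection
# or false-positive rates across groups) are a signal to investigate, ideally
# with domain experts, before the system ships.
# report = subgroup_report(scored_validation_df)
# print(report)
```

Tracking the lifecycle of training data, Crawford’s other fairness-forensics suggestion, is an organizational practice rather than a code check: recording who collected a data set, when, and with what demographic skews.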

Summary

Domino team members review, ingest, and debate industry research daily. Kate Crawford’s research on bias, including the “The Trouble with Bias” NIPS keynote and the City Arts and Lectures talk, dives into how bias impacts developing, training, interpreting, and evaluating machine learning models. While the City Arts and Lectures talk was an in-person live event, the recording will be broadcast this Sunday, May 6th, on KQED and local affiliates.

Domino Data Science Field Notes provide highlights of data science research, trends, techniques, and more, that help data scientists and data science leaders accelerate their work and careers. If you are interested in your data science work being covered in this blog series, please send us an email at writeforus(at)dominodatalab(dot)com.

