Subject archive for "bias"

Data Science

Justified Algorithmic Forgiveness?

Last week, Paco Nathan referenced Julia Angwin’s recent Strata keynote that covered algorithmic bias. This Domino Data Science Field Note dives a bit deeper into some of the publicly available research regarding algorithmic accountability and forgiveness, specifically around a proprietary black box model used to predict the risk of recidivism, or whether someone will “relapse into criminal behavior”.

By Domino · 14 min read

Data Science

Themes and Conferences per Pacoid, Episode 2

Paco Nathan's column covers themes of data science for accountability, how reinforcement learning challenges assumptions, and surprises within AI and economics.

By Paco Nathan · 30 min read

Data Science

Trust in LIME: Yes, No, Maybe So? 

In this Domino Data Science Field Note, we briefly discuss LIME (Local Interpretable Model-Agnostic Explanations), an algorithm and framework for generating explanations that may help data scientists, machine learning researchers, and engineers decide whether to trust the predictions of any classifier, including seemingly “black box” models.

By Ann Spencer · 7 min read

Data Science

Make Machine Learning Interpretability More Rigorous

This Domino Data Science Field Note covers a proposed definition of machine learning interpretability, why interpretability matters, and the arguments for a rigorous evaluation of interpretability. Insights are drawn from Finale Doshi-Velez’s talk, “A Roadmap for the Rigorous Science of Interpretability”, as well as the paper “Towards a Rigorous Science of Interpretable Machine Learning”, which she co-authored with Been Kim. Doshi-Velez is an assistant professor of computer science at the Harvard Paulson School of Engineering, and Kim is a research scientist at Google Brain.

By Ann Spencer · 8 min read

Data Science

Learn from the Reproducibility Crisis in Science

Key highlights from Clare Gollnick’s talk, “The limits of inference: what data scientists can learn from the reproducibility crisis in science”, are covered in this Domino Data Science Field Note. The full video is available for viewing here.

By Domino · 5 min read

Data Science

On Ingesting Kate Crawford’s “The Trouble with Bias”

Kate Crawford discussed bias at a recent City Arts and Lectures talk in San Francisco, and a recording of the discussion will be broadcast on May 6th on KQED and local affiliates. Members of Domino were in the live audience for the City Arts talk. This Domino Data Science Field Note provides insights excerpted from Crawford’s City Arts talk and from her NIPS keynote, for additional breadth, depth, and context for our data science blog readers. This blog post covers Crawford’s research, including bias as a socio-technical challenge, the implications when systems are trained on and ingest biased data, model interpretability, and recommendations for addressing bias.

By Domino · 11 min read
