Thursday, June 20, 2019

GAB Forum: Reducing Bias in Algorithms

This month's GAB Forum featured Miriam McKinney (Research Data Analyst, JHU Center for Government Excellence) and Andrew Nicklin (Futurist At-Large, JHU Centers for Civic Impact), who gave a talk titled "Artificial intelligence algorithms manifest our own biases: Reduce harm by mitigating risks."

Ms. McKinney opened the discussion by asking participants about their experiences with data and algorithms. The audience agreed that, because people are biased, data are biased, which inevitably means that algorithms are biased. But this doesn't mean that we should never use algorithms. According to Ms. McKinney, "Your algorithms are biased. What are you going to do about it?"

A useful starting point for identifying and addressing bias is the Ethics and Algorithms Toolkit, developed by researchers at JHU and other universities and civic tech groups. The purpose of the toolkit is to evaluate an algorithm for potential biases. By working through the toolkit, government officials can pinpoint areas of concern with respect to an algorithm's implementation. For example, by using the toolkit, officials might come to realize that their data fail to capture a key segment of the population.
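
As a loose illustration of the kind of check the toolkit prompts, the sketch below (hypothetical, and not drawn from the toolkit itself) flags groups that appear far less often in a dataset than population figures would suggest. The function name, field names, and numbers are all assumptions made for the example.

```python
# Hypothetical illustration (not part of the Ethics and Algorithms Toolkit itself):
# compare the group makeup of a dataset against census-style reference
# proportions to flag groups the data fail to capture adequately.
from collections import Counter

def flag_underrepresented(records, group_key, reference_shares, tolerance=0.5):
    """Flag groups whose share of `records` falls below `tolerance` times
    their expected share in `reference_shares` (e.g., census proportions)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if observed < tolerance * expected:
            flags[group] = {"observed": observed, "expected": expected}
    return flags

# Example: a service-request dataset where one neighborhood is barely represented.
records = [{"neighborhood": "A"}] * 900 + [{"neighborhood": "B"}] * 100
reference = {"A": 0.6, "B": 0.4}  # assumed population shares
print(flag_underrepresented(records, "neighborhood", reference))
# -> {'B': {'observed': 0.1, 'expected': 0.4}}
```

A check like this only surfaces one narrow kind of problem (missing or underrepresented groups); the toolkit itself walks officials through a much broader set of questions about an algorithm's data, design, and deployment.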

As more cities, states and federal agencies adopt the toolkit, researchers will be able to better evaluate and quantify its effect on reducing unintended bias. 

You can view a recording of the event here.
