What if the algorithm is racist?

As computers shift from being helpmates that tackle the drudgery of dense calculations and data handling to smart machines informing decisions, their potential for bias is increasingly an area of concern.

The algorithms aiding such decisions are complex, their inputs myriad, and their inner workings often proprietary information of the companies that create them. These factors can leave the human waiting on bail or a bank loan in the dark.

Experts gathered at Harvard Law School to examine the potential for bias as our decision-making intelligence becomes ever more artificial. The panel, “Programming the Future of AI: Ethics, Governance, and Justice,” was held at Wasserstein Hall as part of HUBweek, a celebration of art, science, and technology sponsored by Harvard, the Massachusetts Institute of Technology, Massachusetts General Hospital, and The Boston Globe.

Christopher Griffin, research director of the Law School’s Access to Justice Lab, described pretrial detention systems that calculate a person’s risk of flight or committing another crime — particularly a violent crime — in making bail recommendations. A well-functioning system, Griffin said, can potentially reduce racial and ethnic disparities in how bail is set, as well as disparities from jurisdiction to jurisdiction.
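
At their core, many such tools reduce to a weighted checklist: a handful of factors from a person’s record are scored, summed, and compared against a cutoff. The sketch below is purely illustrative — the factor names, weights, and threshold are invented for this article and belong to no real assessment instrument — but it shows the basic shape of the calculation.

```python
# Purely illustrative sketch of a points-based pretrial risk score.
# The factor names, weights, and threshold are invented for this article
# and do not come from any real assessment instrument.

def risk_score(prior_failures_to_appear: int,
               pending_charges: int,
               prior_violent_convictions: int) -> int:
    """Sum weighted risk factors into a single score."""
    return (2 * prior_failures_to_appear
            + 1 * pending_charges
            + 3 * prior_violent_convictions)

def recommendation(score: int, threshold: int = 4) -> str:
    """Map the score to a coarse bail recommendation."""
    return "detain pending hearing" if score >= threshold else "release on bail"

score = risk_score(prior_failures_to_appear=1,
                   pending_charges=1,
                   prior_violent_convictions=0)
print(score, recommendation(score))  # 3 -> release on bail
```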

Jonathan Zittrain, the George Bemis Professor of International Law and faculty chair of the Berkman Klein Center for Internet & Society, which sponsored the event, said the danger of these systems is that the output of even a well-designed algorithm becomes biased when biased data is used as an input.

As an example, Zittrain said any arrest is at least partly a function of decisions by the arresting officer. If that officer is biased and makes an arrest that another officer might not make, then the arrest record can introduce bias into a system.
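
Zittrain’s point can be made concrete with a small, hedged simulation — the rates below are invented. Two groups offend at exactly the same rate, but one is policed more heavily, so its arrest rate, which is all the training data records, comes out roughly twice as high.

```python
# Hedged simulation of the "biased data in, biased output out" point.
# All rates are invented for illustration.
import random

random.seed(0)

OFFENSE_RATE = 0.10                              # identical behavior in both groups
ARREST_GIVEN_OFFENSE = {"A": 0.30, "B": 0.60}    # group B is policed more heavily

def simulate(n=100_000):
    counts = {"A": 0, "B": 0}
    arrests = {"A": 0, "B": 0}
    for _ in range(n):
        group = random.choice(["A", "B"])
        counts[group] += 1
        offended = random.random() < OFFENSE_RATE
        arrested = offended and random.random() < ARREST_GIVEN_OFFENSE[group]
        arrests[group] += arrested
    return {g: arrests[g] / counts[g] for g in counts}

print(simulate())
# Group B's arrest rate comes out roughly twice group A's even though the
# underlying offense rate is identical. A model trained on these arrest
# records would "learn" that group B is higher risk.
```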

Also, some systems use input from interviews conducted with the accused. Though the questions are standardized to increase objectivity, the scoring can still be influenced by the quality of the interview. An unclear answer scored one way or the other can make the difference between a person being detained and going free on bail, Zittrain noted.
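
A toy example — with invented weights and cutoff — shows how little room there can be: when a record already puts someone near the threshold, the single point at stake in an ambiguously scored answer decides the outcome.

```python
# Illustrative only: one ambiguously scored interview item can flip the
# recommendation when the total sits near the cutoff. The point values
# and threshold are invented for this sketch.
THRESHOLD = 5

def recommend(record_points: int, interview_points: int) -> str:
    return "detain" if record_points + interview_points >= THRESHOLD else "release"

record = 4  # points from the person's record, the same in both cases
print(recommend(record, interview_points=0))  # unclear answer scored 0 -> "release"
print(recommend(record, interview_points=1))  # same answer scored 1 -> "detain"
```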

Berkman Klein co-director Margo Seltzer, Herchel Smith Professor of Computer Science at the John A. Paulson School of Engineering and Applied Sciences, said her main concern is transparency. Any decision — a loan rejection, for example — should be explainable in plain language, she said.
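
What such a plain-language explanation might look like is sketched below, assuming — purely for illustration — that the lender’s decision comes from a simple linear scoring model; the feature names, weights, and cutoff are hypothetical.

```python
# A minimal sketch of a plain-language "reason code" explanation, assuming a
# hypothetical linear credit-scoring model. Features, weights, and the cutoff
# are invented for illustration.
WEIGHTS = {
    "debt_to_income_ratio": -40.0,     # a higher ratio lowers the score
    "years_of_credit_history": 3.0,
    "missed_payments_last_year": -25.0,
}
BASE_SCORE = 60.0
CUTOFF = 50.0

def explain(applicant: dict) -> str:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BASE_SCORE + sum(contributions.values())
    if score >= CUTOFF:
        return f"Approved (score {score:.0f})."
    # Report the factors that pulled the score down the most, in plain words.
    worst = sorted(contributions, key=contributions.get)[:2]
    reasons = ", ".join(f.replace("_", " ") for f in worst)
    return f"Declined (score {score:.0f}). Main factors: {reasons}."

print(explain({"debt_to_income_ratio": 0.6,
               "years_of_credit_history": 2,
               "missed_payments_last_year": 1}))
```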

But the process behind such decisions can be extremely complex and defy easy explanation, according to Cynthia Dwork, Gordon McKay Professor of Computer Science. No less an issue is that the private companies behind these systems guard their inner workings as proprietary.

One response, Zittrain said, would be for governments to pass on private contractors who want to keep the inner workings of their products secret. Hiring a developer to write voting machine software, for example, might be too expensive for a single town or county, but regional pacts could spread the cost, particularly where accuracy and openness are the highest priorities.

“I don’t know how you would want this farmed out instead of building it in house,” Zittrain said. “I have yet to hear an argument why we would possibly want this to be at arm’s length, particularly if contractors will be able to claim that it’s proprietary.”


This story was originally published in the Harvard Gazette on October 12, 2017.