Identifying resources to combat homelessness, detecting the risk of child abuse, and determining eligibility for public assistance programs are only some of the ways artificial intelligence and machine learning are being leveraged in public service programs. While these algorithms have the potential to make systems more efficient, Virginia Eubanks found they actually pose a tremendous risk to poor and working-class families across the United States.

Eubanks, associate professor of Political Science at the University at Albany, SUNY, explored these three uses of algorithms through case studies in different areas of the country, finding shocking disparities between the intended outcomes and the actual impacts these technologies have on poor and working-class people. She documented her work in a new book, “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor.”

She recently joined the Berkman Klein Center for a discussion of the book and the impact algorithms can have on different segments of society. This has been a topic of interest in BKC’s work on the ethics and governance of artificial intelligence, which spans issues relating to algorithms and justice, including the impact of AI on human rights, as well as the intersection of technology and the public interest.

Drawing on examples from California, Indiana and Pennsylvania, Eubanks argued that while she sees potential for automated tools and systems for public assistance to have a positive impact, “what we’re actually doing is creating what I call a ‘digital poorhouse,’ an invisible institution that profiles, polices and punishes the poor when they come into contact with public services.”

The poorhouse metaphor Eubanks uses stems from the physical poorhouses that were constructed in the 19th century as a way to alleviate poverty. “I believe this is the moment where we decided as a political community that the frontline of the public service system should be primarily focused on moral diagnosis — on deciding whether or not you are deserving enough to receive aid — rather than building universal floors under everyone,” she said. Eubanks argued that tools are never neutral and that bias is now ingrained in new algorithmic systems, despite the intentions of designers, developers, and the people mediating between the systems and the people “targeted” by them.

Eubanks briefly described each case study, highlighting that, unlike similar studies of algorithms for automating public services, she began her study by talking to the poor and working-class families whom she describes as feeling “targeted” by these new systems, rather than to the administrators, developers, and decision makers involved in deploying the technology. Eubanks studied an attempt to automate the welfare application process in Indiana, which ultimately resulted in mass confusion and rejected applications.

In Los Angeles, she studied an algorithmic process that assigns homeless people a vulnerability score and seeks to match them with resources, a process that requires participants to share a considerable amount of personal and self-incriminating information. In Allegheny County, Pennsylvania, Eubanks studied a tool that draws on a database of records from public services, including public schools and welfare and public health programs, and uses statistical regression to predict the risk of child abuse, even though that database contains disproportionately more information on poor and working-class families.
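The Allegheny example is where the book’s argument about skewed data is easiest to see in mechanical terms. As a purely illustrative sketch, and not the county’s actual tool, the short Python example below uses invented variables and probabilities to show how a regression-based risk score trained on administrative records can rate families who rely on public services as higher risk even when their underlying risk is identical, simply because their lives generate more recorded contacts.

```python
# Purely illustrative sketch -- NOT the Allegheny County screening tool.
# Synthetic data only: it shows how a regression-based risk score can rate
# families who rely on public services as higher risk simply because they
# generate more administrative records, even when underlying risk is equal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical flag: does the family rely on public services (and therefore
# appear in welfare, public health, and school databases at all)?
uses_public_services = rng.integers(0, 2, size=n)

# Recorded "system contacts": families outside public programs leave few traces.
system_contacts = np.where(uses_public_services == 1,
                           rng.poisson(5, size=n),
                           rng.poisson(0.5, size=n))

# Underlying risk is identical for both groups in this toy example ...
true_risk = 0.05
# ... but an incident only becomes a label if someone reports it, and
# reporting is far more likely for families already visible to agencies.
report_prob = np.where(uses_public_services == 1, 0.8, 0.1)
observed_label = rng.binomial(1, true_risk * report_prob)

# Fit a simple regression-style risk model on the recorded contacts.
X = system_contacts.reshape(-1, 1)
model = LogisticRegression().fit(X, observed_label)
scores = model.predict_proba(X)[:, 1]

print("mean risk score, families in public programs: "
      f"{scores[uses_public_services == 1].mean():.3f}")
print("mean risk score, families outside them:       "
      f"{scores[uses_public_services == 0].mean():.3f}")
```

The gap between the two printed averages comes entirely from which families are visible to the system, not from any difference in the synthetic underlying risk.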

In each locale, Eubanks contrasted the perspectives on the algorithms of those who felt “targeted” by the systems, those who developed the systems, and those who were connecting people with the technology. For example, in Allegheny County, poor and working-class families worried that, because they were the ones using public services and therefore adding more data to the system, they were more likely to be flagged in error, as false positives. “The families that I spoke to very often said they felt like the system confused parenting while poor with poor parenting,” Eubanks recalled. On the other hand, Eubanks found that the people vetting calls about potential abuse were worried about possible false negatives. “They were concerned about the system not seeing harm where harm might actually exist,” Eubanks said.

Despite the need for more equitable tools, Eubanks acknowledged that creating and implementing them is difficult. “Building these systems well is incredibly hard and incredibly resource intensive, and building them poorly is only cheaper and faster at first,” Eubanks said. “I think we have a tendency to think about these tools as sort of naturally creating these efficiencies because the speed of the technology is such that it creates the appearance of faster and easier. But in fact, you really have to know a lot about how these systems work in order to build good tools for them and to interrupt the patterns of inequity that we’re already seeing.”

It’s not only the technology that needs to change, Eubanks argued. There also needs to be a broader shift in how the United States thinks about poverty and about poor and working-class families. She cited examples of hardships faced by people across the country, including living unsheltered and being forced to place children in foster care.

“In other places in the world, people see these as human rights violations, and that we see them here as systems engineering problems actually says something very deep and troubling about the state of our national soul. And I think we need to get our souls right around that in order to really move the needle on these problems,” she said.

The book talk is available as a video and podcast on the Berkman Klein website. The Berkman Klein Center Luncheon Series is a weekly forum for conversations about Internet issues and research. It is free and open to the public.