Pragya K, April 6 2022

Who is obligated? An inquiry on algorithmic harms

When we’re responding to a harm restoratively, we begin by asking three questions:

1. Who was harmed, and how?
2. What are their needs?
3. Who is obligated to meet those needs?

The answers to these questions shape the way we facilitate harm circles and community-building activities. Depending on the nature of the harm, we develop ways for the victim to tell their story, for the perpetrator to take accountability, and ultimately, for the harm to be repaired.

But what would happen if we couldn’t answer one of these questions, or if the answer was ambiguous? This is the case with a special type of harm that is becoming increasingly prevalent: algorithmic harms. 

To understand what an algorithmic harm is, let’s look at an example. A 2019 study from Berkeley Law found that an automated mortgage-lending system charged Black and Hispanic borrowers higher interest rates than White borrowers for the same loans.

Here, a marginalized group is being systematically discriminated against by a computer algorithm. Similar biased decisions are being made in other high-stakes contexts, such as healthcare, the justice system, and education.

Now let’s try a restorative approach to this harm. We know who was harmed: the Black and Hispanic borrowers. We know how they were harmed: through unequal interest rates. And we know that by speaking with the victims, we can come to understand their needs.

But the third and final question is a mystery. Who is obligated? Who is responsible for this harm? 

The algorithm itself? This might be the knee-jerk response. After all, the algorithm evaluated these people’s situations and selected the interest rates without any human input whatsoever. But we can’t really claim that the algorithm is responsible for this decision. It does not understand the context of the decision it is making. It does not have the capacity to feel empathy for the people whose livelihoods its decisions affect. It is simply following a set of instructions written by a person.

The engineers who made it? This seems like a logical next step. But it has one glaring flaw: black-box models. In this case, the software engineers provided the lending algorithm with historical data about borrowers and the interest rates they were charged. The algorithm then assembled a predictive model from this data. That model consists of complicated functions of variables that generally can’t be interpreted by humans, not even the engineers who created the algorithm. So the engineers do not know how the model maps inputs (i.e., the characteristics of a borrower) to outputs (i.e., the interest rate). This opacity is why such predictive models are called black-box models. And it raises the question: if the engineers don’t know exactly how these decisions are being made, how can they be held responsible when the decisions turn out to be biased?
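To make that opacity concrete, here is a minimal sketch in Python. The data and borrower features (credit score, income, loan amount) are entirely synthetic and hypothetical; this is not the system from the study. The point is that we can fit an ensemble model on historical lending records and query its predictions, yet the decision function it learns is hundreds of interacting trees, not a rule a person could articulate.

```python
# A minimal sketch of the black-box problem (synthetic data, hypothetical
# features; not the system from the study). We imitate "historical lending
# records" and fit a gradient-boosted ensemble on them.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical borrower features: credit score, income, loan amount.
X = np.column_stack([
    rng.normal(680, 50, n),          # credit score
    rng.normal(60_000, 15_000, n),   # annual income
    rng.normal(250_000, 80_000, n),  # loan amount
])
# Synthetic "historical" interest rates for the model to imitate.
y = 8.0 - 0.005 * (X[:, 0] - 680) + rng.normal(0, 0.3, n)

model = GradientBoostingRegressor(n_estimators=300).fit(X, y)

# We can always ask "what rate does this borrower get?"...
print(model.predict([[640, 45_000, 200_000]]))

# ...but "why that rate?" is answered by 300 stacked regression trees.
# Counting their decision nodes shows how far this is from a readable rule.
print(sum(t[0].tree_.node_count for t in model.estimators_))
```

Interpretability tools can summarize such a model’s behavior after the fact, but they don’t turn it back into a rule its engineers consciously chose.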

The people from whom the data is sourced? This is an interesting angle. One of the main reasons the model might be biased is that the data set it was trained on is skewed in favor of non-Black and non-Hispanic borrowers. As a historical record, this data set encodes years and years of biased lending decisions made by human lenders. The model is simply a reflection of that bias. So it’s not a specific decision-maker but rather the systemic discrimination against Black and Hispanic borrowers that’s to blame for this harm. This is a helpful way of understanding how the harm fits into the larger picture of our socioeconomic structure. But to the people who were harmed, it may not be a fully satisfying answer. It’s in our nature to want to ascribe blame for a transgression to a single agent rather than to some larger force. Moreover, it seems impossible, or at least conceptually difficult, to have a system as a whole stand in as the “perpetrator” in a restorative circle.
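As a rough illustration of how that systemic bias survives training, here is another synthetic Python sketch. The “neighborhood” proxy variable and the half-point markup are invented for the example; the point is that even when the protected attribute is withheld from the model, a correlated proxy lets the learned model reproduce the historical disparity.

```python
# A synthetic sketch of bias propagation: the protected attribute is NOT
# given to the model, but a correlated proxy carries the old bias through.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, n)                  # 1 = marginalized group
neighborhood = group + rng.normal(0, 0.1, n)   # proxy correlated with group
credit = rng.normal(680, 50, n)                # independent of group

# Historical rates include a discriminatory +0.5-point markup.
historical_rate = 6.0 - 0.005 * (credit - 680) + 0.5 * group

# Train WITHOUT the protected attribute: only the proxy and credit score.
X = np.column_stack([neighborhood, credit])
pred = LinearRegression().fit(X, historical_rate).predict(X)

# The learned model still charges the marginalized group ~0.5 points more.
print(pred[group == 1].mean() - pred[group == 0].mean())
```

No one wrote a discriminatory rule here; the disparity was inherited from the record of past decisions, which is exactly why pointing to a single responsible agent is so hard.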

Currently, much of the discourse about algorithmic harms focuses on how policy and development practices can be designed to prevent them from occurring, which is, of course, vital work. But there is an additional conversation to be had about how to address these harms when they do occur, and how to fully meet the needs of their victims, especially given that there is no defined person or set of people responsible for doing so.

Written by

Pragya K
