
Algorithmic decision-making systems applied in social contexts drape value-laden solutions in an illusory veil of objectivity. Machine learning plays an increasingly prominent role in mediating institutional decisions in everything from corporate hiring practices to criminal sentencing. This ongoing AI spring has invigorated discussions of the ethical dimensions of these techno-social arrangements. In particular, there is a growing awareness that algorithmic decision-making can lead to discriminatory outcomes.
Many researchers have convincingly argued that machine learning systems learn to replicate and amplify pre-existing biases of moral import found in training data. These biases are often rooted in what computer scientist and Black in AI co-founder Timnit Gebru has called “runaway feedback loops”. Algorithms learn to predict future outcomes based on exposure to many examples. When the data used to train these systems reflect existing societal biases and inequalities, machine learning tends to exacerbate these problematic outcomes. The resulting predictions lead to decisions which reproduce societal biases and thus generate even more slanted data on which future models might be trained.
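To make the mechanism concrete, here is a deliberately toy sketch in Python. The districts, numbers and decision rule are entirely hypothetical and not drawn from any real system; the point is only that a rule trained on records which over-count one group directs more attention to that group, generating still more records that the next round of training treats as evidence.

```python
import numpy as np

# Toy sketch of a "runaway feedback loop" (all numbers hypothetical).
rng = np.random.default_rng(0)

true_rate = np.array([0.3, 0.3])   # districts A and B: identical true incident rates
recorded  = np.array([60, 40])     # but the historical record over-counts district A

for year in range(5):
    # Decision rule: direct most attention to the district with more recorded incidents.
    attention = np.array([0.9, 0.1]) if recorded[0] > recorded[1] else np.array([0.1, 0.9])
    # Incidents are only recorded where attention is directed, so the
    # skewed allocation manufactures data that "confirms" the original skew.
    new_records = rng.poisson(100 * attention * true_rate)
    recorded = recorded + new_records
    print(year, recorded)
```

Nothing about the two districts differs; the widening gap lives entirely in the loop between past records and future decisions.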
This is an invaluable critique that deserves even greater attention than it currently enjoys. And yet, these lines of critique also permit a strategic retreat for those who, nevertheless, maintain that algorithms themselves are value-neutral. Proponents of the value-neutrality of algorithms argue that, while the existence of algorithmic bias is undeniable, such bias is merely the product of bad data curation practices.
On such a view, eliminating biased data would obliterate any values embedded in algorithmic decision-making. This position can be neatly summarised by the slogan “Algorithms aren’t biased, data is biased”. This reflects an all too familiar evaluative attitude towards technology in general. Such an attitude maintains that technological artifacts are morally and politically neutral. Technologies are mere instruments. Only their uses have moral or other value, not the technology itself.
Training an algorithm
However, adopting this attitude towards algorithms is misguided. I want to suggest that algorithmic decision-making systems necessarily incorporate the values and preferences of particular stakeholders. To see why, we must first note that these systems are designed with an eye towards making useful predictions. Machine learning enables us to tackle problems that are too complex for humans to predict future outcomes easily and reliably, and that are not analytically tractable for a fixed programme.
But this also means that many of the problems we wish to solve with machine learning must be operationalised in particular ways to make them amenable to quantitative methods in the first place. Training an algorithm requires precisely specifying a target parameter to be optimised as a function of a range of selected input features. Moreover, training involves optimisation, which requires either minimising an error function or maximising an objective function by iteratively adjusting a model’s parameters. The objective function represents the quality of the solution found by the algorithm as a single real number. Training an algorithm thus aggregates countless indicators of predictive success into a single, automatically generated, weighted index.
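A minimal sketch of what this looks like in code may help, with the caveat that the target, features and numbers below are invented for illustration: a made-up score is predicted from three arbitrarily chosen features, and the entire question of how well the model is doing is collapsed into a single number, the mean squared error, which gradient descent then drives down.

```python
import numpy as np

# Training as optimisation (hypothetical data and target, for illustration only).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                                     # 3 features someone chose to measure
y = X @ np.array([0.5, -0.2, 0.8]) + 0.1 * rng.normal(size=200)   # stand-in "score" to be predicted

w = np.zeros(3)   # model parameters
lr = 0.1          # learning rate
for step in range(500):
    error = X @ w - y
    loss = np.mean(error ** 2)          # one real number summarises "how good" the model is
    grad = 2 * X.T @ error / len(y)     # gradient of the loss with respect to the weights
    w -= lr * grad                      # iteratively adjust the parameters

print(loss, w)   # training stops when this single index looks small enough
```

Whatever the single loss number does not capture is, by construction, invisible to the training process.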
However, deciding to operationalise a particular goal in this way is itself a value-laden choice. This is because many of the qualities we want to predict are qualitative concepts with multifaceted meanings. Concepts like “health” or “job-applicant quality” lack sharp boundaries and admit plural, context-dependent meanings. The initial model design process depends on a host of decisions about how to conceptualise a given target. This involves steps like defining the kind of task and selecting which features are relevant for prediction and which can be excluded as irrelevant or redundant with minimal loss of information.
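As a rough illustration, consider two ways one might operationalise “job-applicant quality” in code. Both proxies, and all of the column names below, are hypothetical; the point is only that neither operationalisation is forced on us by the data.

```python
import pandas as pd

# Two of many possible ways to operationalise "applicant quality".
# All columns and values are hypothetical, for illustration only.
applicants = pd.DataFrame({
    "years_experience": [2, 7, 4],
    "referral":         [0, 1, 0],
    "callback_past":    [0, 1, 1],    # did a past employer call them back?
    "tenure_months":    [10, 36, 24]  # how long they stayed in their last role
})

# Operationalisation 1: "quality" means likelihood of a callback.
target_v1   = applicants["callback_past"]
features_v1 = applicants[["years_experience", "referral"]]

# Operationalisation 2: "quality" means expected tenure.
target_v2   = applicants["tenure_months"]
features_v2 = applicants[["years_experience", "callback_past"]]

# Each choice excludes information the other treats as essential.
print(features_v1.columns.tolist(), "->", target_v1.name)
print(features_v2.columns.tolist(), "->", target_v2.name)
```

Whichever proxy and feature set the designers settle on, that choice fixes what the algorithm can and cannot “see” about an applicant.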
Representational decisions
These kinds of design choices amount to what we might call “representational decisions”. These are decisions about what entities to represent and how to represent them in a given model or any other representational device. Representational decisions introduce distinct hazards into the design process. Given that algorithms are designed with a particular purpose in mind, any given choice might turn out to be inadequate for that purpose, whether for the designers, users or decision subjects. Such hazards risk harms: whatever undesired consequences may follow from an inadequate representational decision. Deciding what counts as acceptable risk, and therefore making representational decisions in the design, training and testing of algorithms, depends on ethical values concerning the potential for inadequacy for purpose.
Collapsing human concepts into a quantifiable scale of predictive success flattens out their qualitative dimensions. This process is often underdetermined and arbitrary, but convenient for enterprises that rely on precise and unambiguous predictions. Hence, the very choice to use an algorithm in the first place reflects the values and priorities of particular stakeholders.
The upshot of this is not that we should abandon machine learning and any legitimate progress we might hope to generate using algorithms. Rather, I hope to suggest that we should modify our evaluative practices to take stock of the ways that algorithms themselves depend on the values and preferences of particular stakeholders. If our moral and political language for evaluating algorithms includes only categories having to do with data, pre-existing bias and specific use cases, then we will be blind to many important aspects of our practical situation.
We must attend to the ways that the very designs and techno-social arrangements of algorithms are shaped by, and encode, human values. In some cases, we may want to ask ourselves whether an algorithm is an appropriate tool that is fit for purpose in the context of a given problem and whether such an approach is consistent with the future we envision for ourselves.
*Phillip Hintikka Kieval [2021] is doing a PhD in History and Philosophy of Science. He will be speaking at this year’s Gates Cambridge Day of Research on Saturday, which features presentations and micro talks from 29 Scholars covering categories ranging from democracy, elections and politics to cancer biology. We will be running blogs on several of the topics over the next weeks.
**This article also references Mitchell, Shira, Eric Potash, Solon Barocas, Alexander D’Amour, and Kristian Lum. 2021. “Algorithmic Fairness: Choices, Assumptions, and Definitions” and Winner, Langdon. 1986. The Whale and the Reactor: A Search for Limits in an Age of High Technology, University of Chicago Press.
***Picture credit: Defense Advanced Research Projects Agency (DARPA) and Wikimedia Commons.