introduction
- thus far, fair-ML research has focused on finding metrics and scores that can reflect the "fairness" of a machine learning algorithm
- in doing so, it abstracts away the context in which the ML algorithm will be deployed
- fairness and justice are properties of social and legal systems, not properties of the tools within them
- therefore, treating fairness and justice as terms that have meaningful application to technology separate from a social context is an abstraction error
the abstraction traps
all of these traps are failure modes that arise from failing to account for or understand the interactions between technical systems and social worlds
the framing trap
failure to model the entire system over which a social criterion, such as fairness, will be enforced
- algorithmic frame
- choices about representations and labels
- the efficacy of an algorithm is evaluated as a property of its outputs relative to its inputs
- data frame
- evaluates not only the algorithm but also the choice of inputs and outputs
- from within the data frame, fair-ML researchers define formal measures that seek to approximate a socially desirable goal such as fairness (see the sketch after this list)
- this is still an attempt to eliminate the larger context and abstract away the problem of bias
- sociotechnical frame
- a machine learning model is part of a sociotechnical system → the other components of the system need to be modeled as well
- example: a risk assessment algorithm, whose scores are ultimately interpreted and acted on by judges within a courtroom
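
To make the algorithmic and data frames concrete, here is a minimal Python sketch (with hypothetical data) of the kind of evaluation they permit: accuracy and a demographic-parity gap are both computed purely from the model's inputs and outputs. Demographic parity is used only as a stand-in for the formal fairness measures mentioned above; the notes do not fix a particular metric.

```python
import numpy as np

# hypothetical labels, predictions, and group membership, for illustration only
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# algorithmic frame: efficacy as a property of outputs relative to inputs
accuracy = (y_true == y_pred).mean()

# data frame: a formal fairness measure (demographic-parity gap) computed from
# the same inputs and outputs -- the social context in which the predictions
# will be used never enters the calculation
rate_g0 = y_pred[group == 0].mean()
rate_g1 = y_pred[group == 1].mean()
parity_gap = abs(rate_g0 - rate_g1)

print(f"accuracy = {accuracy:.2f}, demographic parity gap = {parity_gap:.2f}")
```

Note that neither number says anything about how a judge or institution will actually interpret and act on the scores, which is exactly what the sociotechnical frame asks us to model.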
an STS lens on the framing trap
adopt a “heterogeneous engineering” approach
- consider human activities and machine activities at the same time
- draw the boundaries of abstraction to include people and social systems as well
- this task may be enormous
- drawing the black-box boundary around at least one technical element and one social element gives better results while keeping the problem tractable