Study  |  05/01/2022

Ruled by Robots – How Do Humans Perceive Technology-Assisted Decision-Making?

Algorithms and Artificial Intelligence (AI) have become an integral part of decision-making. Would people prefer to have moral decisions that affect them made by a human or by an algorithm? A new study investigates this and other questions in a laboratory experiment.

As technology-assisted decision-making becomes more prevalent, it is important to understand how the algorithmic nature of the decision-maker influences how affected people perceive these decisions. The applications of algorithmic aids range from prediction decisions of various kinds, for example whom to hire and what salary to offer, to moral decisions with no objectively correct solution, such as how to distribute a bonus fairly within a team.


The authors Marina Chugunova and Wolfgang J. Luhan (University of Portsmouth) use a laboratory experiment to study the preference for human or algorithmic decision-makers in redistributive decisions. Redistributive decisions can be seen as a type of moral decision, where what counts as correct or fair depends on the observer’s personal ideals and beliefs. In particular, the authors examine whether an algorithmic decision-maker is preferred because of its unbiasedness. Identifying which decision-maker is preferred and whose decisions are perceived as fairer can improve the acceptance of decisions or policies, and with it, compliance.


The Experiment


The main aim of the experiment was to create a situation in which participants’ preference for either a human or an algorithmic decision-maker to redistribute income was observable. First, participants individually earned their initial income by completing three tasks. The three tasks mimicked three potential determinants of income that are central to major fairness theories: luck, effort, and talent. Then, the players were matched into pairs and had to choose a decision-maker: either an algorithm or a third-party human. The decision-maker decided how to redistribute the pair’s total earnings between its two members. To test the role of bias, a laboratory-induced source of potential discrimination for the human decision-maker was introduced. Finally, the participants learned the decision and reported their satisfaction with it and how fair they considered it to be.
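
To make the structure of the experiment concrete, the following minimal Python sketch simulates its flow. All point values, the simple redistribution rules, and the function names are hypothetical assumptions chosen for illustration; this is not the authors’ implementation.

    import random

    def earn_income(rng):
        """Initial income from three tasks mimicking luck, effort, and talent
        (hypothetical point values)."""
        luck = rng.choice([0, 10])    # coin-flip style luck task
        effort = rng.randint(0, 10)   # e.g., number of tedious tasks completed
        talent = rng.randint(0, 10)   # e.g., quiz score
        return luck + effort + talent

    def algorithmic_split(a, b):
        """A stylized 'unbiased' algorithm: split the pair's total equally
        (one possible fairness principle, assumed here)."""
        total = a + b
        return total / 2, total / 2

    def human_split(a, b, rng):
        """A stylized human decision: a noisy deviation from the equal split
        (purely illustrative)."""
        total = a + b
        share = max(0.0, min(1.0, rng.gauss(0.5, 0.1)))
        return total * share, total * (1 - share)

    rng = random.Random(42)
    a, b = earn_income(rng), earn_income(rng)
    prefers_algorithm = True  # in the study, over 60% chose the algorithm
    if prefers_algorithm:
        new_a, new_b = algorithmic_split(a, b)
    else:
        new_a, new_b = human_split(a, b, rng)
    print(f"earned: {a} vs {b} -> after redistribution: {new_a:.1f} vs {new_b:.1f}")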


The Findings


Contrary to previous findings, the authors find that the majority of participants (over 60%) prefer the algorithm over a human as decision-maker. Yet this preference is not driven by concerns about biased human decisions. Despite the preference for algorithmic decision-makers, the decisions made by humans are rated more favorably. Subjective ratings of the decisions are driven mainly by the participants’ own material interests and fairness ideals. As far as fairness ideals are concerned, the players in the experiment show remarkable flexibility: they tolerate any explainable deviation between the actual decision and their own ideals. They are satisfied with, and consider fair, any redistribution decision that follows fairness principles, even if those principles are not their own. Yet they react strongly and negatively to redistribution decisions that do not fit any fairness ideal.
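
To illustrate what “fitting a fairness ideal” could mean, here is a hedged Python sketch that checks whether a redistribution decision is consistent with either of two canonical fairness ideals from the fairness literature. The specific set of ideals and the tolerance parameter are assumptions for this sketch, not the exact criteria used in the study.

    def matches_ideal(earned_a, earned_b, alloc_a, tol=1.0):
        """Return the first fairness ideal (if any) consistent with giving
        alloc_a to player A. Illustrative set of ideals, assumed here:
          - egalitarian: split the pair's total equally
          - libertarian: everyone keeps what they earned
        """
        total = earned_a + earned_b
        ideals = {
            "egalitarian": total / 2,  # equal split
            "libertarian": earned_a,   # keep own earnings
        }
        for name, fair_a in ideals.items():
            if abs(alloc_a - fair_a) <= tol:
                return name
        return None  # fits no ideal: met with strong negative reactions

    # Example: A earned 20, B earned 10.
    print(matches_ideal(20, 10, 15))  # egalitarian
    print(matches_ideal(20, 10, 20))  # libertarian
    print(matches_ideal(20, 10, 28))  # None: fits no fairness ideal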


The Conclusion


The results of the study suggest that, even in the realm of moral decisions, algorithmic decision-makers might be preferred over human decision-makers. However, the actual performance of the algorithm plays an important role in how its decisions are rated. To live up to expectations and increase the acceptance of AI decisions, the algorithm has to apply fairness principles consistently and coherently.


Link to the publication of the study:


Marina Chugunova, Wolfgang J. Luhan
Ruled by Robots: Preference for Algorithmic Decision Makers and Perceptions of Their Choices
Max Planck Institute for Innovation & Competition Research Paper No. 22-04