Today, humans interact with automation frequently and in a variety of settings, ranging from private to professional. Their behavior in these interactions has attracted considerable research interest across several fields, often with little exchange among them and seemingly inconsistent findings. In this article, we review 138 experimental studies on how people interact with automated agents that can assume different roles. We synthesize the evidence, suggest ways to reconcile inconsistencies between studies and disciplines, and discuss organizational and societal implications. The reviewed studies show that people react to automated agents differently than they do to humans: in general, they behave more rationally and seem less prone to emotional and social responses, though this may be mediated by the agents' design. Task context, performance expectations, and the distribution of decision authority between humans and automated agents all systematically affect the willingness to accept automated agents in decision-making. That is, humans seem willing to (over-)rely on algorithmic support, yet averse to fully ceding their decision authority. Finally, we discuss what these behavioral regularities imply for weighing the benefits and risks of automation in organizations and society.
Also published as: Max Planck Institute for Innovation & Competition Research Paper No. 20-15
Also published as: ETH Zurich Center for Law and Economics Working Paper Series No. 12/2020