Max-Planck-Institut für Innovation und Wettbewerb, München, Room 313
An increasing number of research projects successfully involve the general public (the crowd) in tasks such as collecting observational data or classifying images to answer scientists’ research questions. Although such crowd science projects have raised great hopes among scientists and policy makers, it is not clear whether the crowd can also contribute meaningfully to other stages of the research process, in particular the identification of the research questions that should be studied. We first develop a conceptual framework that ties different aspects of “good” research questions to different types of knowledge. We then discuss potential strengths and weaknesses of the crowd relative to professional scientists in developing research questions, while also considering important heterogeneity among crowd members. Data from a series of online and field experiments have been gathered and are currently being analyzed to test individual- and crowd-level hypotheses about the underlying mechanisms that influence a crowd’s performance in generating research questions. Our results aim to advance the literatures on crowd and citizen science as well as the broader literature on crowdsourcing and the organization of open and distributed knowledge production. Our findings have important implications for scientists and policy makers.
Contact: Michael E. Rose, Ph.D.