Idea Evaluation in Open Innovation Processes
Summary of Project Results
This research program centered on idea evaluation, particularly the quality assessments that evaluators make of innovative ideas, such as ideas for product or process innovations. It investigated how evaluation biases (i.e., deviations between ideas' 'true' quality and their evaluation) emerge and examined the consequences of idea evaluation for future ideation. We used large-scale field data from four different industries as well as experimental data. The program comprised four projects.

In Project 1, we examined the relationship between the hierarchical proximity of idea creators and idea evaluators, and idea evaluation. To that end, we used large-scale field data from a large German manufacturing company's internal crowdfunding system, which was designed to "democratize" idea evaluation, and complemented it with online experiments. We found that idea evaluations are distorted by the degree of hierarchical similarity between ideator and evaluator: evaluators prefer ideas from ideators who are hierarchically similar to them, as long as they are not rivals. We also found that this bias is stronger for more novel ideas. Hierarchy thus casts a shadow even on supposedly democratized evaluation systems. The results of this project have been published.

Project 2 was originally intended to explore the effect of social ties and social comparison on idea evaluation. However, because others made substantial progress in this area in the meantime, the project shifted to the role of panel discussion in idea evaluation. Evaluation panels are omnipresent in the evaluation of ideas; yet whether groups of experts produce more accurate evaluations by virtue of discussion and the pooling of information is unclear. Collaborating with the European Southern Observatory, Europe's primary organization for astronomical research, we examined how panel discussion during its peer review process affects reviewers' ability to assess proposal quality accurately.
We found that the evaluations reviewers cast before discussion are better predictors of proposals' future impact than those they cast afterwards, indicating that discussion distorts evaluations. We suggest that discussion shifts reviewers' attention away from proposals' merit toward their implementation costs.

As planned, Project 3 focused on incentives to contribute to distributed evaluation tasks. Collaborating with a large firm in the aviation industry (rather than running lab experiments, as initially planned), we examined managers' incentive-based responses to evaluation errors. Specifically, we demonstrate that when managers overestimate the value of ideas, they reject more ideas in the future to avoid further observable errors and to protect their reputation. Managers do not respond to underestimation errors. In sum, this study opens up an incentive-based perspective on idea evaluation errors, one that holds particular promise in open innovation settings.

Project 4 aimed at understanding the link between idea evaluation and subsequent ideation. Here, we shifted the focus to decision-makers' own idea generation. Specifically, we showed that idea evaluation promotes evaluators' own idea generation, challenging the conventional belief that idea evaluation tasks crowd out decision-makers' idea generation. The study is currently under review.

In summary, the research program offers nuanced insights into the origins and consequences of evaluation biases and thus contributes to our understanding of idea creation and evaluation processes that involve many different players.
Project-Related Publications (Selection)
Schweisfurth, Tim G.; Schöttl, Claus P.; Raasch, Christina & Zaggl, Michael A.: Distributed decision-making in the shadow of hierarchy: How hierarchical similarity biases idea evaluation. Strategic Management Journal, 44(9), 2255-2282.
