Crowdsourced Idea Convergence

Innovation Funnel

Determining the best ideas proceeds through several phases. By clicking on the different parts of the displayed funnel, you can find the most important information about the four phases of idea selection: idea generation, filtering, shortlisting, and winner determination. On devices with small screens, you can swipe the images horizontally to see the phases.



Shortlisting

Selection vs. Evaluation

After the filtering phase, in which ideas are eliminated through hard cut-off criteria, the goal of the shortlisting phase is an in-depth evaluation of the remaining ideas. Shortlisting crowdsourced ideas is challenging because evaluators typically face a large number of alternatives, which gives rise to several selection challenges. Several techniques can be adopted to facilitate selection, e.g., categorizing alternatives [6,7,8] or partitioning alternatives into smaller sets [5]. For example, recent research has found that (1) evaluation accuracy is higher when evaluators choose from a smaller choice set [11], (2) evaluators experience higher cognitive effort and higher reduction rates when they are prompted towards including ideas [2], and (3) evaluators perceive lower cognitive effort and show higher accuracy when choosing ideas from subsets of similar ideas [4].

Here, a particular focus is given to the use of rating scales for evaluation. Rating scales can be categorized into holistic and analytic scales. Holistic rating scales describe scoring methods in which a single scoring dimension is collected. In contrast, analytic rating scales relate to scoring methods in which multiple dimensions need to be evaluated [1]. For example, the binary assessment of an idea as a good idea vs. a bad idea is holistic, whereas the assessment along quality dimensions such as novelty and feasibility is analytic.

Holistic scales

Horton et al. (2016) suggest treating sufficiency criteria as pass/fail decisions rather than awarding them cardinal scores [10]. They argue that most early-stage innovation criteria can be answered with a simple yes/no. When multiple evaluators assess ideas, the authors suggest aggregating individual evaluations with the median rather than the average. Another advantage of binary assessments is that selection accuracy can easily be compared between different evaluators by using a confusion or error matrix (Figure 1). A confusion matrix compares the idea quality assessment of the crowd with the assessment of a gold standard. In this context, a gold standard refers to the assessments of evaluators who have domain knowledge of the topic at hand. If the crowd rating converges with the experts' rating, the result is a true positive or true negative. If the crowd rating diverges from the experts' rating, the result is a false positive or false negative.
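
A minimal sketch of this aggregation step, assuming binary pass/fail ratings coded as 1 and 0 and invented example data, could look as follows; with binary ratings, the median effectively acts as a majority vote:

```python
from statistics import median

# Hypothetical pass/fail ratings (1 = pass, 0 = fail) from five evaluators per idea.
ratings = {
    "idea_a": [1, 1, 0, 1, 1],
    "idea_b": [0, 1, 0, 0, 1],
    "idea_c": [1, 0, 1, 0, 1],
}

# Aggregate each idea's ratings with the median rather than the average;
# with binary votes this corresponds to a majority decision.
aggregated = {idea: median(votes) for idea, votes in ratings.items()}

# Keep only ideas whose aggregated rating is "pass".
shortlist = [idea for idea, score in aggregated.items() if score >= 1]

print(aggregated)  # {'idea_a': 1, 'idea_b': 0, 'idea_c': 1}
print(shortlist)   # ['idea_a', 'idea_c']
```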

In practical terms, if the crowd rates an idea as of high quality while the experts rate the same idea as of low quality, the assessment is a false negative. Based on the counts in the confusion matrix, several measures can be calculated. For idea selection, the following measures appear meaningful: elimination accuracy, false negative rate, and false positive rate. Elimination accuracy indicates how accurately the crowd predicted an idea to be of high or low quality. It measures the proportion of correct predictions (true positives and true negatives) divided by all predictions. The false negative rate (FNR) indicates the share of ideas that were falsely rated as high quality when in fact they were of low quality. If the crowd wrongly rates an idea as of high quality even though it should be rated as of low quality, more resources need to be deployed in the next phase. Contest organizers want to avoid allocating additional financial and human resources to low-quality ideas in subsequent evaluation activities. Hence, the false negative rate should be small to avoid retaining low-quality ideas. The false positive rate (FPR) refers to the share of high-quality ideas that the crowd wrongly rates as low quality and thereby eliminates. In summary, the goal is a high elimination accuracy (i.e., a high share of true positives and true negatives) and low false negative and false positive rates.


                                   Gold standard
                                   Low quality              High quality
Prediction of     Low quality      True positives (TP)      False positives (FP)
the crowd         High quality     False negatives (FN)     True negatives (TN)

Figure 1: Confusion Matrix
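
As a minimal sketch, the measures described above can be computed from the four cell counts of the confusion matrix; the counts below are invented, and the labels follow the convention of Figure 1 (positive = the crowd predicts low quality, i.e., the idea is eliminated):

```python
# Invented counts, following the convention of Figure 1.
tp = 40  # crowd: low quality,  experts: low quality  (correct elimination)
fp = 10  # crowd: low quality,  experts: high quality (good idea wrongly eliminated)
fn = 15  # crowd: high quality, experts: low quality  (bad idea wrongly retained)
tn = 35  # crowd: high quality, experts: high quality (good idea correctly retained)

elimination_accuracy = (tp + tn) / (tp + tn + fp + fn)  # correct predictions / all predictions
false_negative_rate = fn / (fn + tp)  # share of low-quality ideas wrongly retained
false_positive_rate = fp / (fp + tn)  # share of high-quality ideas wrongly eliminated

print(f"Elimination accuracy: {elimination_accuracy:.2f}")  # 0.75
print(f"False negative rate:  {false_negative_rate:.2f}")   # 0.27
print(f"False positive rate:  {false_positive_rate:.2f}")   # 0.22
```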

Analytic scales


Idea selection can also be facilitated by criteria that allow evaluators to assess several dimensions of an idea. Widely adopted selection criteria are feasibility, novelty, elaboration, and relevance (based on [9]).
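
As an illustration of an analytic scale, the following sketch collects hypothetical 1–5 ratings on the four criteria named above, aggregates each dimension across evaluators with the median, and averages the dimensions into an overall score; the ratings and the equal-weight aggregation are assumptions for illustration, not part of the cited method:

```python
from statistics import median

# Hypothetical analytic ratings for one idea on a 1-5 scale from three evaluators.
# The dimensions follow the criteria named above (based on [9]).
ratings = {
    "novelty":     [4, 5, 3],
    "feasibility": [2, 3, 3],
    "elaboration": [4, 4, 5],
    "relevance":   [5, 4, 4],
}

# Aggregate each dimension across evaluators with the median,
# then combine the dimensions with equal weights into an overall score.
dimension_scores = {dim: median(values) for dim, values in ratings.items()}
overall_score = sum(dimension_scores.values()) / len(dimension_scores)

print(dimension_scores)        # {'novelty': 4, 'feasibility': 3, 'elaboration': 4, 'relevance': 4}
print(f"{overall_score:.2f}")  # 3.75
```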


References

  1. C. Harsch and G. Martin, “Comparing holistic and analytic scoring methods: issues of validity and reliability,” Assess. Educ. Princ. Policy Pract., vol. 20, no. 3, pp. 281–307, 2013.
  2. I. Boskovic-Pavkovic, I. Seeber, and R. Maier, “Reduce to the Max - How Convergence Nudges Affect Consideration Set Size and Cognitive Effort,” in Proceedings of the 52nd Hawaii International Conference on System Sciences. Forthcoming paper.
  3. C. Harsch and G. Martin, “Comparing holistic and analytic scoring methods: issues of validity and reliability,” Assess. Educ. Princ. Policy Pract., vol. 20, no. 3, pp. 281–307, 2013.
  4. V. Banken, I. Seeber, and R. Maier, “Comparing Pineapples with Lilikois: An Experimental Analysis of the Effects of Idea Similarity on Evaluation Performance in Innovation Contests,” in Proceedings of the 52nd Hawaii International Conference on System Sciences, pp. 1–10. Forthcoming paper.
  5. A. Chernev, U. Böckenholt, and J. Goodman, “Choice overload: A conceptual review and meta-analysis,” J. Consum. Psychol., vol. 25, no. 2, pp. 333–358, 2015.
  6. T. P. Walter and A. Back, “A text mining approach to evaluate submissions to crowdsourcing contests,” Proc. Annu. Hawaii Int. Conf. Syst. Sci., pp. 3109–3118, 2013.
  7. L. J. Kornish and K. T. Ulrich, “Opportunity Spaces in Innovation: Empirical Analysis of Large Samples of Ideas,” Manage. Sci., vol. 57, no. 1, pp. 107–128, 2011.
  8. O. Toubia and O. Netzer, “Idea Generation, Creativity, and Prototypicality,” Mark. Sci., vol. 25, no. 5, pp. 411–425, 2016.
  9. D. L. Dean, J. Hender, T. Rodgers, and E. Santanen, “Identifying good ideas: constructs and scales for idea evaluation,” 2006.
  10. G. Horton, J. Goers, and S. W. Knoll, “How Not to Select Ideas for Innovations: A Critique of the Scoring Method,” in Proceedings of the 49th Hawaii International Conference on System Sciences (HICSS), 2016, pp. 237–246.
  11. R. Santiago Walser, I. Seeber, and R. Maier, “What Makes Evaluators Effective? Idea Presentation for Satisficers and Maximizers in Selection Processes of Open Innovation Contests,” in Book of Abstracts of the 16th International Open and User Innovation Conference, New York, 2018, pp. 36–37.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Austria License.