Crowdsourced Idea Convergence

Circumplex

Ten different challenges can be explored in detail by clicking the respective challenge in the “challenge circumplex”. By clicking “See real-life example”, you can read and download a description of a real-life situation in which this problem has actually occurred.


Similarity of Ideas

Lack of shared understanding of selection criteria

Lack of expertise of evaluators

Divergence of opinions among evaluators

Too many ideas to be processed

Fear of missing out on good ideas

Lack of objectivity

Inadequate amount of idea content

Missing crowd opinion

Misfit of rating scales

I think we should find a way to deal with similar ideas. In the previous challenge, we tried to match individuals with similar ideas and bring them together so that they can integrate and improve their idea. However, I don’t think that this is possible when we are talking about participants from outside our company. Nobody would accept that.

What if many similar and redundant ideas exist after idea generation? In such a case, evaluators face the following problems: ideators might have copied an idea from a fellow participant, so that the ideas are essentially the same. Moreover, one aspect of an idea might be good, while another aspect is stronger in a very similar idea, yet the quality of each idea on its own is low.
In addition, evaluators might disagree about the degree of idea similarity. Such challenges, among others, can increase the time and effort required to select the most suitable ideas. Evaluators therefore need to find innovative ways to deal with these problems.

I think the evaluation criteria are very “softly” described.

What if the evaluators have not fully understood the definition or the essence of the selection criteria?
When the selection criteria are defined too broadly or too strictly, the evaluators might struggle with how to proceed with the evaluation of each idea. Selection criteria like customer value or novelty can encompass various attributes and do not fully support an optimal evaluation.
Furthermore, the selection criteria might be perceived differently depending on the evaluators’ experience, background, expectations, etc. To address these issues, the evaluators should develop adequate strategies to establish a shared understanding prior to idea evaluation.

hum...what is this idea about?

Well, I think this might be good. But I'm not sure, or is it borderline?

What if the evaluators do not have the required expertise to evaluate all ideas?
Evaluating ideas requires a certain level of expertise, especially when the ideas call for domain-specific knowledge. In many cases, though, it is impossible for the evaluation team to include that many experts. In these cases, the evaluators might dismiss ideas they do not fully comprehend or request additional time to complete the evaluation.
The evaluation team should take such challenges into consideration and establish a network of experts that could support the overall process and ensure that the quality standards remain high.
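
Such an expert network could be operationalized with a simple routing step. The sketch below is hypothetical (the experts, domain tags, and routing rule are invented for illustration): each idea carries domain tags, and ideas are dispatched to matching experts instead of being judged by non-experts.

```python
# Hypothetical expert network, keyed by domain tag.
experts = {"AI": "Dr. Lee", "logistics": "Ms. Ortiz", "retail": "Mr. Kahn"}

def route(idea_tags, experts):
    """Return the experts responsible for an idea, or None if no one matches."""
    matched = [experts[t] for t in idea_tags if t in experts]
    return matched or None

print(route(["AI", "retail"], experts))  # ['Dr. Lee', 'Mr. Kahn']
print(route(["biotech"], experts))       # None -> escalate, don't dismiss
```

Ideas that match no expert would be escalated rather than dismissed, which addresses the risk described above.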

I would like to remind you that this is not about forcing your opinion but identifying the best idea. I would propose to control your emotions and reach an agreement.

What if the evaluators have incongruent opinions on which ideas should be further pursued?
When selecting the best ideas, evaluators might hold diverse opinions due to different levels of expertise, a lack of shared understanding, or personal goals.
Cognitive biases pose another obstacle, as the evaluators might be influenced by their supervisors and feel pressured to agree with a certain opinion. In such cases, the evaluators or the moderator should find the right solution to maintain the balance and reach adequate compromises.
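
A moderator could make such divergence visible before the discussion. This sketch is an illustrative assumption, not a method from the source: it flags ideas whose score spread across evaluators exceeds a hypothetical threshold, so the panel knows where agreement is weakest.

```python
from statistics import stdev

# Hypothetical data: scores[idea] = scores (1-5) from different evaluators.
scores = {
    "idea-1": [5, 5, 4],
    "idea-2": [1, 5, 3],   # strong disagreement
    "idea-3": [2, 2, 3],
}

def divergent_ideas(scores, max_stdev=1.0):
    """Return ideas whose score spread suggests the panel should discuss them."""
    return sorted(idea for idea, s in scores.items() if stdev(s) > max_stdev)

print(divergent_ideas(scores))  # ['idea-2']
```

The flagged ideas could then be debated openly, instead of averaging away a disagreement that might signal a genuinely polarizing idea.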

We never received more than, maybe, 300 submissions in a competition, but now we have 5000. We need to come up with a good plan to minimize the effort but still ensure quality.

What if the innovation contest results in numerous ideas for evaluation?
When an innovation contest generates an unexpectedly high number of ideas, the evaluation process becomes challenging. As human resources are usually very limited, the evaluation team has to find efficient ways to deal with the workload.
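
One common way to deal with such workloads is a two-stage funnel: a cheap screening pass removes clearly non-compliant submissions before any detailed review. The sketch below is a hypothetical illustration (the screening rules, keyword, and submissions are invented, not taken from the source).

```python
def passes_screening(idea: str, min_words: int = 10,
                     required: tuple = ("customer",)) -> bool:
    """Cheap first-pass filter: minimum length and at least one required keyword."""
    words = idea.lower().split()
    return len(words) >= min_words and any(k in words for k in required)

# Hypothetical submissions.
submissions = [
    "great idea!!!",  # too short: rejected without detailed review
    "offer a customer loyalty program that converts recycling into "
    "store credit for repeat purchases",
]
shortlist = [s for s in submissions if passes_screening(s)]
print(len(shortlist))  # 1
```

Only the shortlist would then receive full, criteria-based evaluation, keeping the detailed workload proportional to the quality of submissions rather than their raw number.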

I’m getting a bit nervous here. We want to make sure that no good ideas were eliminated, so can you guys go back and check the eliminated ideas?

What if the evaluation is so demanding that the evaluators exclude ideas that have further potential?
As idea contests can result in a large number of submissions, it can be hard to adequately process every single idea.
Assessing every single submission in detail requires considerable time, effort, and resources. In such cases, the evaluators have to deploy efficient and effective strategies that reduce the number of ideas while ensuring that all good ideas are retained.

These are ideas that were “pushed” for network reasons and not quality reasons, and they should in no way move forward, given that they are completely unrealistic.

What if the evaluators assess the ideas according to their own biases instead of objective criteria?
The involvement of several evaluators in the idea selection process can result in conflicting interests. Many evaluators select ideas according to their taste, expertise, and personal interests. Thus, the assessment can often be influenced by personal relationships between evaluators and ideators, a particular interest in the domain, or other subjective factors.
In such cases, the team has to find alternative ways to minimize the subjectivity in the evaluation process.

Some people pay attention to the instructions and try to understand what they’re asked for, while others are not interested.

What if the idea content does not comply with the contest expectations?
Even when the submission requirements are clearly set, ideators submit ideas that do not fully meet the expectations. Some ideas are too long and complex, while others are short, poorly written, or lack important characteristics.
Such differences in the content increase the level of complexity and require additional effort by the evaluators.

In that particular case, the topic was too complex for the crowd. We want an idea that not only sounds good, but is also feasible.

What if the crowd has the opportunity to influence the idea assessment?
The submission of numerous ideas often urges the evaluators to think of additional strategies to facilitate an efficient and effective process. In several cases, taking into consideration the opinion of the crowd can be a good solution, as the likes and comments can easily emphasize the strengths and weaknesses of an idea.
Unfortunately, the opinion of the crowd can often be biased or misleading, especially when the selection criteria focus on attributes like feasibility or relevancy. To balance the pros and cons, the team has to find the right techniques to make use of the wisdom of the crowd while minimizing potential biases.
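
One such technique could be to blend the crowd signal with expert judgment rather than letting either dominate. The sketch below is an illustrative assumption (the 70/30 weighting, the 0-10 scale, and the numbers are hypothetical): crowd likes are rescaled to the expert scale, then combined with a weight that caps the crowd's influence.

```python
def blended_score(expert: float, crowd_likes: int, max_likes: int,
                  w_expert: float = 0.7) -> float:
    """Blend an expert score (0-10) with crowd likes rescaled to the same range."""
    crowd = 10 * crowd_likes / max_likes if max_likes else 0.0
    return w_expert * expert + (1 - w_expert) * crowd

# Hypothetical idea: expert rates it 8/10; it drew 120 of the contest's
# maximum 300 likes.
print(round(blended_score(expert=8.0, crowd_likes=120, max_likes=300), 2))  # 6.8
```

The expert weight could be raised for criteria the crowd judges poorly (such as feasibility) and lowered for criteria where the crowd is a good proxy (such as customer appeal).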

So if you ask three people, you'll probably get different assessments. That is very complex.

What if the selection criteria do not fit the evaluation expectations?
Often the selection criteria cannot fully support the evaluation process, especially when they are 1) not strictly defined, 2) too many, 3) too fine-grained, or 4) too coarse-grained.
In such cases the evaluation team has to determine the ideal outcome and decide accordingly the right criteria to facilitate the process.
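
When the misfit is that criteria sit on different numeric scales, a simple fix is to rescale them to a common range before aggregating. The sketch below is hypothetical (the scales, scores, and equal weighting are invented for illustration): a 1-5 novelty rating and a 0-100 feasibility rating are both mapped onto 0-1 via min-max rescaling.

```python
def rescale(value: float, lo: float, hi: float) -> float:
    """Map a score from its native scale [lo, hi] onto [0, 1]."""
    return (value - lo) / (hi - lo)

novelty = rescale(4, 1, 5)         # rated on a 1-5 scale
feasibility = rescale(60, 0, 100)  # rated on a 0-100 scale
print(round((novelty + feasibility) / 2, 3))  # 0.675
```

Without such normalization, the criterion with the wider scale would silently dominate any summed or averaged total.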

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Austria License.