Motivation – Online contest platforms that use crowdsourcing for open innovation have recently begun to offer a wide range of IT-enabled mechanisms for interaction among users and on ideas (Adamczyk et al. 2012). The desired outcome of a contest is a small number of excellent winning ideas that must be identified within a potentially large set of ideas generated by the crowd. Winner determination is a task usually performed by knowledgeable jury boards or expert committees (Terwiesch & Xu 2008). However, it is time consuming for the members of these committees to sift through the multitude of generated ideas, which creates the need to select a manageable number of quality ideas for detailed examination (Blohm et al. 2013; Di Gangi et al. 2010).
Gap – Organizations experiment with different approaches to the selection of contest ideas. While some contest platforms include IT-enabled mechanisms that allow their users to openly assess and evaluate ideas, others exclude these mechanisms and leave the selection task to members of the organization (King & Lakhani 2013). Researchers have conceptualized idea selection and evaluation with predominantly subjectively assessed criteria for idea quality (Dean et al. 2006), such as creativity (e.g. Rietzschel et al. 2010), novelty, relevance and feasibility (e.g. Riedl et al. 2013), or elaboration (Piller & Walcher 2006). While these characteristics focus on the ideas themselves, and thus on the value created by ideators on contest platforms, prior research has also acknowledged the interactions on ideas and therefore the value added by the community of users (Malhotra & Majchrzak 2014; Zhao & Zhu 2014). However, the community's interactions are not only meaningful for identifying user roles (e.g. Füller et al. 2014) or understanding sources of motivation on these platforms (e.g. Leimeister et al. 2009); they can also represent determinants of idea evaluation and selection, which remains to be conceptualized (Majchrzak & Malhotra 2013).
Aim – This planned research investigates interaction behavior on idea contest platforms in order to explore the effects of comment and revision patterns on idea quality. Specifically, we investigate patterns of feedback interventions (Kluger & DeNisi 1996; Wooten & Ulrich 2014) that community members initiate with comments, and ideators' reactions to these feedback interventions through revisions of already submitted ideas, and we analyze their relationships with idea quality. We expect to contribute to a better understanding of the effects of the community's use of IT-enabled mechanisms on idea contest platforms, and to the design of automated decision support that can inform the selection of excellent ideas.
Preliminary results – We examine data from an open idea contest that took place on a web-based crowdsourcing platform and comprised 510 users, 525 ideas, and 932 comments. Ideators revised 139 of the submitted ideas at least once before the contest closed. After the contest ended, a small team of contest host employees evaluated the submitted ideas and selected 43 of them as quality ideas. The measure 'quality idea' is thus a binary, externally assessed variable. Figure 1 categorizes ideas according to whether they received revisions or comments and shows that 299 (57%) of the generated ideas received comments. The category 'ideas with comments' includes a larger share of quality ideas (42; 14.0%) than the category of ideas without comments (1; 0.4%). In addition, 26.5% of all generated ideas were revised. Among ideas that had received comments, those that were also revised had a higher chance of being selected (22; 23.7%) than unchanged ideas (20; 9.7%). The evaluation panel selected none of the ideas that remained both uncommented and unrevised.
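To illustrate how this categorization can be derived from the raw contest data, the following minimal sketch cross-tabulates ideas by comment and revision status; the file name and columns (idea_id, n_comments, n_revisions, selected) are illustrative assumptions, not the actual schema of our data set.

```python
import pandas as pd

# Hypothetical schema: one row per idea with comment and revision counts
# and the externally assessed binary quality flag ('selected').
ideas = pd.read_csv("ideas.csv")  # assumed columns: idea_id, n_comments, n_revisions, selected

ideas["commented"] = ideas["n_comments"] > 0
ideas["revised"] = ideas["n_revisions"] > 0

# Number of ideas, number of selected quality ideas, and selection rate
# per comment/revision category (cf. Figure 1).
summary = (
    ideas.groupby(["commented", "revised"])["selected"]
         .agg(n_ideas="count", n_selected="sum", selection_rate="mean")
)
print(summary)
```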
Method – Beyond this initial descriptive analysis, we plan a three-stage qualitative-quantitative analysis to test the effects of patterns of comments and revisions on idea quality. We will start with a qualitative content analysis of comments and idea revisions. Our coding framework will be deductively informed by types of feedback investigated in past studies. We will use text mining (Walter & Back 2013) and natural language processing algorithms (Tan et al. 2014) to operationalize some of the identified feedback types, for example positive appraisal or clarification. We anticipate that automated analysis will be limited for some of the more complex feedback types, for example outcome feedback. The coding framework for types of content revisions will help us assess whether idea descriptions were extended or reduced. In the second stage, we intend to apply process mining algorithms, such as the Flexible Heuristic Miner (Weijters & Ribeiro 2011). This analysis requires data preparation to create a process log in which each idea represents a process instance and the time-stamped comments and revisions on that idea appear in sequence. The algorithm yields a heuristic net, which is similar to a process diagram and highlights the most likely sequences. We will analyze and describe these patterns in depth. In the last stage, we will run a hierarchical logistic regression analysis on a second, comparable data set. The identified patterns will serve as predictor variables and idea quality as the response variable, controlling for personality traits and motivation to participate.
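As an illustration of the data preparation for the second stage, the following sketch assembles a process log in which each idea is a process instance and its time-stamped comments and revisions form the event sequence. The file and column names are assumptions, and pm4py's heuristics miner is used here only as a convenient stand-in for the Flexible Heuristic Miner (Weijters & Ribeiro 2011).

```python
import pandas as pd
import pm4py

# Hypothetical event tables: time-stamped comments and revisions per idea.
comments = pd.read_csv("comments.csv")    # assumed columns: idea_id, timestamp, feedback_type
revisions = pd.read_csv("revisions.csv")  # assumed columns: idea_id, timestamp, revision_type

comments["activity"] = "comment:" + comments["feedback_type"]
revisions["activity"] = "revision:" + revisions["revision_type"]

# Each idea becomes one process instance (case); its comments and revisions
# become the time-ordered events of that case.
events = pd.concat([
    comments[["idea_id", "timestamp", "activity"]],
    revisions[["idea_id", "timestamp", "activity"]],
])
events["timestamp"] = pd.to_datetime(events["timestamp"])
events = events.sort_values(["idea_id", "timestamp"])

log = pm4py.format_dataframe(
    events, case_id="idea_id", activity_key="activity", timestamp_key="timestamp"
)

# The heuristics miner yields a heuristic net that highlights
# the most frequent comment/revision sequences.
heu_net = pm4py.discover_heuristics_net(log, dependency_threshold=0.6)
pm4py.view_heuristics_net(heu_net)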
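For the third stage, the next sketch outlines the blockwise (hierarchical) logistic regression, assuming an analysis table with placeholder pattern indicators and control variables; the actual predictors will be the patterns identified in stage two.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis table for the second data set: one row per idea with
# binary pattern indicators, control variables, and the quality flag.
df = pd.read_csv("analysis_dataset.csv")

controls = ["extraversion", "openness", "intrinsic_motivation"]            # placeholder controls
patterns = ["feedback_then_extension", "repeated_feedback_revision_cycle"]  # placeholder patterns

# Blockwise (hierarchical) entry: controls first, then the interaction patterns.
for block in (controls, controls + patterns):
    X = sm.add_constant(df[block])
    model = sm.Logit(df["quality_idea"], X).fit(disp=False)
    print(model.summary())
```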
Expected findings and implications – We expect to identify several patterns that frequently occur during idea refinement. These patterns will describe types of interdependencies between feedback given through comments and refinement of ideas through revisions. Moreover, the hierarchical logistic regression analysis will identify the patterns that explain a considerable amount of variance and significantly affect idea quality. These findings will contribute to our understanding of how the community uses IT-enabled mechanisms of contest platforms to improve idea quality, and to the design of improved automated decision support for teams that need to identify quality ideas. Consequently, our findings may not only support decision making (West 2002), but may also help automate the selection process to some extent. This should make the selection of crowdsourced ideas more efficient (Kornish & Ulrich 2011) and effective (Blair & Mumford 2007).