Make It or Break It: The Endogenous Nature of Idea Evaluation in Collaborative Knowledge Creation
Abstract
Many organizational settings involve processes in which individuals create knowledge and information products that are subsequently evaluated and validated by decision-making committees. For example, product development engineers design product features that are evaluated by production and marketing specialists (e.g., Cardinal et al., 2011). Similarly, many forms of IT-enabled open innovation (e.g., West, Salter, Vanhaverbeke, & Chesbrough, 2014; Afuah & Tucci, 2012) involve a process in which organizations broadcast research and development problems and select among ideas from external problem solvers, as in the case of peer-production communities such as open source software (OSS) development platforms (e.g., von Hippel & von Krogh, 2003).
Empirical studies exploring such collective knowledge creation settings often analytically separate the creation of knowledge products from their evaluation and selection (Harvey & Kou, 2013; Nonaka, von Krogh, & Voelpel, 2006), focusing either on how the quality or quantity of knowledge products may be increased, or on how evaluation decisions can be made more accurately and efficiently. Investigating knowledge production and evaluation in isolation offers in-depth insight into how to improve each stage of the process. In practice, however, the creation and evaluation stages are rarely sequential and independent. Evaluation activities are often endogenous to and embedded within the creative process (Harvey & Kou, 2013), and they generate valuable insights into how ideas may be reshaped and developed further. This suggests an interactive relationship between knowledge creation and evaluation, as well as dynamic cycling between individual and collective work. With few exceptions, these dynamics remain largely unexplored.
This study aims to contribute to a better understanding of the embedded nature of evaluation within the knowledge creation process (e.g., Nonaka & von Krogh, 2009) by focusing on the reciprocal relationship between the creation of knowledge products, including ideas, and their evaluation. We do so by exploring how organizationally relevant features of the ideation and implementation process in an online peer-production community interact with the community’s evaluation process. Consistent with prior work (Girotra, Terwiesch, & Ulrich, 2010; Criscuolo et al., 2016), we also focus on the specific attributes of an idea. Specifically, we aim to answer two research questions:
1. How do idea attributes (e.g., complexity, interdependence) and organizationally relevant features of knowledge creation (e.g., an individual’s experience, affiliation, reputation, and network ties) affect the composition of the evaluation committee, the evaluation process (e.g., number of reviewers, reviewer experience, reviewer effort, quantity and quality of feedback), and its outcomes?
2. How do attributes of the evaluation committee and process (e.g., number of reviewers, reviewer experience, reviewer effort, quantity and quality of feedback) influence idea quality?
The context for this study is the OpenStack OSS community. Studying evaluation in the context of a peer-production OSS development community will enable us to extend previous studies on knowledge creation and evaluation in three ways. First, OSS development platforms are characterized by flat, self-organized structures in which both developers and evaluators (i.e., reviewers) self-select into their respective roles (Puranam et al., 2014). Because reviewers self-select into evaluating a software development task (rather than being assigned to it), this setting is particularly suitable for exploring how the “attractiveness” of an idea to the pool of potential reviewers affects the evaluation process and development success.
Second, the collaborative innovation process in the OSS development community we will study unfolds online and is fully transparent (von Krogh et al., 2003). These exceptionally rich data will allow us to empirically isolate features of the ideation and evaluation process and to explore interactions between knowledge creation and evaluation activities longitudinally and in fine-grained detail.
Third, the extant literature has often black-boxed the committee of evaluators or treated it as homogeneous (e.g., Knudsen & Levinthal, 2007). Consistent with Criscuolo et al. (2016), we argue that the composition of the reviewer committee is important for understanding how evaluation decisions are made. We utilize detailed data on committee composition to extract novel insights into the implications of evaluation committee composition for the knowledge creation process underlying collaborative innovation (Baldwin & von Hippel, 2011).
More specifically, our analysis draws on longitudinal data from OpenStack covering all developers and the implementation trajectory of every idea started and completed between October 2010 and October 2015. This population contains information on 1,300 unique reviewers performing more than 119,000 reviews. In addition to metadata on the evaluation process (e.g., number of reviewers, number and duration of review rounds, reviewer experience and reputation), the dataset includes micro-level data on the content of each review, including detailed feedback on all changes following each evaluation, as well as the outcome of the review process.
To aid in data interpretation, we have collected qualitative data from various forms of platform documentation and from interviews with community members (developers) and administrators. We take software development ideas as the unit of analysis and explore their implementation and evaluation trajectories. Following prior evaluation studies (e.g., Girotra et al., 2010), we establish baseline measurements of implementation quality by calculating initial reviewer ratings after an implementation is completed and submitted for review.
We will primarily use growth curve modeling (Bollen & Curran, 2006; Raudenbush & Bryk, 2002), among other analytical techniques, to estimate inter-unit variability and intra-unit patterns of change, i.e., variation within and between development implementations and evaluation trajectories. Growth curve modeling enables us to systematically assess increases or decreases in review scores over time in relation to developments in the knowledge creation process. It also allows us to separate fixed effects (e.g., initial ratings, mean development trajectories) from random effects (e.g., variance around initial conditions and development trajectories) and thereby gain deeper insight into the development trajectory. We expect to complete a comprehensive analysis of the collected data set and to identify emerging theoretical insights by the time of the conference presentation (1 August 2016).
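To illustrate the fixed- and random-effects decomposition, a minimal two-level growth curve specification in the style of Raudenbush and Bryk (2002), assuming a linear time trend (the actual model may add level-2 predictors and higher-order terms), could take the following form, where $y_{it}$ denotes the review score of implementation $i$ at review round $t$:

Level 1 (within implementations): $y_{it} = \pi_{0i} + \pi_{1i} t_{it} + \epsilon_{it}$

Level 2 (between implementations): $\pi_{0i} = \gamma_{00} + u_{0i}$ and $\pi_{1i} = \gamma_{10} + u_{1i}$

Here $\gamma_{00}$ and $\gamma_{10}$ are the fixed effects (the mean initial rating and the mean rate of change in review scores), while $u_{0i}$ and $u_{1i}$ are the random effects capturing variance around initial conditions and development trajectories. In our setting, idea attributes and evaluation-committee features would enter as level-2 predictors of the intercept $\pi_{0i}$ and the slope $\pi_{1i}$.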
Authors
- Shiko Ben-Menahem (ETH Zürich)
- Yash Raj Shrestha (ETH Zürich)
- Georg von Krogh (ETH Zürich)
Topic Area
Communities: User Innovation and Open Source
Session
TATr2B » Communities: User Innovation & Open Source (Papers & Posters) (15:45 - Tuesday, 2nd August, Room 112, Aldrich Hall)
Paper
Idea_Generation_and_Evaluation_OUI_2016.pdf