Crystallized intelligence (gc) is a crucial factor in consensus theories of intelligence. Despite its well-documented importance for learning, academic achievement, and job performance, our knowledge about gc is still limited. The term “crystallized intelligence” was originally coined by Cattell, who subsumed skills, knowledge, and language-related abilities under one broad factor. Even though many theorists follow Cattell’s broad definition with its emphasis on declarative knowledge, gc is usually assessed with highly specific vocabulary measures. One reason for this mismatch is the immense variety of knowledge, which might further increase with age. Consequently, it is still unclear whether the gc profile of adults is highly idiosyncratic, as Cattell hypothesized. Conceptualizations of gc vary widely and include a single dimension, two correlated factors (humanities vs. sciences), three factors (science, humanities, and civics), and a six-dimensional model with an overarching g-factor. Arguably, the factor structure varies depending on (a) sample characteristics such as age and ability, and (b) characteristics of the measure, that is, the breadth and depth of the knowledge assessment. Previous research on the dimensionality of declarative knowledge is inconclusive. These studies often investigated specific samples, such as psychology freshmen, using item samples that were small in comparison to the present item pool. To administer a large item pool covering a wide range of subject areas, we used a smartphone-based approach, which also allowed us to gather data from a more heterogeneous sample.
In this study, we investigated responses to more than 4,000 items from more than 30 subject areas, such as technology, art, religion, and law, to examine the dimensionality of declarative knowledge. Data were collected through a mobile quiz app. We recruited participants via the internet, magazine articles, and radio interviews. Participants downloaded the app and worked on random samples of questions. Participation was voluntary, without restrictions concerning duration, time of day, or testing context. We will present results from the ongoing data collection. Specifically, we will compare a unidimensional model with several multidimensional models discussed in the literature. Preliminary analyses show that multidimensional models fit better, even though the dimensions were highly correlated. In further analyses, we examine how moderator variables such as age and ability influence the factor structure. Moreover, we discuss advantages of smartphone-based assessment, such as flexible testing conditions and access to heterogeneous and geographically diverse samples, as well as its disadvantages, such as the lack of experimental control and the self-selected nature of the sample.
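As an illustration of the intended model comparison, the sketch below contrasts a unidimensional model with a correlated three-factor model (science, humanities, civics) in a confirmatory factor analysis. It assumes aggregated domain-level scores in a CSV file and uses the semopy package; the domain names, file name, and package choice are illustrative placeholders rather than the study’s actual analysis pipeline.

```python
# Minimal sketch: compare a one-factor model of declarative knowledge with a
# correlated three-factor model (science, humanities, civics).
# Assumption: per-participant proportion-correct scores for each knowledge
# domain are available as columns of a CSV file; names are hypothetical.
import pandas as pd
import semopy

df = pd.read_csv("domain_scores.csv")  # e.g., columns: physics, biology, literature, art, history, law

models = {
    "unidimensional": """
        gc =~ physics + biology + literature + art + history + law
    """,
    "three_factor": """
        science    =~ physics + biology
        humanities =~ literature + art
        civics     =~ history + law
        science ~~ humanities
        science ~~ civics
        humanities ~~ civics
    """,
}

for name, description in models.items():
    model = semopy.Model(description)
    model.fit(df)                      # maximum-likelihood estimation by default
    print(name)
    print(semopy.calc_stats(model).T)  # chi-square, CFI, RMSEA, AIC/BIC, etc.
```

Comparing the resulting fit indices addresses the dimensionality question, while the size of the estimated latent covariances in the multidimensional model speaks to the preliminary finding that the dimensions, though separable, are highly correlated.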
Education, Measurement and Psychometrics, Group differences