Learning, Memory, Knowledge, & Intelligence
The notion that individuals differ in 'general knowledge' is sensible
Intelligence is related to things like vocabulary and general knowledge, i.e. crystallized intelligence, with fluid and crystallized intelligence correlating at ~+0.8 [1]. This is occasionally marshalled as an attack on IQ; some [2 & 3] for example have argued that the general intelligence factor (g) reflects little more than the cultural content of various WISC subtests. Many would dismiss the notion that people differ in such a thing as 'general knowledge' out of hand, arguing that the usefulness of knowledge is context-dependent and that different people have different interests which are useful for different things. Those who wish to litigate such a complaint whenever the subject comes up have more ammunition than they realise; people who score highly on IQ tests tend to be interested in different things than the average person. For instance, in terms of the RIASEC framework for classifying individual differences in interests, a recent k = 27, n = 55,297 meta-analysis [68] found that IQ correlated at r = +0.20 with 'realistic' interests, r = +0.25 with 'investigative' interests, r = -0.02 with 'artistic' interests, r = -0.16 with 'social' interests, r = -0.07 with 'enterprising' interests, and r = +0.01 with 'conventional' interests.
Nonetheless, there are a few reasons an intelligence-knowledge correlation might genuinely exist which aren't specific to any one sphere of knowledge:
Education: Different people get different amounts of education and are exposed to different amounts of information.
Curiosity: Some people are more curious than others and have a greater thirst for new ideas or become bored with old ideas more quickly.
Learning: Some people learn more quickly than others.
Memory: Some people forget what they’ve learned more quickly than others do.
1. Education:
The most recent meta-analysis of the causal effect of an extra year of education on IQ scores [4], a large, well-conducted meta-analysis, found increases of at most 5 IQ points per year. The paper doesn't merely look at the correlation between IQ and grades or years of education; rather, it analyses three different types of quasi-experimental studies to see what causal effect schooling has on the IQ scores of individual people. No signs of substantial publication bias were discovered in the meta-analysis. The fadeout effect of IQ gains seen in early intervention / 'Head Start' programs [5] was also replicated in the new meta-analysis [4]: the effect size for the smallest age gap between retesting was a gain of ~2.4 IQ points, while the effect size for the largest age gap between retesting was a gain of only ~0.3 IQ points.
There is, however, an important distinction to be made. IQ tests exist to measure a latent underlying general mental ability (g), but the tests and the latent variable are not necessarily the same thing. We know from a k = 59, n = 84,828 meta-analysis [6], for example, that IQ, measured first, predicts later educational attainment (years of schooling) measured at least 3 years later at r = +0.56. However, we also know from a k = 12, n = 60,993 meta-analysis [7] that in spite of this substantial causal effect of IQ on educational attainment, there's not much of a Jensen effect on the raw correlation between IQ and educational attainment (a Jensen effect being present when a variable correlates more strongly with the more g-loaded subtests); there is only an r = +0.13 correlation between IQ subtests' g-loadings and the degree to which a subtest correlates with educational attainment [7]. We should expect g to be the predominant driver of the causal effect of IQ on education, and if the causal effects of education on IQ were also on g, then we should expect such effects to also contribute to a Jensen effect on the association between the two, but this doesn't appear to be the case.
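To make the logic concrete, here is a minimal sketch of the 'method of correlated vectors' used to test for Jensen effects; all numbers are invented stand-ins for published subtest g-loadings and subtest-criterion correlations:

```python
# A toy demonstration of the method of correlated vectors. The six subtest
# g-loadings and criterion correlations below are invented for illustration.
import numpy as np

# Hypothetical g-loadings for six IQ subtests (from a factor analysis).
g_loadings = np.array([0.75, 0.68, 0.81, 0.55, 0.62, 0.70])

# Hypothetical correlations of the same subtests with an external criterion
# (here, educational attainment).
criterion_corrs = np.array([0.40, 0.33, 0.45, 0.30, 0.31, 0.38])

# A Jensen effect is present when the more g-loaded subtests correlate more
# strongly with the criterion, i.e. when these two vectors correlate
# positively; [7] reports this vector correlation to be only +0.13.
vector_r = np.corrcoef(g_loadings, criterion_corrs)[0, 1]
print(f"vector correlation (Jensen effect): {vector_r:+.2f}")
```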
We also know that:
The IQ gains from cognitive training [8] and re-testing [9] are not on g.
Score gains on IQ subtests cause decreases in g-loadings for the subtests that the gains apply to [9 & 10].
So then, we may expect that for educational duration too, if we were to investigate these causal IQ gains, we would find them to be g-hollow. Indeed, one paper [11] used structural equation modelling on a longitudinal sample with an exceptionally long follow-up (~60-year gap) to see if the effects of education on IQ are actually on g. The first model tested was that extra education was purely associated with increases in g. The second model was that extra education was associated with increases in g as well as in other, more specific abilities. The third model was that extra education was associated with IQ only through specific abilities rather than through g. The authors found the last model to be the best fit. They also ran other analyses to confirm these results; no matter what, the third model, in which education has no impact on g, was the best fit.
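A minimal sketch of that model-comparison logic, assuming the third-party semopy package and entirely synthetic data (the indicator names, effect sizes, and model syntax are simplified illustrations, not the original study's measurement model):

```python
# Compare three SEMs of how education relates to cognitive test scores:
# through g only, through g plus specific abilities, or through specific
# abilities only. Synthetic data; lower AIC/BIC indicates better fit.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 1_000
g = rng.normal(size=n)
education = rng.normal(size=n)
df = pd.DataFrame({
    "education": education,
    # Data generated so education affects two specific tests, not g itself.
    "vocab":    g + 0.4 * education + rng.normal(size=n),
    "matrices": g + rng.normal(size=n),
    "memory":   g + 0.3 * education + rng.normal(size=n),
    "speed":    g + rng.normal(size=n),
})

models = {
    "1: g only":        "G =~ vocab + matrices + memory + speed\nG ~ education",
    "2: g + specific":  "G =~ vocab + matrices + memory + speed\nG ~ education\n"
                        "vocab ~ education\nmemory ~ education",
    "3: specific only": "G =~ vocab + matrices + memory + speed\n"
                        "vocab ~ education\nmemory ~ education",
}

for name, desc in models.items():
    model = semopy.Model(desc)
    model.fit(df)
    stats = semopy.calc_stats(model)  # one-row DataFrame of fit indices
    print(f"{name}: AIC={stats['AIC'].values[0]:.1f} "
          f"BIC={stats['BIC'].values[0]:.1f}")
```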
These results [11] have since been directly replicated [12]. Similar results were also found in a study [13] which took longitudinal data on education and IQ and tested whether the gains were associated with performance increases on various reaction-time tests. The authors found that the effects of education were not on reaction times after controlling for a number of variables. The authors argue that this does not tell us whether the education gains are on g or not [11], but the effect of education on reaction times after controlling for other variables was larger on simple reaction time than on choice reaction time, the latter being the more g-loaded test [14]. Similarly, we can test this by seeing whether or not fluid intelligence is increased by education. Fluid intelligence has to do with reasoning abilities, whereas crystallized intelligence is the accumulation of knowledge and skills over time. One study of 1,367 eighth graders in Boston public schools found that while some schools were able to raise achievement test scores, they were not able to increase fluid cognitive skills like working memory capacity and information processing [15].
Other path-analysis models are also consistent with variation in g being causal for differences in educational achievement rather than the other way around. These are pretty straightforward studies: they take data on IQ and achievement at two time points and run a cross-lagged panel analysis, estimating one cross-lagged path from g at time 1 to educational achievement at time 2 and another from educational achievement at time 1 to g at time 2. They compare these and make a causal inference based on which is stronger. Both of the studies done on this show the path from g to educational achievement to be stronger than the reverse, with the reverse path being statistically insignificant [16 & 17].
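A minimal sketch of the cross-lagged logic with synthetic data in which g drives later achievement but not the reverse (the effect sizes are invented; the actual studies fit both lagged paths simultaneously in latent-variable models):

```python
# Cross-lagged panel comparison: regress each time-2 variable on both time-1
# variables and compare the two cross-paths. Synthetic data.
import numpy as np

rng = np.random.default_rng(42)
n = 2_000
g1 = rng.normal(size=n)                       # g at time 1
ach1 = 0.5 * g1 + 0.8 * rng.normal(size=n)    # achievement at time 1
g2 = 0.9 * g1 + 0.3 * rng.normal(size=n)      # g is highly stable over time
ach2 = 0.5 * g2 + 0.3 * ach1 + 0.7 * rng.normal(size=n)

def z(x):
    return (x - x.mean()) / x.std()

X = np.column_stack([z(g1), z(ach1)])
beta_ach2, *_ = np.linalg.lstsq(X, z(ach2), rcond=None)
beta_g2, *_ = np.linalg.lstsq(X, z(g2), rcond=None)

# The path from g(t1) to achievement(t2) should be substantial, while the
# path from achievement(t1) to g(t2) should be near zero.
print(f"g(t1) -> achievement(t2): {beta_ach2[0]:+.2f}")
print(f"achievement(t1) -> g(t2): {beta_g2[1]:+.2f}")
```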
Educational Quality:
Perhaps educational quality is what matters rather than the raw number of completed years of schooling. Probably not; voucher studies, in which a random selection of poor kids are sent to prestigious schools and compared to poor kids who happened not to receive a voucher (an apples-to-apples comparison), find school quality to have minuscule, non-existent, or even negative effects on school test scores:
The Cleveland Voucher Program [18]:
The Milwaukee Voucher Program [19]:
The Washington DC Voucher Program [20]:
The Moving To Opportunity Experiment [21]:
In this experiment, run from 1994-1997 in Boston, Baltimore, Chicago, Los Angeles, and New York, 4,248 families (with over 5,000 children aged 6-20) in section 8 public housing applied for a housing voucher program to move their kids to wealthier neighborhoods with better schools. The applicants were randomly assigned to either a control group (n = 1,310), a treatment group (n = 1,729), or a section 8 group (n = 1,209), and were assessed 4-7 years later on the Woodcock-Johnson academic achievement tests; the difference between the two treatment groups is that vouchers received by the 'treatment' group could not be used in a neighborhood with a poverty rate greater than 10%, while vouchers received by the 'section 8' group could only be used to rent housing under the standard terms of the US section 8 housing program. Such an experiment should capture not only the effects of going to better-funded schools with better average test scores, but also those of living in the more education-oriented cultures of the neighborhoods of the children who attend these schools.
Treatment effects on behavioural problems, and especially on test scores, were minuscule and statistically insignificant across the board.
In the subgroup analyses, Blacks experienced a marginally-significant effect on reading scores of 8 hundredths of a standard deviation (reported as 8 tenths of a standard deviation elsewhere in the text of the report due to a typo), but many subgroup analyses were done, and this result would not have survived correction for multiple testing. Moreover, effects are reported in a manner which adjusts for the fact that voucher recipients differ in their propensity to actually use the voucher; this introduces a bias similar to what would be seen if, instead of comparing voucher recipients to non-receiving applicants, the voucher recipients were compared to non-applicants. Results are thus upwardly biased.
The study [21] reports that the outcome differences between groups were measured in terms of 'school achievement tests', but said tests were actually the Woodcock-Johnson achievement tests. This makes the results of the study especially useful for our purposes because the Woodcock-Johnson tests of school achievement are largely just a measure of g. For example, in one nationally representative n = 4,969 sample [22], the general factor underlying the Woodcock-Johnson achievement tests correlated at +0.83 with the general factor underlying the Woodcock-Johnson tests of general mental ability, with this correlation increasing with age from +0.77 to +0.94 [22].
All in all, if general educational attainment is to be taken as having a generalized effect on knowledge, then rather than education being an influence on the general intelligence factor (g), education would be a mechanism by which g increases knowledge in all areas.
2. Curiosity:
One of the most robust associations between intelligence and personality is that with openness to experience [23 & 24]. Specifically, the relationship appears to be predominantly driven by the association between intelligence and the 'Ideas' & 'Actions' facets of openness [25], with 'Ideas' being the facet of openness which most resembles 'curiosity', 'openness to ideas', or 'need for cognition'. Different teams of researchers occasionally stumble upon this finding independently. Some, for example, have devised measures of individual differences in what they thought of as a trait of "need for cognition" (NFC), and such measures correlate with fluid intelligence at about +0.25 [26] to +0.40 [27]. Of course, this "need for cognition" trait is largely just encompassed by openness, correlating with openness at +0.41 [26] and with the Ideas facet in particular at +0.67 [26].
Given this, it should come as no surprise that, along with general mental ability, openness is also associated with general knowledge [28, 29, 30, & 31], with openness even having a fair amount of incremental validity beyond IQ for predicting general knowledge [28, 29, & 31]. However, rather than general knowledge being causal for openness to experience by means of exposing people to topics of potential interest, it appears that A) openness is more related to fluid intelligence than to crystallized intelligence [26 & 27]; and B) as previously discussed, neither higher-quality nor longer education increases intelligence, so exposure to more information, or to better teachers who inspire interest in various topics, cannot be the mechanism at work.
It's likely that rather than a "need for cognition" being causal for intelligence, more intelligent people experience intellectually-demanding activities as less draining and thus as more enjoyable to engage in. For example, one twin study [32] using a cross-lagged path design found that A) reading achievement at age 10 predicted independent reading at age 11, while the path from independent reading at age 10 to reading achievement at age 11 was non-significant; B) independent reading had a heritability of 62% for 10-year-olds and 55% for 11-year-olds; and C) the path from reading achievement at age 10 to independent reading at age 11 was substantially genetically mediated. It seems fair to characterize independent reading as a measure of NFC in the reading domain, and as such, this suggests that ability causes NFC rather than NFC causing ability in this domain. If we were to generalize these findings to other domains, we'd say that NFC and openness in general are not causal for intelligence. If that were the case, however, it would demand an explanation of why openness has incremental validity beyond IQ for predicting general knowledge, as the two findings sit in tension.
Consistent with the idea that NFC has to do with the ability to bear the burden of engaging with intellectually-effortful tasks is the finding that despite conscientiousness (encompassing things like "competence", "order", "dutifulness", "deliberation", "achievement-striving", and "self-discipline" [25]) usually being unrelated [23] or even negatively related [25] to intelligence, conscientiousness is significantly positively associated with NFC [26]. That is, conscientiousness is associated with NFC because NFC concerns one's propensity to engage in intellectually-effortful tasks and conscientiousness concerns one's propensity to engage in effortful tasks generally, but conscientiousness is not associated with intelligence because causality runs from intelligence to NFC rather than from NFC to intelligence.
On the whole, it seems that the river of causality is such that fluid intelligence is causal for both NFC and general knowledge, and that NFC is a mechanism by which fluid intelligence influences general knowledge and crystallized intelligence. The twin study [32] gives some contradictory results, but given the weaknesses of cross-lagged path models for assessing causality [69], and given the other lines of evidence, I’d reckon that NFC is indeed a mechanism by which fluid intelligence affects crystallized intelligence.
3. Learning:
Measuring individual differences in learning ability is a little bit tricky. Let's say that we have a test we want to use to assess learning where somebody is asked 10 questions before being given some time to train for the test, after which the degree of improvement over the pre-test is taken as the degree of learning. If one person is higher in general knowledge than another and for this reason scores higher on the pre-test, they will end up with a lower measured degree of learning than the more ignorant person even if the two score identically on the post-test. For this reason, the research literature is typically concerned with measuring what it calls 'Time To Learn' (TTL). TTL is typically measured by dividing people's degrees of learning by the amounts of time they take to achieve their respective learning gains; the idea is to measure the amount of time per 'unit of learning'. Alternatively, when a sufficiently long and esoteric test is available, TTL can occasionally be measured as the number of 'units of learning' achieved in a set amount of time, because most people won't know all of the test content and won't even be able to learn all of it within the allotted time.
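A minimal sketch of the two scoring approaches just described, with invented scores and times:

```python
# Time To Learn (TTL): time per 'unit of learning', or units learned per
# unit of time. Scores and durations are hypothetical.
pre_score, post_score = 4, 9    # out of 10 questions
study_minutes = 30.0

gain = post_score - pre_score

# Approach 1: minutes per point gained (lower = faster learner).
print(f"{study_minutes / gain:.1f} minutes per point gained")

# Approach 2: with a long, esoteric test and a fixed study window, count the
# points gained in the allotted time (higher = faster learner).
print(f"{gain / study_minutes:.2f} points gained per minute")
```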
A comprehensive meta-analysis of all reported correlations in the literature would have little scientific value, as the various studies' effect sizes do not represent estimates of a single true value in the way that, for instance, various estimates of the size of the earth might. The correlations are influenced in complex ways by so many conditions, many of them not explicitly outlined in the write-ups of the studies that produced them or perhaps not even known to the authors or to anybody else, that any particular correlation coefficient, or even the mean of them all, isn't very meaningful.

Even measuring TTL has its own problems: performance increases tend to eventually asymptote, and when this happens, each 'unit of learning' tends to take longer to achieve than the last. This introduces the problem of having to correct learning rates for differences in where people are on their respective learning curves, which is especially hard to do with a small number of re-testing trials, and which becomes all the more difficult if individuals differ in their learning capacity such that their training asymptotes at different performance levels. Anybody with higher initial test performance would be harmed by such effects even if they're a theoretically-faster learner (see the sketch below).

Generally, measures of individual differences in learning aptitude are not as highly developed as measures of individual differences in cognitive abilities. There is little standardization; there is greater measurement error due to the greater number of parameters that need estimation and due to the complexity of said parameters; there is a great deal of genuine item specificity; and data for large numbers of test items are difficult to obtain because of the large time investment participants must give to each item. There is also a more basic measurement problem: learning tasks permit transfer from somewhat different but related past learning, so there is no hard measurement distinction between "learning" and the philosophical categories of "reasoning ability" and "learning transfer", a point taken up in its own section below.
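To illustrate the asymptote problem, here is a toy sketch which fits an exponential learning curve to one learner's practice trials; the functional form and all numbers are illustrative assumptions, not a method drawn from the literature:

```python
# Fit performance = asymptote - gain * exp(-rate * trial) so that learners at
# different points on their curves can, in principle, be compared on `rate`.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(t, asymptote, gain, rate):
    return asymptote - gain * np.exp(-rate * t)

trials = np.arange(1, 11)
# Hypothetical scores of one learner across ten practice trials.
scores = np.array([3.1, 4.6, 5.8, 6.5, 7.2, 7.5, 7.9, 8.0, 8.2, 8.2])

(asymptote, gain, rate), _ = curve_fit(learning_curve, trials, scores,
                                       p0=[9.0, 6.0, 0.3])
print(f"asymptote={asymptote:.1f}, learning rate={rate:.2f}")
# Near the asymptote, each additional 'unit of learning' takes longer, so raw
# gain scores penalize anyone who starts (or ends) high on their curve.
```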
The simple correlational literature on the relationship between intelligence and learning has painted a picture which is rather inconsistent and which contributes little to our understanding.
Nonetheless, a variety of detailed reviews exist [37, 38, 39, 40, 41, 42, & 43], and there are a few empirical observations which seem to pop out at us:
The vast majority of correlations are above zero such that general mental ability is positively associated with the various learning parameters whenever higher scores indicate superior performance.
The g-loading of learning tends to increase with complexity. Concept learning for example is more g-loaded than rote learning [38].
Learning is more g-loaded when it “is intentional and calls forth conscious mental effort” [44 & 45].
Learning is more g-loaded when “the learning or practice trials are paced in such a way as to allow the subject time to think” [44 & 45].
Learning is more g-loaded when “the material to be learned is hierarchical in the sense that the learning of later elements depends on mastery of earlier elements” [44 & 45].
Learning is more g-loaded when “the material to be learned is meaningful in the sense of being related to other knowledge or experience already possessed by the learner” [44 & 45].
Learning is more g-loaded when “the learning task permits transfer-from somewhat different but related past learning” [44 & 45].
Learning is more g-loaded when “the learning is insightful, that is, it involves ‘catching on’ or ‘getting the idea’” [44 & 45].
Learning is more g-loaded when “the material to be learned is of moderate difficulty and complexity, in the sense of the number of elements that must be integrated simultaneously for the learning to progress” [44 & 45].
Learning is more g-loaded when "the amount of time for learning a given amount of material to a specified criterion of mastery is fixed for all students" [44 & 45].
Learning is more g-loaded when the learning material is positively age-related, that is, when some kinds of material are more readily learned by older children than by younger children [44 & 45].
Learning is more g-loaded when "performance gains are measured at an early stage of learning something 'new' than at a late stage of practice on the same task" [44 & 45].
Many of these features make learning, especially in its most highly g-loaded forms, an especially personal experience for each person, one which doesn't lend itself well to standardization or measurement, let alone to testing hypotheses based on such measurements.
Ignoring this and doing simple analyses of post-training performance controlling for pre-test performance, one k = 22 meta-analysis [33] found IQ to predict responses to reading instruction at only r = +0.17, and found that IQ had little incremental validity for predicting learning outcomes beyond other "constructs closely related to reading". It's likely fair to credit IQ as the source of these other abilities, however, and in turn, it's likely that the other abilities would have similarly-minuscule incremental validity for predicting learning outcomes beyond IQ, as the same authors did another meta-analysis looking at various other "baseline learner characteristics" and found them to predict post-test performance controlled for pre-test performance similarly weakly, at r = +0.15 [34]. Indeed, in The g Factor [35] (pp. 276-277), Jensen reanalyzed data from two large-scale USAF studies with a correlated-vectors approach and found that g was almost entirely (r = +0.82 & +0.87) responsible for the correlations between ASVAB subtests and a measure of learning scores [see also 35, p. 303, Footnote 25]. Whatever traits are responsible for the ability to predict learning outcomes, however, in both cases the ability to predict responses to training is likely attenuated by the rather-narrow outcome measures.
g:
To an extent, different things will be of random difficulty for different people to learn. Just as in standard tests of cognitive abilities there is a great deal of item specificity which is not informative as to how somebody will perform on any other item (e.g. Bob knows a random fact about turtles that he happened to overhear one day, but he's an otherwise-dull person who never studied marine biology), new pieces of information may, for example, be able to latch onto this random item specificity if an element of them is similar and familiar. For this reason, employing overly-narrow measures of learning outcomes should attenuate any association between intelligence and learning ability, as they capture a great deal of random item specificity which would cancel itself out if aggregated with larger numbers of test items (a toy simulation of this follows below). The best approach would be to take a variety of learning tasks and factor-analyse TTL, gain scores, and differences in performance levels at the asymptotes for the various tasks.
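A toy simulation of that attenuation, under the assumption that each outcome item is an equal mix of ability signal and item-specific noise:

```python
# Narrow outcome measures carry lots of item specificity; aggregating many
# items averages the specificity out, and the ability-outcome correlation
# climbs toward its ceiling. All parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(7)
n_people = 5_000
ability = rng.normal(size=n_people)

for n_items in (1, 5, 25, 100):
    # Each item = 0.5 * ability + unit-variance item-specific noise.
    items = 0.5 * ability[:, None] + rng.normal(size=(n_people, n_items))
    outcome = items.mean(axis=1)
    r = np.corrcoef(ability, outcome)[0, 1]
    print(f"{n_items:>3} item(s): r(ability, outcome) = {r:+.2f}")
```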
Factor analyses of this kind have been done. John B. Carroll, for example, in his factor-analytic explorations of the correlational structure underlying an incredibly diverse suite of cognitive ability tests [36], reviews some of the factor-analytic literature on individual differences in learning ability in chapter 7 (pp. 284-292), and also runs up against learning factors occasionally in his investigations of memory, as something must be learned before it can be forgotten. He concludes (p. 302) that A) there is some evidence for the existence of differentiable first-order learning factors which have specificity to particular kinds of learning situations; but that B) mirroring other, more directly measured cognitive abilities, although there are item specificities, there is a higher-order general factor underlying performance across items, across first-order learning factors, and across learning situations, and this general learning ability loads substantially on other higher-order cognitive ability factors, particularly fluid and crystallized intelligence. A variety of other reviewers of the evidence pertaining to this question have reached the same general conclusion [37, 38, 39, & 40]: no general learning factor exists independently of general mental ability; factor scores derived from the general factors of each domain are about as highly correlated as their statistical reliabilities permit.
Various contemporary intellectual assessments are increasingly incorporating learning ability into both their theoretical foundations and their measurements [66]. Foremost among them, grounded in Cattell-Horn-Carroll theory, is the Woodcock-Johnson IV, which now delineates a learning ability group factor distinct from a memory retrieval speed group factor [66, pp. 85-87]. The authors note [66, p. 86] that this learning factor loads on g to an unexpectedly high degree, at +0.95, in the WJ-IV standardization data.
Learning Ability Vs Reasoning Ability:
One basic problem in the measurement of learning is that some tasks permit learning transfer such that, through whatever mechanism, past learning makes related learning on the same subject easier in the future, and the new learning amounts to little more than a restructuring of old knowledge. Gains should thus be correlated with pre-training test performance, and so if these sorts of phenomena are considered a genuine type of learning, then any learning measure which controls for pre-test performance throws away some genuine variance in learning performance. This is worth careful consideration, as there is no hard measurement distinction between "learning" and the philosophical categories of "reasoning ability" and "learning transfer" unless we define "learning" so as to be limited to a memory phenomenon which has only to do with the initial memory-encoding process and which is measured by things like habituation or rote learning. Reasoning ability, then, would be an important enabler of learning, as one must first be able to comprehend a concept before one can memorize it.
If, on the other hand, we insist upon defining learning as something more meaningful than rote learning, then the WAIS-IV, together with its co-normed companion battery the WMS-IV, has learning well covered. For one, it has the "Perceptual Reasoning" group factor containing things like block design, visual puzzles, and matrix reasoning problems similar to the ones used in the Raven's test. Next, it has a "Working Memory" group factor (analogous to random-access memory on a computer) containing tests such as digit span, in which progressively longer series of numbers are read aloud to you and you have to say them back. After that, we have the "Memory" group factor consisting of various delayed recall tasks. Finally, we have the "Verbal Comprehension" group factor, comprising subtests like vocabulary, similarities (explaining what two concepts have in common), and general-information questions. In the standardization sample, "Perceptual Reasoning" loads on g at +0.82, "Working Memory" at +0.94, "Memory" at +0.77, and "Verbal Comprehension" at +0.76 [46].
Reading comprehension, of course, is a robust correlate of intelligence. An n > 370,000, k = 680 meta-analysis [47] found fluid intelligence to be more related to reading comprehension (r = +0.37, 95% confidence interval = [0.35, 0.39]) than to "code skills" such as "word reading" (r = +0.29, 95% CI = [0.27, 0.31]). Something to note: this meta-analysis assessed the correlation of reading comprehension with fluid intelligence rather than with the higher-order g, which likely slightly attenuates the association. Although contaminated with things like vocabulary, reading comprehension is, in a sense, an operationalization of learning ability, because it assesses a person's ability to comprehend a new text after having been given time to read (or learn) and think about it.
Working memory, too, is consistently found to have rather large g-loadings [48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, & 59]. While not denoted a "reasoning" ability, its involvement in reasoning should be obvious: it's hard to synthesize a bunch of information through reasoning if you can't even concurrently keep hold of all of it at the most basic of levels, and if you can't reason out the relations between all the pieces of information, then it's hard to encode those relations into your memory. Working memory in fact partially mediates the relationships of IQ with learning ability and with memory retrieval speed, but cannot fully account for either relationship [65].
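A minimal sketch of what "partial mediation" means operationally: the IQ-to-learning coefficient shrinks, but does not vanish, once working memory enters the regression. The data and coefficients are synthetic; the actual analysis in [65] used latent-variable models:

```python
# Compare the standardized IQ -> learning coefficient with and without
# working memory (WM) in the model. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 3_000
iq = rng.normal(size=n)
wm = 0.7 * iq + 0.7 * rng.normal(size=n)             # WM overlaps with IQ
learning = 0.4 * iq + 0.3 * wm + 0.8 * rng.normal(size=n)

def std_betas(y, *xs):
    # Standardized regression coefficients via least squares.
    X = np.column_stack([(x - x.mean()) / x.std() for x in xs])
    b, *_ = np.linalg.lstsq(X, (y - y.mean()) / y.std(), rcond=None)
    return b

(total,) = std_betas(learning, iq)
direct, via_wm = std_betas(learning, iq, wm)
print(f"IQ -> learning, total effect:       {total:+.2f}")
print(f"IQ -> learning, controlling for WM: {direct:+.2f}")  # shrinks, > 0
```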
Finally, the "Memory" factor, assessing various forms of delayed recall, while contaminated with individual differences in the tendency to forget information, requires individuals to first encode the information into their memories before they have the chance to forget it.
Work Training:
Much of the best evidence on IQ and learning comes from military studies and from studies of skill acquisition in job training or education, because these are the areas of life where demand for training is great enough to overcome the problem that training is too large a time investment for large amounts of data to be procured otherwise. For example, a k = 17, n = 6,713 meta-analysis [60] found intelligence to correlate at +0.38 with skill acquisition in work training. Similarly, a k = 5, n = 1,700 meta-analysis [61] found intelligence to correlate at +0.35 with degree attainment speed in graduate school.
In one n = 78,049 USAF study [62] of how ASVAB test performance relates to training success across 28 different jobs in 89 different technical schools, correcting for range restriction, g correlated with training success at r = +0.7648 on average (R² = 0.5849), while predictive validity only increased to r = +0.7793 on average (R² = 0.6073) when non-g residuals were included in the regression. The difference in validity between the two for predicting training performance (r = +0.7793 vs r = +0.7648), although statistically significant, was practically minuscule, and may be slightly inflated by the failure to correct for multiple-testing effects: the g + s models had more independent variables than the g models and thus more opportunity to be over-fitted.
One might imagine that different jobs differ in the degree to which training performance can be predicted from non-g residuals, and in which portions of the non-g residuals predict their respective performance. Alas, in the study [62], when regression equations were allowed to vary on a per-job basis, the non-g portions of the ASVAB variance had an average predictive validity of +0.02; the highest non-g validity for any of the jobs was +0.10; the minuscule degrees of non-g validity didn't even reach statistical significance for one-third of the jobs; and the overall relationship between g and training success did not meaningfully differ between jobs.
In a second n = 78,041 USAF study [63] of how ASVAB test performance relates to training performance across 82 different jobs, g correlated with training performance at r = +0.60812 for the average job (see Table 4, Model 2; g allowed to predict job performance differently depending on the job). When non-g residuals were included and different jobs were allowed to be predicted differently, predictive validity only increased to r = +0.62831 (see Table 4, Model 2). Again, the difference between the two (r = +0.60812 vs r = +0.62831), although statistically significant, was practically minuscule, and is subject to the same over-fitting caveat as above.
Finally, with a combined sample of 90,548 across 150 different technical training schools from three of these USAF studies [35], Jensen found that when correcting for range restriction and subtest reliabilities, the ASVAB subtests’ g-loadings correlated at r = +0.95 with the degree to which they predicted training performance. For reference, one author, in their study of a sub-sample of 24,000 of these subjects in training for 37 diverse jobs [64], found that the ASVAB subtests’ g-loadings correlated with the degree to which they predicted training performance at r = +0.75, and found that this correlation increased to r = +0.96 when correcting for subtest reliabilities.
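A sketch of the kind of reliability correction involved, assuming the simple disattenuation formula (dividing each subtest's validity by the square root of its reliability); the numbers are invented, and this may differ in detail from the procedure actually used in [64]:

```python
# Correlated vectors with a reliability correction: unreliable subtests have
# their validities disattenuated before the vector correlation is computed.
import numpy as np

g_loadings    = np.array([0.80, 0.72, 0.65, 0.58])
validities    = np.array([0.45, 0.41, 0.33, 0.30])  # r with training success
reliabilities = np.array([0.90, 0.85, 0.75, 0.70])  # subtest reliabilities

raw_r = np.corrcoef(g_loadings, validities)[0, 1]
corrected_r = np.corrcoef(g_loadings,
                          validities / np.sqrt(reliabilities))[0, 1]
print(f"vector r, raw: {raw_r:+.2f}; reliability-corrected: {corrected_r:+.2f}")
```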
One might reckon that the minuscule predictive validity of the ASVAB's non-g residuals is due to the ASVAB measuring scarcely anything other than g, in spite of the subtests' seemingly incredibly diverse skill and knowledge requirements. Empirically, however, this is not the case; in the first USAF study [62], for instance, the first general factor accounted for only 63.9% of the common factor variance in the dataset. Even accounting for item specificity and measurement error, that should leave enough reliable group-factor variance to achieve meaningful increases in predictive validity, but this just isn't observed.
4. Memory:
Let’s imagine two people equal in educational opportunity, curiosity, reasoning ability, and learning speed. Over time, the more forgetful one will tend to be the one who ends up lower in general knowledge. To what degree then does IQ assess individual differences in memory?
In addition to his two famous IQ tests (the WAIS, or Wechsler Adult Intelligence Scale, & the WISC, or Wechsler Intelligence Scale for Children), David Wechsler also developed a memory test battery, the Wechsler Memory Scales (WMS). In a joint CFA study of the WAIS-IV and WMS-IV scales done with data from the co-norming sample [46], the various delayed recall subtests formed a "Memory" factor (not to be confused with "Working Memory") which loaded on g at +0.77.
Sauce:
Schroeders, U., Schipolowski, S., & Wilhelm, O. (2015). Age-related changes in the mean and covariance structure of fluid and crystallized intelligence in childhood and adolescence. Intelligence, 48, 15-29. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.intell.2014.10.006
Kan, K. J., Wicherts, J. M., Dolan, C. V., & van der Maas, H. L. (2013). On the nature and nurture of intelligence and specific cognitive abilities: The more heritable, the more culture dependent. Psychological science, 24(12), 2420-2428. Retrieved from https://sci-hub.ru/https://doi.org/10.1177/0956797613493292
Kan, K. J. (2012). The nature of nurture: the role of gene-environment interplay in the development of intelligence (p. 134). Universiteit van Amsterdam. Retrieved from https://pure.uva.nl/ws/files/1689258/101363_thesis.pdf
Ritchie, S. J., & Tucker-Drob, E. M. (2018). How much does education improve intelligence? A meta-analysis. Psychological science, 29(8), 1358-1369. Retrieved from https://sci-hub.ru/https://doi.org/10.1177/0956797618774253
Protzko, J. (2015). The environment in raising early intelligence: A meta-analysis of the fadeout effect. Intelligence, 53, 202-210. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.intell.2015.10.006
Strenze, T. (2007). Intelligence and socioeconomic success: A meta-analytic review of longitudinal research. Intelligence, 35(5), 401-426. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.intell.2006.09.004
Te Nijenhuis, J., van der Boor, E., Choi, Y. Y., & Lee, K. (2019). Do schooling gains yield anomalous Jensen effects? A reply to Flynn (2019) including a meta-analysis. Journal of Biosocial Science, 51(6), 917-919. Retrieved from https://sci-hub.ru/https://doi.org/10.1017/S002193201900021X
Sala, G., & Gobet, F. (2019). Cognitive training does not enhance general cognition. Trends in cognitive sciences, 23(1), 9-20. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.tics.2018.10.004
te Nijenhuis, J., van Vianen, A. E., & van der Flier, H. (2007). Score gains on g-loaded tests: No g. Intelligence, 35(3), 283-300. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.intell.2006.07.006
Te Nijenhuis, J., Voskuijl, O. F., & Schijve, N. B. (2001). Practice and coaching on IQ tests: Quite a lot of g. International Journal of Selection and Assessment, 9(4), 302-308. Retrieved from https://sci-hub.ru/https://doi.org/10.1111/1468-2389.00182
Ritchie, S. J., Bates, T. C., & Deary, I. J. (2015). Is education associated with improvements in general cognitive ability, or in specific skills?. Developmental psychology, 51(5), 573. Retrieved from https://sci-hub.ru/https://doi.org/10.1037/a0038981
Lasker, J., & Kirkegaard, E. O. W. (2022). The Generality of Educational Effects on Cognitive Ability: A Replication. Retrieved from https://www.researchgate.net/profile/Jordan-Lasker/publication/360296514_The_Generality_of_Educational_Effects_on_Cognitive_Ability_A_Replication/links/6283f55f2ecfa61d33095946/The-Generality-of-Educational-Effects-on-Cognitive-Ability-A-Replication.pdf
Ritchie, S. J., Bates, T. C., Der, G., Starr, J. M., & Deary, I. J. (2013). Education is associated with higher later life IQ scores, but not with faster cognitive processing speed. Psychology and aging, 28(2), 515. Retrieved from https://sci-hub.ru/https://doi.org/10.1037/a0030820
Der, G., & Deary, I. J. (2017). The relationship between intelligence and reaction time varies with age: Results from three representative narrow-age age cohorts at 30, 50 and 69 years. Intelligence, 64, 89-97. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.intell.2017.08.001
Finn, A. S., Kraft, M. A., West, M. R., Leonard, J. A., Bish, C. E., Martin, R. E., ... & Gabrieli, J. D. (2014). Cognitive skills, student achievement tests, and schools. Psychological science, 25(3), 736-744. Retrieved from https://sci-hub.ru/https://doi.org/10.1177/0956797613516008
Watkins, M. W., Lei, P. W., & Canivez, G. L. (2007). Psychometric intelligence and achievement: A cross-lagged panel analysis. Intelligence, 35(1), 59-68. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.intell.2006.04.005
Watkins, M. W., & Styck, K. M. (2017). A cross-lagged panel analysis of psychometric intelligence and achievement in reading and math. Journal of Intelligence, 5(3), 31. Retrieved from https://sci-hub.ru/https://doi.org/10.3390/jintelligence5030031
Peng, P., Wang, T., Wang, C., & Lin, X. (2019). A meta-analysis on the relation between fluid intelligence and reading/mathematics: Effects of tasks, age, and social economics status. Psychological Bulletin, 145(2), 189. Retrieved from https://ia902508.us.archive.org/21/items/2019-iq-reading-meta/2019-01878-003.pdf
Holdnack, J. A., Zhou, X., Larrabee, G. J., Millis, S. R., & Salthouse, T. A. (2011). Confirmatory factor analysis of the WAIS-IV/WMS-IV. Assessment, 18(2), 178-191. Retrieved from https://sci-hub.ru/https://doi.org/10.1177/1073191110393106
Wolf, P., Gutmann, B., Puma, M., Kisida, B., Rizzo, L., Eissa, N., & Carr, M. (2010). Evaluation of the DC Opportunity Scholarship Program: Final Report. NCEE 2010-4018. National Center for Education Evaluation and Regional Assistance. Retrieved from https://ies.ed.gov/ncee/pubs/20104018/pdf/20104018.pdf
Sanbonmatsu, L., Kling, J. R., Duncan, G. J., & Brooks-Gunn, J. (2006). Neighborhoods and academic achievement results from the Moving to Opportunity experiment. Journal of Human resources, 41(4), 649-691. Retrieved from https://www.nber.org/system/files/working_papers/w11909/w11909.pdf
Kaufman, S. B., Reynolds, M. R., Liu, X., Kaufman, A. S., & McGrew, K. S. (2012). Are cognitive g and academic achievement g one and the same g? An exploration on the Woodcock–Johnson and Kaufman tests. Intelligence, 40(2), 123-138. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.intell.2012.01.009
Ackerman, P. L., & Heggestad, E. D. (1997). Intelligence, personality, and interests: evidence for overlapping traits. Psychological bulletin, 121(2), 219. Retrieved from https://sci-hub.ru/https://doi.org/10.1037/0033-2909.121.2.219
Hembree, R. (1988). Correlates, causes, effects, and treatment of test anxiety. Review of educational research, 58(1), 47-77. Retrieved from https://sci-hub.ru/https://doi.org/10.3102/00346543058001047
Moutafi, J., Furnham, A., & Crump, J. (2006). What facets of openness and conscientiousness predict fluid intelligence score?. Learning and Individual Differences, 16(1), 31-42. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.lindif.2005.06.003
Fleischhauer, M., Enge, S., Brocke, B., Ullrich, J., Strobel, A., & Strobel, A. (2010). Same or different? Clarifying the relationship of need for cognition to personality and intelligence. Personality and Social Psychology Bulletin, 36(1), 82-96. Retrieved from https://sci-hub.ru/https://doi.org/10.1177/0146167209351886
Hill, B. D., Foster, J. D., Elliott, E. M., Shelton, J. T., McCain, J., & Gouvier, W. D. (2013). Need for cognition is related to higher general intelligence, fluid intelligence, and crystallized intelligence, but not working memory. Journal of Research in Personality, 47(1), 22-25. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.jrp.2012.11.001
Furnham, A., & Chamorro-Premuzic, T. (2006). Personality, intelligence and general knowledge. Learning and Individual Differences, 16(1), 79-90. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.lindif.2005.07.002
Chamorro-Premuzic, T., Furnham, A., & Ackerman, P. L. (2006). Ability and personality correlates of general knowledge. Personality and Individual Differences, 41(3), 419-429. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.paid.2005.11.036
Furnham, A., Christopher, A. N., Garwood, J., & Martin, G. N. (2007). Approaches to learning and the acquisition of general knowledge. Personality and Individual Differences, 43(6), 1563-1571. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.paid.2007.04.013
Furnham, A., Swami, V., Arteche, A., & Chamorro‐Premuzic, T. (2008). Cognitive ability, learning approaches and personality correlates of general knowledge. Educational Psychology, 28(4), 427-437. Retrieved from https://sci-hub.ru/https://doi.org/10.1080/01443410701727376
Harlaar, N., Deater‐Deckard, K., Thompson, L. A., DeThorne, L. S., & Petrill, S. A. (2011). Associations between reading achievement and independent reading in early elementary school: A genetically informative cross‐lagged study. Child Development, 82(6), 2123-2137. Retrieved from https://sci-hub.ru/https://doi.org/10.1111/j.1467-8624.2011.01658.x
Stuebing, K. K., Barth, A. E., Molfese, P. J., Weiss, B., & Fletcher, J. M. (2009). IQ is not strongly related to response to reading instruction: A meta-analytic interpretation. Exceptional children, 76(1), 31-51. Retrieved from https://sci-hub.ru/https://doi.org/10.1177/001440290907600102
Stuebing, K. K., Barth, A. E., Trahan, L. H., Reddy, R. R., Miciak, J., & Fletcher, J. M. (2015). Are child cognitive characteristics strong predictors of responses to intervention? A meta-analysis. Review of educational research, 85(3), 395-429. Retrieved from https://sci-hub.ru/https://doi.org/10.3102/0034654314555996
Jensen, A. R. (1998) The g factor: The science of mental ability. Westport, CT: Praeger, Vol. 648. Retrieved from https://emilkirkegaard.dk/en/wp-content/uploads/The-g-factor-the-science-of-mental-ability-Arthur-R.-Jensen.pdf
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies (No. 1). Cambridge University Press. Retrieved from https://b-ok.cc/book/850847/764ee8
Ackerman, P. L. (1986). Individual differences in information processing: An investigation of intellectual abilities and task performance during practice. Intelligence, 10(2), 101-139. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/0160-2896(86)90010-3
Ackerman, P. L. (1987). Individual differences in skill learning: An integration of psychometric and information processing perspectives. Psychological bulletin, 102(1), 3. Retrieved from https://sci-hub.ru/https://doi.org/10.1037/0033-2909.102.1.3
Estes, W. K. (1982). “Learning, memory, and intelligence.” In Handbook of human intelligence, edited by R.J. Sternberg. Cambridge: Cambridge University Press. Retrieved from https://b-ok.cc/book/1105960/d80846
Kyllonen, P.C. (1986). Theory based cognitive assessment (AFHRL-TP-85330). Brooks AFB, TX: Manpower and Personnel Division, Air Force Human Resources Laboratory. Retrieved from https://apps.dtic.mil/sti/pdfs/ADA164083.pdf
Estes, W. K. (1970). Learning theory and mental development. New York: Academic Press. Retrieved from http://125.22.75.155:8080/view/web/viewer.html?file=/bitstream/123456789/4569/3/Learning%20Theory%20and%20Mental%20Development.pdf
Estes, W. K. (1974). Learning theory and intelligence. American Psychologist, 29(10), 740. Retrieved from https://sci-hub.ru/https://doi.org/10.1037/h0037458
Estes, W. K. (1981). Intelligence and learning. in Intelligence and learning (1981) edited by Humphreys, L. G., Friedman, M. P., Das, J. P., & O'Connor, N. New York: Plenum. Retrieved from https://b-ok.cc/book/2297462/bcf683
Jensen, A. R. (1989). The relationship between learning and intelligence. Learning and Individual Differences, 1(1), 37–62. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/1041-6080(89)90009-5
Jensen, A. R. (1978). The nature of intelligence and its relation to learning. Melbourne Studies in Education, 20(1), 107–133. Retrieved from https://sci-hub.ru/https://doi.org/10.1080/17508487809556119
Holdnack, J. A., Zhou, X., Larrabee, G. J., Millis, S. R., & Salthouse, T. A. (2011). Confirmatory factor analysis of the WAIS-IV/WMS-IV. Assessment, 18(2), 178-191. Retrieved from https://sci-hub.ru/https://doi.org/10.1177/1073191110393106
Peng, P., Wang, T., Wang, C., & Lin, X. (2019). A meta-analysis on the relation between fluid intelligence and reading/mathematics: Effects of tasks, age, and social economics status. Psychological Bulletin, 145(2), 189. Retrieved from https://ia902508.us.archive.org/21/items/2019-iq-reading-meta/2019-01878-003.pdf
Engelhardt, L. E., Mann, F. D., Briley, D. A., Church, J. A., Harden, K. P., & Tucker-Drob, E. M. (2016). Strong genetic overlap between executive functions and intelligence. Journal of Experimental Psychology: General, 145(9), 1141. Retrieved from https://sci-hub.ru/https://doi.org/10.1037/xge0000195
Kane, M. J., Hambrick, D. Z., & Conway, A. R. (2005). Working memory capacity and fluid intelligence are strongly related constructs: comment on Ackerman, Beier, and Boyle (2005). Retrieved from https://sci-hub.ru/https://doi.org/10.1037/0033-2909.131.1.66
Oberauer, K., Schulze, R., Wilhelm, O., & Süß, H. M. (2005). Working memory and intelligence--their correlation and their relation: comment on Ackerman, Beier, and Boyle (2005). Retrieved from https://sci-hub.ru/https://doi.org/10.1037/0033-2909.131.1.61
Ackerman, P. L., Beier, M. E., & Boyle, M. O. (2005). Working memory and intelligence: The same or different constructs?. Psychological bulletin, 131(1), 30. Retrieved from https://sci-hub.ru/https://doi.org/10.1037/0033-2909.131.1.30
Colom, R., Abad, F. J., Rebollo, I., & Shih, P. C. (2005). Memory span and general intelligence: A latent-variable approach. Intelligence, 33(6), 623-642. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.intell.2005.05.006
Gignac, G. E. (2014). Fluid intelligence shares closer to 60% of its variance with working memory capacity and is a better indicator of general intelligence. Intelligence, 47, 122-133. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.intell.2014.09.004
Colom, R., Rebollo, I., Palacios, A., Juan-Espinosa, M., & Kyllonen, P. C. (2004). Working memory is (almost) perfectly predicted by g. Intelligence, 32(3), 277-296. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.intell.2003.12.002
Kyllonen, P. C., & Christal, R. E. (1990). Reasoning ability is (little more than) working-memory capacity?!. Intelligence, 14(4), 389-433. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/S0160-2896(05)80012-1
Conway, A. R., Kane, M. J., & Engle, R. W. (2003). Working memory capacity and its relation to general intelligence. Trends in cognitive sciences, 7(12), 547-552. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.tics.2003.10.005
Colom, R., Flores-Mendoza, C., Quiroga, M. Á., & Privado, J. (2005). Working memory and general intelligence: The role of short-term storage. Personality and Individual Differences, 39(5), 1005-1014. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.paid.2005.03.020
Wongupparaj, P., Sumich, A., Wickens, M., Kumari, V., & Morris, R. G. (2018). Individual differences in working memory and general intelligence indexed by P200 and P300: A latent variable model. Biological psychology, 139, 96-105. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.biopsycho.2018.10.009
Colom, R., Flores-Mendoza, C., & Rebollo, I. (2003). Working memory and intelligence. Personality and Individual differences, 34(1), 33-39. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/S0191-8869(02)00023-5
Colquitt, J. A., LePine, J. A., & Noe, R. A. (2000). Toward an integrative theory of training motivation: a meta-analytic path analysis of 20 years of research. Journal of applied psychology, 85(5), 678. Retrieved from https://sci-hub.ru/https://doi.org/10.1037/0021-9010.85.5.678
Kuncel, N. R., Hezlett, S. A., & Ones, D. S. (2004). Academic performance, career potential, creativity, and job performance: Can one construct predict them all?. Journal of personality and social psychology, 86(1), 148. Retrieved from https://sci-hub.ru/https://doi.org/10.1037/0022-3514.86.1.148
Ree, M. J., & Earles, J. A. (1990). Differential validity of a differential aptitude test. AIR FORCE HUMAN RESOURCES LAB BROOKS AFB TX. Retrieved from https://apps.dtic.mil/sti/pdfs/ADA222190.pdf
Ree, M. J., & Earles, J. A. (1991). Predicting training success: Not much more than g. Personnel psychology, 44(2), 321-332. Retrieved from https://sci-hub.ru/https://doi.org/10.1111/j.1744-6570.1991.tb00961.x
Ree, M. J., & Earles, J. A. (1992). Intelligence is the best predictor of job performance. Current directions in psychological science, 1(3), 86-89. Retrieved from https://sci-hub.ru/https://doi.org/10.1111/1467-8721.ep10768746
Wang, T., Ren, X., & Schweizer, K. (2017). Learning and retrieval processes predict fluid intelligence over and above working memory. Intelligence, 61, 29-36. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.intell.2016.12.005
Kaufman, A. S. (2018). Contemporary intellectual assessment: Theories, tests, and issues. Guilford Publications. Retrieved from https://not-equal.org/content/pdf/misc/Kaufman2018.pdf
Pässler, K., Beinicke, A., & Hell, B. (2015). Interests and intelligence: A meta-analysis. Intelligence, 50, 30-51. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.intell.2019.101382
Pässler, K., Beinicke, A., & Hell, B. (2015). Interests and intelligence: A meta-analysis. Intelligence, 50, 30-51. Retrieved from https://sci-hub.ru/https://doi.org/10.1016/j.intell.2015.02.001
Lucas, R. E. (2022). It's time to abandon the cross-lagged panel model. Retrieved from https://psyarxiv.com/pkec7/