Navigation:
‒Sampling Theory
‒Mutualism
—Is IQ Cultural?
—Age De-Differentiation
—Matthew Effects
—SES & Learning
—The Flynn Effect
—Longitudinal & Experimental Evidence
‒Conclusion
‒References
This post is a response to Sasha Gusev’s inept reading [1] of the factor-analytic literature.
To begin, the strength of the g factor isn't that it explains high proportions of the variance in a test battery. Rather, what's typically replicated [2, pp. 79 & 81] is that it explains more variance than all other factors put together. Suppose you made a 10-question vocabulary test, randomly split it into two 5-question vocabulary tests, and found that the two halves correlated at only .3. Would you fault IQ if it explained less than 50% of the variance in either half? No, you'd congratulate it if it completely explained the correlation between the two halves; IQ would be explaining performance on the tests in proportion to how much of the tests was real to begin with. Of the common factor variance in a test battery, the non-g residuals you're left to play with are a pittance.
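To spell out the arithmetic (assuming the two halves are parallel measures of vocabulary with loading λ on a single common vocabulary factor): their observed correlation is

$$ r_{12} = \lambda^2 = .3, $$

so even the vocabulary factor itself accounts for only 30% of the variance in either half; expecting g to explain 50% or more would be asking for more than the halves' own common variance contains.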
Sampling Theory:
>"Multiple theoretical accounts of the positive manifold can produce the same model fit."
Sure, multiple empirically-falsified ones. Let's deal with sampling theory first. Suppose we had 4 subtests (1, 2, 3, & 4) which suffer maximally from this kind of sampling problem: each is an even mixture of 2 out of 4 uncorrelated mental abilities (a, b, c, & d), such that test 1 is c+a, test 2 is a+d, test 3 is c+b, and test 4 is b+d:
If we took any two tests (e.g. tests 2 and 4) and looked at what they have in common, a factor common to the two would be a purer expression of d, while a and b would become specificities that cancel out. That is, a factor solution which maximizes simple structure should minimize such sampling problems.
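A minimal simulation of this setup (equal, unit-variance ability contributions are my assumption) shows both the resulting correlation structure and the sense in which what tests 2 and 4 share is a purer expression of d:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Four uncorrelated, unit-variance abilities.
a, b, c, d = rng.standard_normal((4, n))

# Each test is an even mixture of two abilities, as in the setup above.
tests = np.stack([c + a,   # test 1
                  a + d,   # test 2
                  c + b,   # test 3
                  b + d])  # test 4

# Tests sharing one ability correlate ~.5; tests sharing none (1 & 4, 2 & 3) correlate ~0.
print(np.round(np.corrcoef(tests), 2))

# The common part of tests 2 and 4 is proportional to their sum: (a + d) + (b + d) = a + b + 2d.
shared_24 = tests[1] + tests[3]
print(round(float(np.corrcoef(shared_24, d)[0, 1]), 2))   # ~.82 with d
print(round(float(np.corrcoef(tests[1], d)[0, 1]), 2))    # ~.71 for test 2 alone
```

The sum stands in for the pair's common factor: relative to d, a and b contribute only noise to it, which is why it tracks d more closely than either test does on its own.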
You may be jumping for joy hearing praise for the simple structure concept, but here's the thing. A variable can be treated as a vector, i.e. a line in geometric space, and once the variables are standardized, their correlation is equal to the cosine of the angle between the vectors:
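Concretely, for mean-centered variables $x$ and $y$, the Pearson correlation is exactly that cosine:

$$ r_{xy} = \frac{\sum_i x_i y_i}{\sqrt{\sum_i x_i^2}\sqrt{\sum_i y_i^2}} = \frac{x \cdot y}{\lVert x \rVert\, \lVert y \rVert} = \cos\theta_{xy}. $$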
As such, the correlations among groups of variables can be understood as the angles between sets of intersecting lines in geometric space, with a correlation of zero corresponding to an angle of 90 degrees and a correlation of 1 corresponding to an angle of zero degrees. The consequence of this is that for a positive manifold, a proper simple structure solution shouldn't assume orthogonality. Within a two-dimensional principal component space, the angles between variables (i.e. between blue dots / dotted black lines) will look less like image A and more like image B:
If the dots from image B were forced onto the factors from image A, the lines that were originally closer to green than to purple would remain so, but two problems would be introduced. The first is that the spread of each variable's loadings across factors would be artificially limited: where an oblique solution is most appropriate, the variables that rightly belong on an oblique factor won't correlate as highly with that factor's orthogonal counterpart. The second, ironically, follows from factor rotation being done with the goal of maximizing the spread of each variable's loadings: a general factor will always be partitioned into group factors, no matter how arbitrarily high the correlations between the variables of any two group factors happen to be. In principle, a solution with arbitrarily high factor-factor correlations should be accepted so long as it sufficiently increases the factor-variable correlations.
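A small geometric sketch of that trade-off, with hypothetical variable positions standing in for the two clusters in image B:

```python
import numpy as np

deg = np.pi / 180
# Two clusters of variables as unit vectors in a 2-D component space (angles are illustrative).
var_angles = np.array([20, 30, 60, 70]) * deg
variables = np.column_stack([np.cos(var_angles), np.sin(var_angles)])

def loadings(axis_angles_deg):
    """Each loading is the cosine of the angle between a variable and a factor axis."""
    ax = np.array(axis_angles_deg) * deg
    axes = np.column_stack([np.cos(ax), np.sin(ax)])
    return variables @ axes.T

print(np.round(loadings([0, 90]), 2))   # orthogonal axes: primary loadings top out around .87-.94
print(np.round(loadings([25, 65]), 2))  # oblique axes through the clusters: primary loadings ~1.0
print(round(np.cos(40 * deg), 2))       # the price: the oblique factors correlate at cos(40°) ≈ .77
```

Forcing the axes 90 degrees apart caps the primary loadings; letting the axes follow the clusters recovers them at the cost of correlated factors, which is exactly the trade-off an oblique solution accepts.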
This allows us to test for an interesting property, one which is not merely the consequence of a positive manifold. Suppose two models were fitted, one which attempts to find the best possible oblique simple structure, and another which has the same goal but with an additional restriction that the factor-factor correlations must be perfectly reproduced by a single general factor. If the two models had equivalent fit, this would suggest there's something real about the general factor and that it's not merely the consequence of a positive manifold; this would mean that a hierarchical model would achieve the univariate equivalent of simple structure. As it happens, this has been tested multiple times over [3], and the fit differences are always insignificant when comparing hierarchical models to the oblique models that they're derived from. Especially impressive is that this is true of the Woodcock-Johnson test battery, whose content is based on Carroll's comprehensive taxonomy of cognitive abilities.
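Here's a toy version of the restriction being tested. The factor correlations below are fabricated to satisfy it exactly (built from second-order loadings .9, .8, .75, .7), so the residuals vanish; the empirical claim in [3] is that with real batteries they stay just as negligible:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical oblique factor-factor correlation matrix (constructed, not real data).
Phi = np.array([
    [1.000, 0.720, 0.675, 0.630],
    [0.720, 1.000, 0.600, 0.560],
    [0.675, 0.600, 1.000, 0.525],
    [0.630, 0.560, 0.525, 1.000],
])

def residuals(gamma):
    # A single second-order g implies Phi[i, j] = gamma[i] * gamma[j] for all i != j.
    implied = np.outer(gamma, gamma)
    i, j = np.triu_indices_from(Phi, k=1)
    return Phi[i, j] - implied[i, j]

fit = least_squares(residuals, x0=np.full(4, 0.7))
print(np.round(fit.x, 2))                  # recovered second-order loadings: [0.9, 0.8, 0.75, 0.7]
print(np.round(np.abs(fit.fun).max(), 4))  # worst residual correlation: ~0
```

With four group factors there are six factor correlations but only four second-order loadings, so this is a genuine restriction rather than a re-parameterization.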
As it turns out, when abilities a, b, c, and d are measured with minimal sampling problems, they all share exactly the same common properties rather than being independent. Suppose something like brain size were causal for performance on every test, and something like neuron efficiency were also causal for performance on every test, despite the two being uncorrelated (i.e. despite failing to comprise any "neuro-g"). It would have to be that the two traits vary by the same proportions in how causal they are for different test performances, leaving the unidimensionality of intelligence sensible at the psychological level of analysis even though the different performance-causing traits aren't correlated with each other. What matters is whether different neural traits have different causal effects on different abilities, not whether different neural traits are uncorrelated. This is repeatedly shown, e.g., when looking at genetic correlations between different abilities (both twin-based and molecular genetic correlations), all despite genetic variation very clearly being extremely multidimensional [4, 5, 6, 7, 8, 9, 10, & 11]. This should also be a case against Mutualism! Almost all SNPs impact item responses in accordance with a factor model of traits rather than the patterns predicted by network models. Traits are how genes get to act on the world!
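A sketch of that point in symbols (notation is mine): let brain size $B$ and neuron efficiency $E$ be uncorrelated, and suppose every test $k$ is affected by them in the same ratio,

$$ T_k = w_k(\beta_1 B + \beta_2 E) + e_k \quad\Longrightarrow\quad \operatorname{Cov}(T_j, T_k) = w_j w_k \operatorname{Var}(\beta_1 B + \beta_2 E) \quad (j \neq k). $$

Defining $g = \beta_1 B + \beta_2 E$ then reproduces the whole positive manifold even though g's ingredients are uncorrelated; unidimensionality only breaks down if the ratio $\beta_1 : \beta_2$ differs across tests.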
The insignificant differences in model fit come in spite of it being mathematically impossible for a hierarchical model to fit better than the oblique model it's derived from. An oblique model has total freedom to set the factor-factor correlations however it pleases while a hierarchical model has no such freedom, so the best a hierarchical model can hope for in these comparisons is equality, which is what we observe; the extra freedom turns out to be of no advantage. By contrast, if hierarchical models are abandoned, g theory does have one theoretical advantage. Suppose we had two vocabulary tests again, but this time suppose one were made up of questions about analogies between words. One might regard the analogies test as requiring more higher-order thought, even if the two vocabulary tests are more similar than what's implied by their g-loadings. However, hierarchical models can't exploit this due to an artificial proportionality restriction, where variables can only load on the general factor in proportion to their loading on their group factor. When this proportionality restriction is relaxed in bifactor models, the bifactor models routinely fit meaningfully better than the hierarchical models, even with fit indices which penalize model complexity [see 12 & 13].
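Written out via the standard Schmid-Leiman decomposition of a hierarchical model, the restriction is this: for a variable $i$ loading $\lambda_{ij}$ on group factor $j$, which in turn loads $\gamma_j$ on g,

$$ \lambda^{(g)}_i = \lambda_{ij}\,\gamma_j, \qquad \lambda^{(\mathrm{group})}_i = \lambda_{ij}\sqrt{1 - \gamma_j^2}, \qquad \frac{\lambda^{(g)}_i}{\lambda^{(\mathrm{group})}_i} = \frac{\gamma_j}{\sqrt{1 - \gamma_j^2}}, $$

so every variable in a group carries the same ratio of g loading to group loading. A bifactor model estimates $\lambda^{(g)}_i$ and $\lambda^{(\mathrm{group})}_i$ freely, which is the relaxation being rewarded in [12, 13].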
Spearman argued from the earliest days that his test of vanishing tetrads could trivially be made to fail if the same test were entered into a correlation matrix multiple times over, but in his day the matter was unresolvable due to the subjectivity of judging whether two different tests share the same content. Now, however, simple structure seems to be a convenient way of determining objectively which tests should be grouped together. Instead of multifactor models fitting slightly better due to residual correlations between specificities, there's a strong case that Spearman was right about failures being due to multiple tests loading on the same specificities [see 3].
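In symbols: under a single common factor with loadings $\lambda$ and specific variances $s^2$, every tetrad formed from four distinct tests vanishes,

$$ \rho_{13}\rho_{24} - \rho_{14}\rho_{23} = (\lambda_1\lambda_3)(\lambda_2\lambda_4) - (\lambda_1\lambda_4)(\lambda_2\lambda_3) = 0, $$

but enter the same test twice (call the copy 1′) and the duplicated pair shares specific variance, $\rho_{11'} = \lambda_1^2 + s_1^2$, so any tetrad containing that pair picks up a non-vanishing term:

$$ \rho_{11'}\rho_{34} - \rho_{13}\rho_{1'4} = (\lambda_1^2 + s_1^2)\lambda_3\lambda_4 - (\lambda_1\lambda_3)(\lambda_1\lambda_4) = s_1^2\,\lambda_3\lambda_4 \neq 0. $$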
Mutualism:
The meat of Gusev's post here is his case for mutualism, and his case rests on confusion over the predictions of different theories as well as the state of the evidence.
This paywall will fall on Christmas Day, 2024.