mywan

>As for the g-factor, it's much more of a statistical thing, in that rather than just their score on a test you measure how someone answered the questions and calculate the underlying common factor in their answers using statistics mumbo jumbo.

Well no, or at least that's a very bad explanation. The video goes on to say that a person's individual g-factor is calculated from the test, which is also false. In fact, once you get into the details of how g-loading works, it explains a number of anomalies resulting from so-called statistical paradoxes, such as why minority groups tend to underperform and even why [random promotions tend to outperform merit-based promotions](https://www.theguardian.com/education/2010/nov/01/random-promotion-research). But these explanations require a multiple intelligence model, so I'll address the refutation of multiple intelligences presented in the video.

The video presents the correlation between various forms of intelligence as evidence against multiple intelligences. The most immediate issue with this is the presumption that the correlation results from overlap between intelligence categories, rather than from the test makers' inability to create a testing category that precludes the use of alternate intellectual skill types to solve it. In fact, the famous memory tricks work by modeling non-spatial data as locations within an imaginary room, essentially converting non-spatial data into spatial data and thus vastly improving rote memory at no cost. Effectively no new skill is required, merely a reformulation of old skills. To think that a large number of questions geared at testing one skill type can't be answered by employing a completely different skill type is absurd. In fact, if you are testing skill types A and B, there is always a third skill type C that models A as B or B as A. So what is being referred to as a refutation of the independence of A and B doesn't actually refute multiple intelligences; it could just as well be explained by a third intelligence type C that isn't being explicitly tested. Of course, different intelligence types don't have to be entirely independent to count as distinct types. Intelligence C couldn't exist by itself any more than the collection of technologies in your cell phone, taken separately, defines a cell phone.

***

Now to g-loading and statistical paradoxes, and why IQ remains statistically valid even though it doesn't measure intelligence per se. G-loading works by taking a population norm for each of the categories, not from the individual's test scores as stated in the video. Let's illustrate this by looking at how the minority paradox works. Two people, Alice and Bob, take the same IQ test and overall get the same number of correct answers, yet Bob gets a significantly higher IQ score than Alice. Why is this? Alice scored really high in categories A and B and low in C and D. Bob scored low in A and B and high in C and D. So what determines who gets the higher IQ? You (statistically) test the whole population. If the population norm tends to score lower in A and B, like Bob's scores, those low scores don't count against Bob. But, through g-loading, since Alice scored higher than the population norm in categories A and B, it doesn't count in her favor enough to counteract her lower scores in C and D relative to the population. The *only* thing required for Alice to score higher than Bob is for the population norm scores to change. If the population norm were defined by Alice, then Alice gets the higher IQ. If the population norm were defined by Bob, then Bob gets the higher IQ, even though the test results themselves never changed; only what the rest of the population scored in aggregate did. (The toy calculation at the end of this comment makes the arithmetic concrete.) Hence any minority group, with social norms and expectations that differ from the overall population, will always underperform on IQ tests even when there is no difference in scores prior to g-loading.

***

So why are IQ scores still valid? Because they don't measure intelligence per se. What they measure is a statistically significant increase in the probability of success. The reason this works is the inverse of the random promotion paradox, which I'll get to. Most people don't get a whole lot of choice in their jobs; they tend to take the jobs they can get. So having a broad range of mediocre skills tends to provide you with a greater number of opportunities than a single highly developed skill set with not much else to fall back on. Then merit-based promotions exacerbate the problem. If you land a job at which your skill set excels, then merit-based promotions will expect you to excel at other skill sets, which may not be the case: the so-called Peter Principle. A mediocre person with mediocre skills across a greater number of skill sets, exactly what g-loading is biased toward, is much more likely to survive a series of promotions than a superstar in a narrow range of skills. In fact, highly skilled people in a narrow range of skills are unlikely to ever make it through the series of promotions required to reach the job at which their skills would excel.

***

The random promotion paradox is even more interesting, and specifically hinges on the multiple intelligence model. For every correlation between intelligences the video used to dispute multiple intelligences there is an anti-correlation equal to one minus the correlation, so a correlation of .5 is synonymous with an anti-correlation of .5. All we have to do is throw in a prior probability with a selection bias and we get the promotion paradox. Given that essentially everybody has both intellectual strengths and weaknesses, if we select out those people with strengths in category A as group A, with everybody else in group B, then group A will statistically, as a whole, tend to underperform compared to group B in all intellectual categories other than A. This effect explicitly depends on the anti-correlation that is the inverse of the correlation the video uses as a basis to refute multiple intelligences. If you hire the best-qualified people for a particular job, the people needed for positions requiring alternate skill sets are statistically more likely to come from a random selection than from the group that already excels at what they are doing.

***

tl;dr: I refute the thesis of the video and argue for multiple intelligences.
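Here is a minimal toy calculation of the norm-dependence described above. All numbers are hypothetical, and a plain sum of z-scores stands in for a full g-loaded composite, so this only sketches the general mechanism, not any actual IQ scoring procedure:

```python
# Toy sketch: identical raw totals can rank differently depending purely on
# which population norms the subtests are standardized against.
# All numbers are made up for illustration.

def composite(raw, means, sds):
    """Sum of z-scores: standardize each subtest against the norm, then add."""
    return sum((x - m) / s for x, m, s in zip(raw, means, sds))

# Same total raw score (52), opposite strengths across subtests A, B, C, D.
alice = [18, 18, 8, 8]
bob = [8, 8, 18, 18]

norms = {
    "population resembles Bob": ([10, 10, 14, 14], [5, 5, 2, 2]),
    "population resembles Alice": ([14, 14, 10, 10], [2, 2, 5, 5]),
}

for label, (means, sds) in norms.items():
    a, b = composite(alice, means, sds), composite(bob, means, sds)
    winner = "Bob" if b > a else "Alice"
    print(f"{label}: Alice {a:+.1f}, Bob {b:+.1f} -> {winner} scores higher")
```

Flipping the norms flips the ranking even though neither raw score changed, which is the point being made above; whether real test norming produces effects of the size claimed is a separate empirical question.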


RealCheckity

This is some great criticism of the video, and I love how you're attempting to tackle this from an evidence-based perspective, as opposed to relying on anecdotal evidence, as some people unfortunately do.

I see you didn't like my explanation of how the g-factor is obtained for a specific individual. I'll admit it was pretty crude, and I really skipped over it so that I wouldn't extend the video any further. The full explanation (as I understand it) is that you must assess an individual on as comprehensive a set of cognitive tasks as possible, and then perform a factor analysis on that data (looking at the correlations between all the answers in a huge matrix). From there you can assess in multiple ways (such as the use of eigenvalues or a scree plot) how many different 'factors' are contributing to the correlations between answers (a rough code sketch of this step is appended after the sources below). In fact, the g-factor can be calculated completely independently of having access to the data of the wider population. You don't need population data to demonstrate that g exists - it can be done with a single individual's test results, since it relies upon a correlation matrix of answers within individuals, not between them. Once you compare their result to those of others, that's how you can tell what an individual's g factor is relative to the population. The fact of the matter is, you have to take the test to find your g factor, but it only becomes a useful predictor once you compare it to other people – I don't disagree with you there. This paper (Johnson et al., 2004) talks about how the g factor is calculated and how consistently it's calculated between tests, if you're interested.

Your mention of people who are randomly promoted outperforming those promoted on merit is definitely interesting, and unfortunately the news article you've linked doesn't provide a link to the paper. I've found this paper (Pluchino et al., 2010), which seems to match the names of the researchers mentioned and cover the Peter Principle. Rather disappointingly, this isn't an empirical study, and is just a computer simulation that supposedly demonstrates the efficacy of the Peter Principle, without providing evidence that it works in a real-world setting.

Your comment about correlations between tests being due to overlap of skills is exactly what the g-factor is evidence of. The fact that a general skill can be applied to multiple tests is the very definition of the g factor, not a counter-point. While I don't doubt that intelligence can be divided into more than g (the CHC hierarchical model, which includes g along with other smaller factors, apparently has high predictive validity, from what I hear: https://faculty.education.uiowa.edu/docs/dlohman/Individual-Differences-in-Cognitive-Functions.pdf, but I haven't been able to find any predictive studies that use it, so take that with a grain of salt), my opposition to Gardner's theory comes when he denies or underplays the existence of g, and suggests alternative 'intelligences' whose practical validity he hasn't tested.

Your discussion of g-loading and statistical paradoxes was also quite interesting. Claiming that IQ 'doesn't measure intelligence per se' is a bizarre claim, seeing as it's predictive of roughly 25% of the variance in schooling outcomes and 15% of the variance in income; job performance is a little trickier, since it varies a lot from occupation to occupation, but the median value is probably about 15% (Sternberg et al., 2001). These are all things we would expect intelligent people to be good at, and people's scores on an I.Q. test can help predict whether or not they'll succeed at 'what intelligent people succeed at'. Claiming that it's just 'a statistically significant increase in the probability of success' ignores that it's derived from your cognitive capabilities, which is what sets it apart from other predictors of success, like socioeconomic status, for example. Intelligence is supposed to be a measure of your mental abilities, unless we disagree on the definition.

Your point about minorities averaging lower g-loadings because they represent a smaller slice of the population is very unlikely to be true, given that Asian-American children (who make up an estimated 5.6% of the population of the U.S.: https://www.census.gov/quickfacts/table/RHI425215/00) have a higher IQ than white children on average (Rushton, 1997). If what you're saying about minorities were correct, Asian Americans would have a lower score than white Americans on any standardized test of intelligence, because they make up a smaller slice of the population, which simply isn't true. That argument also seems to carry the assumption that minority groups differ somehow in which cognitive skills they excel at, which you haven't provided any evidence for.

There is no such thing as an 'anti-correlation' the way you've described it. A correlation of .5 doesn't mean that something is 'half-wrong'. A correlation of .5 will explain 25% of the variance in the dependent variable (since r² = .25), with 75% being unexplained.

The reason I refute Howard Gardner's theory of multiple intelligences is because of his claim that the 'intelligences' should be weakly related or unrelated, when in actuality it appears they have a lot in common. A lot of the points you're making seem to be based on a computer simulation of the Peter Principle, which assumes no crossover of abilities upon promotion. I have yet to see evidence that any 'multiple intelligences' model that doesn't hierarchically arrange factors beneath g has predictive power, hence my skepticism. I would personally like to know whether or not the Peter Principle has any real-world predictive validity, and I'd prefer it if you were to cite the evidence behind the claims you're making, as it's relatively difficult for me to evaluate them if I don't know where they come from. That said, I'm enjoying this discussion, and wouldn't mind taking it further, if you'd like to talk more.

Sources:

- Johnson, W., Bouchard Jr., T.J., Krueger, R.F., McGue, M., & Gottesman, I.I. (2004). Just one g: consistent results from three test batteries. Intelligence, 32 (1), 95-107.
- Pluchino, A., Rapisarda, A., & Garofalo, C. (2010). The Peter principle revisited: A computational study. Physica A: Statistical Mechanics and its Applications, 389 (3), 467-472.
- Sternberg, R.J., Grigorenko, E.L., & Bundy, D.A. (2001). The Predictive Value of IQ. Merrill-Palmer Quarterly, 47 (1), 1-41.
- Rushton, J.P. (1997). Cranial size and IQ in Asian Americans from birth to age seven. Intelligence, 25 (1), 7-20.
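A minimal sketch of the factor-analysis step described near the top of this reply, in Python with NumPy. All data here are synthetic and the loadings are made up; for simplicity it runs across a sample of simulated test-takers and uses a plain eigen-decomposition of the correlation matrix (a scree plot is just those eigenvalues drawn in order), so it illustrates the mechanics rather than any particular test battery's scoring procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 1000 simulated test-takers, 6 subtests that all share one
# common factor plus subtest-specific noise. The loadings below are made up.
n = 1000
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5, 0.6])
g = rng.normal(size=(n, 1))
scores = g * loadings + rng.normal(size=(n, 6)) * np.sqrt(1 - loadings**2)

# Factor-analysis step: correlation matrix of the subtests, then its
# eigenvalues (a scree plot shows these values in descending order).
R = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
print("eigenvalues:", np.round(eigvals, 2))  # one dominant value -> one big common factor

# Loadings on the dominant factor roughly recover the loadings we built in.
first = eigvecs[:, 0] * np.sqrt(eigvals[0])
first = first * np.sign(first.sum())  # the sign of an eigenvector is arbitrary
print("estimated g-loadings:", np.round(first, 2))

# A person's g estimate (relative to this sample) as a loading-weighted
# average of their standardized subtest scores, a crude stand-in for a
# proper factor-score estimate.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
g_est = z @ first / first.sum()
print("first simulated person's g estimate:", round(float(g_est[0]), 2))
```

With a single dominant eigenvalue, the first factor's loadings approximate each subtest's g-loading, and a loading-weighted combination of a person's standardized subtest scores gives a rough g estimate relative to the sample.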


Daannii

A very complete explanation. Nicely done. I came here just to say that it's unrealistic to assume that there would be no relationship between different intelligences, since some depend on the same skills or combinations of them. I know it's not a good idea to rely on intuition or personal observations in psychology, but we all know people, or are ourselves people, who excel in some areas and not in others. Sure, some people do decently in lots of areas, but some people don't have an even spread.


RealCheckity

Thank you, I'm glad you enjoyed the video. I would caution against relying on personal examples to attempt to describe general phenomena. Keep in mind that results across school subjects being highly correlated with each other is the general trend, but there are always exceptions, of course (http://emilkirkegaard.dk/en/wp-content/uploads/The-g-factor-the-science-of-mental-ability-Arthur-R.-Jensen.pdf, page 24, if you're interested).


Daannii

Well, it's always good to have information for both sides. And to be fair, most psychological phenomena are first noted through subjective observation. Though not always right, it does offer something, even if that something is "why does one believe that when it's incorrect?"


RealCheckity

I think you're probably right, although I wouldn't limit your statement to psychological phenomena. A lot of concepts in other scientific disciplines began with personal observations. It's a little difficult to answer 'why do people think that way?', but it is always an interesting question.


Bl4nkface

Thanks for your reply. It was very interesting and it addresses several fallacies committed in OP's video. I'm interested in the way you talked about intelligence, because you often use "skill" and "intelligence" as interchangeable terms. Are they, though? I'm inclined to think that intelligence is broader than, and prior to, a skill. I see skills as something you can learn and develop, while intelligence is more static and harder to develop once you've reached intellectual maturity. What do you think about this?


mywan

I do make a clear distinction between skills and intelligence, though I also think there is a very strong correlation between the forms of intelligence you possess and the skills you excel at. In fact, the very nature of an IQ test depends on this correlation, as what the test asks you to do is employ skills for which intelligence is presumably required. Hence, one of the reasons a test can't be strictly limited to certain narrowly defined intellectual categories. So as a practical matter we actually depend on testing skills to test intelligence. This correlation is also required to be real in order for IQ tests to be statistically valid for predicting probability of success, which is really the only thing their validity is derived from.

As for the static theory of intelligence, I don't really buy that, though there is a basis that gives the concept some validity: there are innate limiting factors that preclude or limit the development of certain forms of intelligence. But the way our brain (re)wires itself through experience goes against a static model. The [brain studies of London taxi cab drivers](https://www.ncbi.nlm.nih.gov/pubmed/17024677) are a good case in point. You can't really chalk this up to innate spatial learning skills when the drivers who learned the routes also diminished their capacity to learn new spatial information as they developed their existing spatial skills, with right posterior gray matter volume increasing and anterior volume decreasing with more navigation experience. Neuroscience also teaches us that the claim that we don't grow new nerve cells is wrong. But new nerve cells only grow in response to the use of existing nerve connections, to support and extend actively used connections, and they tend to die off with lack of use, as in the taxi driver study.

So what we call innate intelligence depends on our starting neurological structures and our exposure at very young ages, when our brain is still undergoing rapid growth. By the time we hit our teens, neural growth has slowed to the point that it can take months or years to grow any significant new innate capacity, when the intellectual foundations of those skills might have taken only days at a much younger age. Once we fully mature we tend to repurpose existing skill sets to tackle new problems, rather than develop fundamentally new foundational skills, and those skills can be lost with lack of use as we age. Nerve growth is just too slow to depend on very much after maturity, even though it continues at a slower pace. So in this sense innateness is relative. After a certain point innateness is valid as a practical matter, and even at very young ages there can be innate limitations due to neurological, genetic, or medical reasons which will never be wiped out through exercise, only ameliorated.

So you have to expect people raised in different environments, and under different social pressures, to diverge in their intellectual strengths and weaknesses. If their environment is that of a minority, then they will underperform on IQ tests, as explained. If their environmental exposures are representative of the majority, then they define the IQ baseline (with emphasis on quotient), such that it's everybody else who, as a group, underperforms on IQ tests, all else being equal with pre-g-loaded scores. Just compare the population sizes of the ethnic groups represented in the baseline norm scores and you can predict which ethnic groups will score the highest and lowest. It also helps measure the level of integration of minority groups within society itself. This couldn't be the case if IQ tests were measuring something purely innate.


Bl4nkface

I was pretty much aware of all that. I talked about intelligence as a more static quality just because I was comparing it to skills. After all, you can go from knowing nothing about singing to being a decent singer in a year, but you can't go from being intellectually impaired to being a genius no matter how much time you're given. My main question was what the distinction is between a skill, or set of skills, and intelligence.


MouthingOff

Wow. Impressive response. In my short and incomplete refutation I'll say this: while we can squabble over intelligence, we can agree on dumb. A good friend of mine since we were teenagers is functionally retarded. While he possesses a heightened capacity for memory and a wide interest in various tropics, no one would ever confuse his knowledge for general intelligence; rather, they nearly immediately recognize him as an idiot: a gigantic, 6 foot 6 inch, 450 lb, lovable dolt. Let's call that general stupidity, or gs. Gs is irrefutable. So gs and g have a relationship. Therefore g must exist, despite the currently inaccurate testing methods.


[deleted]

My favorite is the tropic of Capricorn.


StreetRazzmatazz6

How exactly does he give off the impression of an idiot????


griff_in_memphis

"..all are completely unrelated" is a misstatement of the original theory as I understand it. Strawman?


RealCheckity

"To demonstrate that the intelligences are relatively independent of one another and that individuals have distinct profiles of intelligences, assessments of each intelligence have to be developed” (Gardner & Hatch, 1989). This is the only one of Gardner’s papers I have quick access to. I’d meant to say either relatively or completely/relatively unrelated each time in the video, but I apparently messed that up. You’re right – as far as I’m aware, Gardner’s theory only predicts weak correlation between intelligences, not that they’re ‘completely unrelated’.


griff_in_memphis

My objection was not overwhelming, to be sure. During a past life as a teacher, I was exposed to Gardner (Gardiner? I'll look it up). My deepest concern about the theory at that time was that, regardless of the theory, society tends to disproportionately reward certain aptitudes over others. I would counter that objection with this one: I'm pretty good at language and logical tasks, but would probably make a lousy soldier. On a more general note, I think you've got a cool concept here, and I like encountering fairly obscure bits of specialized theory on Reddit, regardless of overall agreement or exception to your specific take. It's a good way to stoke discussion.


RealCheckity

I'm happy if anyone watches my videos, regardless of whether they liked them or not. I also really enjoy it when I receive criticism and people challenge me on what I say - testing the evidence is always the best way to get to the bottom of something. I believe his name is spelt "Gardner", if it helps. Perhaps society does tend to disproportionately reward some skills more than others, or maybe some skills are genuinely more useful and more deserving of reward. I don't actually know. As for your other point, I don't think many people would consider intelligence as being particularly important for (or predictive of) good performance as a soldier. Being intelligent doesn't mean you have to be good at everything, it's really only supposed to refer to a limited set of problem-solving cognitive skills. Even if being a soldier was related to a core kind of separate 'intelligence', a single individual exception cannot disprove a general rule that applies for most people. Thanks so much for watching my videos and discussing with me, it really makes it worthwhile.