IQ Rankings by Profession: What 360-Occupation Data Reveals for Employers

That distinction between cognitive height and cognitive shape sits at the center of the largest occupational intelligence study published to date. In 2023, Tobias Wolfram analyzed cognitive ability across 360 professions using data from the UK's Understanding Society longitudinal survey (N=40,000+), producing the most granular map of IQ by profession ranking ever compiled. The findings challenge how employers use, and misuse, cognitive data in hiring.
Key Takeaways
- 360 occupations now have estimated mean IQ scores ranging from ~87 to 114, a total spread of roughly 1.8 standard deviations (Wolfram, 2023, Intelligence)
- Cognitive thresholds matter more than maximums — top 1% earners score slightly lower on cognitive tests than top 2-5% earners (Keuschnigg et al., 2023)
- Ability pattern predicts career domain better than overall IQ — among the top 1%, math-verbal tilt determines STEM vs. humanities career choice (Park, Lubinski & Benbow, 2007)
- The validity debate is unresolved — Sackett et al. found r=.31 (2022) and r=.22 (2023) vs. Schmidt and Hunter's r=.51 (1998), depending on methodology
- 82% of employers now use pre-employment assessments, but cognitive tests carry documented adverse impact (d=0.65 Black-White gap) that requires legal mitigation strategies
Whether you are hiring for a high-stakes role or weighing whether your own cognitive profile fits one, this data applies to both sides of the table. Understanding where professions cluster cognitively helps employers set realistic screening thresholds without over-selecting, and helps professionals evaluate whether their cognitive profile genuinely fits a target career's demands. For an interactive look at IQ ranges, salary data, and cognitive domain breakdowns for 170+ individual professions, see our IQ by Profession tool.
What 360 Occupations Actually Reveal
Wolfram's 2023 study in Intelligence used small-area estimation to rank 360 UK Standard Occupational Classifications by estimated mean IQ, derived from a fluid intelligence composite measuring verbal and numerical reasoning. The sample drew from over 40,000 participants in the Understanding Society longitudinal survey, with attenuation correction applied to improve group-level reliability.
The headline finding: professional occupations cluster in a surprisingly narrow band. Physicists and astronomers top the list at an estimated mean IQ of 113.95. Veterinarians sit at 113.01. Air traffic controllers land at 112.60. General medical practitioners and specialist physicians fall in the 112-113 range. The difference between the "smartest" and "tenth-smartest" profession amounts to roughly two IQ points, well within measurement error for individuals.
| Occupation | Est. Mean IQ | IQ Range (SD) | BLS Median Salary |
|---|---|---|---|
| Physicists & Astronomers | 113.95 | 106-122 | $166,290 |
| Veterinarians | 113.01 | 105-121 | $125,510 |
| Air Traffic Controllers | 112.60 | 105-120 | $144,580 |
| Medical Practitioners | ~112 | 104-120 | $239,200+ |
| Lawyers | ~110 | 102-118 | $151,160 |
| Financial Managers | ~109 | 101-117 | $161,700 |
| Software Developers | ~108 | 100-116 | $133,080 |
| Civil Engineers | ~107 | 99-115 | $99,590 |
| Registered Nurses | ~100 | 92-108 | $86,070 |
| All Occupations (Mean) | 100 | 85-115 | $67,920 |
Wondering where your cognitive profile lands? Take our assessment to map your ability tilt across verbal, quantitative, and spatial dimensions.
A critical limitation: this is UK data. US occupational classifications, credentialing requirements, and labor market structures differ meaningfully. A UK "solicitor" maps imperfectly onto a US attorney. UK medical training follows a different pathway than US residency programs. The directional patterns (higher cognitive demands in STEM and professional occupations) likely hold across both countries, but the specific numbers should not be treated as direct US benchmarks.
The total spread across all 360 occupations covers roughly 25-28 IQ points, or about 1.7-1.9 standard deviations. That range is narrower than many people assume. Unskilled manual labor and food service occupations cluster in the 87-95 range, not at the far left tail of the distribution. The data paints a picture of occupational sorting that is real but less dramatic than pop-culture narratives suggest. To see how your own score stacks up, compare IQ across 360 professions with our IQ Comparison Tool.

What stands out for employers isn't the ranking itself but the variance within each profession. The standard deviation within most occupations runs 8-12 IQ points. A software development team with a mean IQ of 108 will include individuals scoring 96 and individuals scoring 120. That within-group variation dwarfs the between-group differences that make ranking tables so clickable.
This within-group spread is precisely why composite IQ scores make crude hiring instruments when used as hard cutoffs. The top performer on your engineering team may score lower than the median physicist in Wolfram's data. What separates them isn't a number on a bell curve; it is how their specific cognitive abilities map to the actual demands of their role.
The overlap between adjacent professions is enormous. A nurse at the 75th percentile of her profession's cognitive distribution scores higher than an average software developer. An electrician at the 90th percentile outscores the median lawyer. These overlaps mean that rank-ordering professions by mean IQ, while statistically valid, overstates how cleanly occupations sort by ability. For hiring managers, the practical lesson is that the person sitting across the interview table cannot be reduced to their profession's average.
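This kind of overlap is easy to quantify under a normal model. The sketch below uses illustrative parameters drawn from the table above (nurses ~100, software developers ~108, a within-occupation SD of about 8); the `normal_cdf` helper is ours, not part of any cited study:

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """P(X <= x) for a normal distribution with the given mean and SD."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

# Illustrative values from the table above:
# registered nurses ~100, software developers ~108, within-occupation SD ~8.
nurse_mean, dev_mean, sd = 100.0, 108.0, 8.0

# Fraction of nurses who score above the *average* software developer.
nurses_above_avg_dev = 1 - normal_cdf(dev_mean, nurse_mean, sd)
print(f"Nurses scoring above the mean developer: {nurses_above_avg_dev:.1%}")
# With these assumptions, roughly 16% of nurses outscore the average developer.
```

Even with an 8-point gap in occupational means, a sizeable minority of the "lower-ranked" profession outscores the average member of the "higher-ranked" one, which is exactly why profession-level averages say so little about individuals.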
The Cognitive Floor, Not the Ceiling
The most actionable insight for employers isn't which profession ranks highest. It's that cognitive ability operates as a threshold: a floor below which performance drops sharply, but above which additional IQ points yield diminishing returns.
Gottfredson's foundational 1997 analysis of GATB (General Aptitude Test Battery) data showed that professional jobs cluster above IQ 110-120 on general ability measures. Attorneys land around the 91st percentile (IQ 120-125). Engineers sit at roughly the 88th percentile (IQ ~118). These aren't averages of who succeeds — they are approximate entry floors. Below these thresholds, individuals struggle with the cognitive load the work demands. Above them, factors like domain expertise, personality, and motivation increasingly dominate performance.
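The percentile figures above follow directly from the standard IQ scale (mean 100, SD 15). A minimal conversion sketch:

```python
from math import erf, sqrt

def iq_to_percentile(iq, mean=100.0, sd=15.0):
    """Convert an IQ score to its population percentile under a normal model."""
    return 100 * 0.5 * (1 + erf((iq - mean) / (sd * sqrt(2))))

print(f"IQ 120 -> {iq_to_percentile(120):.0f}th percentile")  # ~91st (attorneys)
print(f"IQ 118 -> {iq_to_percentile(118):.0f}th percentile")  # ~88th (engineers)
```

The output reproduces the GATB-derived figures cited above: IQ 120 lands at roughly the 91st percentile and IQ 118 at roughly the 88th.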
The cognitive plateau finding from Keuschnigg et al. (2023) makes this point sharply. Using Swedish conscription data covering 59,000 men, they found that the IQ-income relationship is positive across most of the distribution, until it isn't. Above approximately the top 15-20% of earners (roughly $60K equivalent), average IQ plateaus at about +1 SD (IQ ~115). The striking detail: top 1% earners actually score slightly lower on cognitive tests than those in the top 2-5%.
That counterintuitive finding suggests that at the highest income levels, social capital, networks, risk tolerance, and opportunity structures matter more than raw cognitive horsepower. For employers, the implication is clear: setting IQ cutoffs at 130 or 140 doesn't identify better hires. It screens out candidates whose non-cognitive strengths, the very traits that drive leadership and innovation, might be their greatest assets.
A parallel finding from medicine reinforces this: the Cambridge Quarterly of Healthcare Ethics (2023) found no evidence that surplus IQ beyond the entry threshold correlates with higher-quality patient care. McManus et al. (2003) tracked UK physicians over 20 years and confirmed that intelligence at entry predicts career grade and postgraduate qualifications — but the relationship is about clearing the floor rather than reaching the ceiling.
Why Cognitive Shape Matters More Than Height

This is where IQ Career Lab's core insight enters the picture. Two candidates with identical composite IQ scores of 120 can have radically different ability tilts — the pattern of relative strengths across cognitive sub-domains. One might excel in spatial and quantitative reasoning while scoring average on verbal tasks. The other might show the inverse. Same overall score. Completely different cognitive architectures.
The Study of Mathematically Precocious Youth (SMPY) produced some of the strongest evidence for why shape matters. Park, Lubinski, and Benbow (2007) found that among individuals in the top 1% of cognitive ability, math-verbal tilt predicted career domain — STEM versus humanities — more reliably than overall ability level. A follow-up by Wai, Lubinski, and Benbow (2009) demonstrated that spatial ability in adolescence predicted STEM entry and innovation 11 years later, above and beyond verbal and quantitative scores.
Robertson et al. (2010) extended this further: even within the top 1%, specific cognitive ability patterns predicted career choice, performance, and persistence. The practical translation is stark. An employer hiring data scientists should weight quantitative and pattern recognition scores more heavily than verbal fluency. A law firm should do the opposite. Using a single composite score treats these fundamentally different cognitive profiles as interchangeable.
Nye et al. (2022) quantified this in a workplace context, finding that narrow abilities — spatial reasoning, processing speed, reaction time — add incremental validity over general mental ability for both task performance and training success. Verbal ability predicts law performance. Numerical ability predicts finance performance. Spatial ability predicts engineering and surgical performance. The composite masks exactly the information employers need most.
The Validity Debate: What Employers Should Actually Believe
For over two decades, the field treated one number as settled: r=.51. That was Schmidt and Hunter's (1998) meta-analytic estimate of the correlation between general cognitive ability (GCA) and job performance, published in Psychological Bulletin. It became the statistical foundation for cognitive testing in hiring across industries worldwide.
Then Sackett, Zhang, Berry, and Lievens challenged it. Their 2022 re-analysis, drawing on 153 samples and 40,740 workers, found the observed correlation was r=.16, with a corrected estimate of r=.31. A follow-up paper by Sackett et al. in 2023 applied more conservative range restriction corrections and arrived at r=.22. The gap between Schmidt and Hunter's .51 and Sackett's .22 is not academic trivia: it fundamentally changes the cost-benefit calculation of cognitive screening.
The honest answer for employers: the validity debate is unresolved. Both estimates have methodological strengths and limitations. What's not debated is that combining cognitive assessment with structured interviews and work samples produces substantially better hiring outcomes than any single measure alone. Schmidt and Hunter's finding that GCA plus a structured interview reaches r=.63 has held up across replications.
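The jump from .51 to .63 follows from standard multiple-correlation algebra. The sketch below assumes a predictor intercorrelation of about .30 between GCA and structured interviews; that intercorrelation is our illustrative assumption, not a figure from the source:

```python
from math import sqrt

def multiple_r(r1, r2, r12):
    """Multiple correlation of two predictors with a criterion,
    given their individual validities (r1, r2) and intercorrelation (r12)."""
    numerator = r1**2 + r2**2 - 2 * r1 * r2 * r12
    return sqrt(numerator / (1 - r12**2))

# Schmidt & Hunter (1998): GCA validity .51, structured interview validity .51.
# The .30 predictor intercorrelation is an assumption for illustration.
print(f"Combined validity: {multiple_r(0.51, 0.51, 0.30):.2f}")  # -> 0.63
```

Because the two predictors are only modestly correlated with each other, each captures variance the other misses, and the composite validity exceeds either alone.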
Who's Using This Data and How

The shift from credential-based to assessment-based hiring is accelerating. 82% of businesses now use pre-employment assessments (SHRM, 2024). 81% of companies report using skills-based hiring, up from 56% in 2022 (TestGorilla, 2024). GPA screening dropped from 73% to 42% since 2019 (NACE, 2026). And 40% of companies have removed degree requirements entirely.
But a Harvard and Burning Glass (2024) analysis adds an important caveat: only 0.14% of actual hires were affected by degree-removal policies. The gap between stated adoption and practice is massive. What fills that gap increasingly is cognitive screening.
Unilever provides the most documented case study. Processing 1.8 million applications per year, they replaced CV screening with Pymetrics cognitive games. The results: time-to-hire cut by 90% (from four months to four weeks), over £1 million in savings, and a 16% improvement in diversity. The diversity gain matters — it suggests cognitive assessment, done right, can reduce bias compared to resume screening.
Google has publicly disclosed that cognitive ability is its highest-weighted hiring criterion, validated through internal data, according to former People Operations chief Laszlo Bock. Goldman Sachs processes 315,126 applications for 2,700 intern spots (<1% acceptance rate), with all applicants completing cognitive aptitude screening. McKinsey receives roughly one million resumes annually and replaced its traditional Problem Solving Test with a gamified "Solve" assessment measuring systems thinking, decision quality, and adaptive reasoning.
The financial stakes justify the investment. The Department of Labor estimates a bad hire costs up to 30% of first-year earnings. For specialized tech roles, that figure reaches 150-200% of annual salary. A single physician departure costs a health system $750,000 to $1.8 million (Premier Inc., 2024), against average physician revenue generation of $3.8 million annually (AMN Healthcare, 2023).
McKinsey's 2024 data puts it bluntly: top performers in high-complexity roles are 800% more productive than average performers. Companies using structured assessments see 24% more employees exceeding goals (Aberdeen) and 36% reduction in turnover (SHRM, 2024). The pre-employment testing market hit $2.5 billion in 2024 with an 8.8% CAGR projected through 2032.
The Adverse Impact Question Employers Cannot Ignore

No honest treatment of cognitive testing in hiring can skip this section. Te Nijenhuis et al. (2024) meta-analyzed over 2 million UK observations and found Black-White GMA test score gaps of d=0.65 — a meaningful difference that translates directly into disparate impact when cognitive cutoffs are applied.
Woods and Patterson (2023), writing in the Journal of Occupational and Organizational Psychology, noted that cognitive ability tests "have the potential to both maintain and exacerbate social inequality" in access to higher professions. Patterson specifically questioned "how cognitive ability tests are actually used in recruitment into professional and graduate occupations."
Legal scrutiny is tightening. The EEOC's 2023 guidance on algorithmic selection applies directly to cognitive screening tools. The Mobley v. Workday class action challenged automated hiring systems for discriminatory impact. State-level bias audit laws in Illinois, Colorado, and New York City now mandate regular adverse impact analysis for automated hiring tools.
Responsible employers have three primary mitigation strategies:
- Banding: Rather than strict rank-ordering by cognitive score, treat scores within a band (typically one standard error of measurement) as equivalent and select among them using other criteria
- Composite scoring: Weight cognitive results alongside structured interviews, work samples, and personality measures so that no single test drives selection disproportionately
- Structured interviews alongside cognitive tests: Schmidt and Hunter's combined validity of r=.63 reduces reliance on any single measure while improving predictive accuracy
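The banding strategy above can be sketched in a few lines. The SD of 15 and test reliability of .90 below are illustrative assumptions, and the candidate names are hypothetical:

```python
from math import sqrt

def band_candidates(scores, sd=15.0, reliability=0.90):
    """Return all candidates within one standard error of measurement (SEM)
    of the top scorer; scores inside the band are treated as equivalent."""
    sem = sd * sqrt(1 - reliability)  # SEM = SD * sqrt(1 - reliability)
    top = max(scores.values())
    return [name for name, s in scores.items() if s >= top - sem]

# Hypothetical candidate pool; with these assumptions SEM is ~4.7 points.
scores = {"Amara": 122, "Ben": 119, "Chloe": 117, "Dev": 109}
print(band_candidates(scores))  # -> ['Amara', 'Ben']
```

Within the band, selection proceeds on other criteria (work samples, interviews), which is precisely how banding reduces over-reliance on small, unreliable score differences.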
The Unilever case study is instructive here. Their shift to cognitive games actually improved diversity by 16% — suggesting the adverse impact problem isn't inherent to measuring cognitive ability, but to how it's measured and weighted. Gamified assessments may reduce stereotype threat, and composite scoring dilutes the impact of any single measure's group differences.
For a deeper dive into legal compliance frameworks, see our comprehensive guide to pre-employment cognitive testing for HR leaders.
From Rankings to Action: What Smart Employers Do Differently

The 360-occupation data gives employers a map, not a GPS. Here is how the most sophisticated organizations translate cognitive research into hiring advantage.
Set thresholds, not ceilings. Use the Wolfram data and Gottfredson's threshold research to establish cognitive floors for role families. Software engineering might require an estimated IQ equivalent of 105-110. Quantitative finance might set the floor at 115. But avoid setting ceilings — the cognitive plateau data confirms that more is not always better.
Measure shape, not just height. Barbara's biotech discovery at the top of this article isn't unusual. Organizations that track sub-domain scores alongside composites consistently find that ability tilt predicts role fit more precisely than overall level. If you're hiring for a role that demands spatial reasoning, weight spatial reasoning. If you need verbal fluency, measure verbal fluency. This is what the SMPY longitudinal data has been telling us for two decades.
Combine instruments. No single assessment, however valid, should drive a hiring decision. The strongest evidence supports cognitive screening + structured interviews + work samples as a composite approach. Jensen (2025) estimates that selecting software engineers with IQs above 115 yields a 10-30% productivity boost — but only when combined with technical assessment, not as a standalone filter.
Audit for adverse impact annually. If your cognitive screening produces disparate impact ratios below the EEOC's four-fifths rule, the test isn't necessarily invalid — but you need documented business necessity and evidence that less discriminatory alternatives were considered. Banding and composite scoring are your first-line mitigations.
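A quick normal-model sketch shows why a d=0.65 gap collides with the four-fifths rule. The IQ-110 cutoff and group parameters below are illustrative, not drawn from any cited dataset:

```python
from math import erf, sqrt

def pass_rate(cutoff, mean, sd=15.0):
    """Fraction of a group scoring at or above the cutoff (normal model)."""
    return 1 - 0.5 * (1 + erf((cutoff - mean) / (sd * sqrt(2))))

# Illustrative: group means separated by d = 0.65 (about 9.75 IQ points),
# with a screening cutoff set at IQ 110.
rate_a = pass_rate(110, mean=100.0)               # reference group
rate_b = pass_rate(110, mean=100.0 - 0.65 * 15)   # group scoring d = 0.65 lower
impact_ratio = rate_b / rate_a

print(f"Selection rates: {rate_a:.1%} vs {rate_b:.1%}")
print(f"Impact ratio: {impact_ratio:.2f} (four-fifths threshold: 0.80)")
```

Under these assumptions the impact ratio lands well below 0.80, illustrating how even a moderate mean difference produces a large disparity once a hard cutoff is applied — and why banding and composite scoring matter as mitigations.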
Dr. Sidi S. Kone captured the nuance well: "Interviews at BCG or McKinsey are not IQ tests. They are risk assessments." The same principle applies broadly. Cognitive data informs risk assessment. It doesn't replace judgment.
The Skills-Based Hiring Caveat
As GPA screening drops (from 73% to 42% since 2019) and degree requirements loosen, cognitive assessment is filling the gap. The broader pre-employment assessment market, which includes personality, skills, and situational judgment tools alongside cognitive tests, reached $10.97 billion in 2024 (projected $23.01 billion by 2032). Employers are buying cognitive data. The question is whether they are using it wisely: setting floors rather than ceilings, measuring shape rather than just height, and managing adverse impact proactively rather than waiting for litigation to force the issue.
Cognitive screening works best when used as Barbara used it: not to find the highest score, but to find the right cognitive shape for the right role.
Whether you are an employer designing a smarter selection process or a professional weighing whether your cognitive strengths align with a target role, the Wolfram data offers a starting point rather than a verdict. The rankings tell you where occupations cluster on average. What they cannot tell you is where you sit within that distribution, or how your particular mix of verbal, quantitative, and spatial strengths maps to specific job demands.
That is exactly the question a well-designed cognitive assessment answers. Understanding your ability tilt across verbal, quantitative, spatial, and processing speed domains tells you more than a single number ever could. Our scoring methodology is built to surface that shape, not just a composite. From there, find careers matching your IQ with our Career-IQ Matcher.
Map Your Cognitive Profile to 360+ Occupations
Discover your ability tilt across verbal, quantitative, spatial, and processing speed domains. See which professional roles match your cognitive shape, not just your score.



