Employer Cognitive Tests Decoded: CCAT, SHL, Wonderlic and What Elite Firms Actually Use

A pre-employment cognitive ability test is a timed assessment that measures how quickly and accurately you process verbal, numerical, and abstract information under pressure. Companies including McKinsey, Goldman Sachs, Amazon, and Deloitte use these tests -- 76% of organizations with over 100 employees now use cognitive ability tests to screen candidates before a human ever reads a resume. The five dominant tests -- CCAT, Wonderlic, SHL Verify, McKinsey Solve, and Pymetrics -- each measure overlapping but distinct cognitive skills, carry different scoring thresholds, and require different preparation strategies.
Key Takeaways
- 76% of employers now use cognitive ability assessments as part of the hiring process, and 75% of Fortune 500 firms rely on psychometric screening
- Preparation works -- retest-only practice yields gains of d=0.26, rising to d=0.64 with structured coaching, meaning coached preparation can shift your score from the 50th to the 74th percentile
- Each test is different -- the CCAT gives you 15 minutes for 50 questions, the Wonderlic just 12 for the same count, and McKinsey Solve tracks your process, not just your answers
- The salary stakes are real -- roles gated by cognitive testing at elite firms pay 3-8x the U.S. median income of $51,370
- Your baseline matters -- take a cognitive assessment before your employer test to identify weak areas and calibrate realistic expectations
The Landscape: Why Every Elite Firm Now Tests Your Brain
The cognitive assessment market exceeded $2 billion in 2024 and is projected to more than double by 2035, according to industry analysts. That growth reflects a fundamental shift in how companies filter talent. Traditional resume screening fails at scale -- McKinsey receives roughly 200,000 applications for approximately 2,000 positions each year. That is a 1% acceptance rate, and no hiring team can evaluate 200,000 resumes by hand.

Cognitive tests solve this problem by creating an objective, scalable filter. The research supporting their use is substantial: Schmidt and Hunter's landmark 1998 meta-analysis found that general mental ability (GMA) predicted job performance with a validity coefficient of r=.51 for medium-complexity jobs -- higher than any other single predictor including interviews, work experience, or education level.
That said, the picture has grown more nuanced. Sackett et al. (2022) revised some of those estimates downward, finding that structured interviews now compete with cognitive tests at r=.42. And Woods and Patterson (2024) raised legitimate concerns about how high-volume graduate screening may disadvantage candidates from lower socioeconomic backgrounds who lack access to preparation resources.
The practical implication for you? These tests are not going away. They are expanding. And your competition is preparing for them whether you are or not.
The Five Tests You Will Actually Encounter
Not all cognitive tests are interchangeable. The test your target employer uses determines your entire preparation strategy. Studying CCAT math tricks will not help you with McKinsey's ecosystem simulation, and Wonderlic verbal strategies are irrelevant to Pymetrics' reaction-time games.
Pre-Employment Cognitive Tests: Head-to-Head
| Test | Format | Time Limit | Elite Threshold | Key Employers |
|---|---|---|---|---|
| CCAT | 50 Qs: verbal, math, spatial | 15 minutes | 31+ (top 20%) | Deloitte, PwC, mid-market firms |
| Wonderlic | 50 Qs: mixed cognitive | 12 minutes | 30+ (~90th %ile) | NFL, various corporate |
| SHL Verify | Numerical/verbal/inductive | Timed per section | 60th-80th %ile | Goldman Sachs, JPMorgan |
| McKinsey Solve | Redrock Study + Sea Wolf | ~70 minutes | ~20% pass rate | McKinsey & Company |
| BCG Online Assessment | Casey Chatbot + Cognitive Test | ~60 minutes | Algorithmic | BCG |
| Pymetrics/Harver | Gamified cognitive battery | ~25 minutes | Algorithmic | Unilever, Amazon |
Data compiled from employer disclosures and candidate reports, 2024-2025
CCAT: The Corporate Workhorse
The Criteria Cognitive Aptitude Test is the most common pre-employment cognitive assessment in corporate America. Fifty questions in fifteen minutes means you have eighteen seconds per question -- and fewer than 1% of test-takers finish all fifty. The average score is 24 out of 50. Elite employers typically want a 31 or higher, placing you in the top 20%.
The CCAT breaks down into three roughly equal sections: verbal reasoning (vocabulary, analogies), math and logic (word problems, number series), and spatial reasoning (pattern matching, rotations). The critical insight most candidates miss is that there is no penalty for wrong answers. Skip anything that takes longer than 20 seconds and guess. Every blank is a guaranteed zero; a guess gives you a 20-25% chance.
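The time budget and the guess-don't-skip rule above reduce to simple arithmetic. A quick sketch using the figures from this section (50 questions, 15 minutes, five answer options, no wrong-answer penalty):

```python
# CCAT time budget: 50 questions in 15 minutes
seconds_per_question = 15 * 60 / 50  # 18.0 seconds each

# Expected points: a blank is a guaranteed 0; a blind guess on a
# five-option question earns 1 point with probability 1/5.
blank_ev = 0.0
guess_ev = 1 / 5  # 0.2 expected points per guessed question

# Guessing on, say, 10 questions you would otherwise skip
# adds about 2 expected points to your raw score.
skipped = 10
expected_gain = skipped * (guess_ev - blank_ev)

print(seconds_per_question)  # 18.0
print(expected_gain)         # 2.0
```

Two extra expected points is meaningful when the gap between the average score of 24 and the elite threshold of 31 is only seven points.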
Wonderlic: Faster and More Verbal Than You Expect
The Wonderlic Personnel Test compresses 50 questions into just 12 minutes -- even tighter than the CCAT. The average score hovers around 20 out of 50, and a 30 puts you at approximately the 90th percentile. Where the Wonderlic catches candidates off guard is its weighting: roughly 40% English-based questions and 40% math. Candidates who are strong verbal thinkers consistently underprepare for the quantitative sections.

The Wonderlic also follows a difficulty curve. Early questions are straightforward, and later questions get progressively harder. A common mistake is spending too long on difficult questions at the end while leaving easier points on the table earlier. Work front to back, and do not let perfectionism on question 38 cost you questions 39 through 50.
The biggest practical difference between Wonderlic and CCAT? Three fewer minutes. That time pressure differential is enormous. If you are preparing for a Wonderlic, every practice session must be rigidly timed. Building speed under authentic pressure conditions matters more than learning content you probably already know.
Beyond its NFL fame -- where teams use it to evaluate quarterback decision-making speed -- the Wonderlic is widely adopted in healthcare, financial services, and staffing agencies. Originally developed in 1936 by E.F. Wonderlic as a quick measure of general cognitive ability, it remains one of the most-administered employment tests in the world, with over 200 million tests given to date across industries ranging from entry-level retail management to C-suite executive hiring.
McKinsey Solve: Where Process Trumps Answers
McKinsey's proprietary assessment is unlike any other test on this list. The "Solve" game presents candidates with two gamified tasks -- currently Redrock Study (a data-interpretation and hypothesis-testing exercise) and Sea Wolf (a strategy and resource-management simulation) -- across roughly 70 minutes. The pass rate hovers around 20%.
Here is what makes Solve genuinely different: McKinsey tracks your decision-making process, not just your final answers. You can arrive at the correct answer through an incorrect methodology and still fail. The system monitors how you gather information, which variables you prioritize, and how you adjust when conditions change. This means brute-force memorization and pattern-matching shortcuts are specifically designed to be caught.
Preparation for Solve requires building genuine analytical habits -- identifying causal relationships, testing hypotheses systematically, and demonstrating structured thinking under ambiguity.
BCG, Pymetrics, and Gamified Assessments

BCG dropped Pymetrics in 2024 and now uses the Casey Chatbot -- an AI-driven conversational case interview -- paired with a BCG Cognitive Test that evaluates numerical, verbal, and abstract reasoning in a more traditional timed format. Together, these assessments replace the gamified approach with a structured evaluation that more closely mirrors actual consulting work.
Unilever, Amazon, and other large employers still use Pymetrics (now part of Harver), a 25-minute battery of neuroscience-based games measuring attention, memory, effort, risk tolerance, and emotional processing. Unlike the CCAT or Wonderlic, there are no "right answers" in the traditional sense. Pymetrics generates a cognitive and behavioral profile and matches it against the trait patterns of successful employees in the role you are applying for.
This creates a paradox for preparation. You cannot study for a personality match. But you can ensure you are well-rested, focused, and performing at your genuine baseline rather than at a stress-degraded level. The research on test-day performance is clear: sleep deprivation alone can suppress cognitive performance by 10-15%.
Pymetrics also raises equity questions. The algorithmic matching system has faced criticism for potential bias, though Harver claims regular bias audits. For candidates, the practical advice is straightforward: play the games in your optimal cognitive state and be authentic. Gaming a personality match is a losing proposition even if you succeed -- you will end up in a role that does not fit.
SHL Verify: The Banking Standard
Goldman Sachs, JPMorgan, and most major banks use SHL's suite of numerical reasoning, verbal reasoning, and inductive reasoning tests. SHL stands apart from CCAT and Wonderlic in format: instead of a single mixed test, you take separate timed sections for each cognitive domain. This means a weak area cannot be masked by strength in another.
Target scores vary by role, but most competitive positions require scoring between the 60th and 80th percentile on each section. SHL also uses verification testing -- you may take an unsupervised online version first, then retake a supervised version at the interview stage. Significant score drops between the two trigger flags. Since each test uses a different scoring scale, our IQ Score Converter can help you translate between Wechsler, Cattell, and standardized test equivalents.
What Preparation Actually Does (and Does Not Do)
This is where intellectual honesty matters. The meta-analytic evidence on practice effects is strong but bounded.
Hausknecht et al. (2007) analyzed 107 samples totaling 134,436 participants and found a practice effect of d=0.26 from retest exposure alone and up to d=0.64 with structured coaching. In practical terms, that coaching effect could move you from the 50th percentile to approximately the 74th percentile. That is a meaningful shift -- potentially the difference between rejection and an interview.
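The percentile shift implied by d=0.64 follows directly from the standard normal CDF, assuming approximately normally distributed scores. A quick check with Python's standard library:

```python
from statistics import NormalDist

# A candidate at the 50th percentile sits at z = 0 on the score
# distribution. Coached practice shifts the score by d = 0.64
# standard deviations, so the new percentile is the standard
# normal CDF evaluated at 0.64.
d_coaching = 0.64
new_percentile = NormalDist().cdf(d_coaching) * 100

print(round(new_percentile, 1))  # 73.9 -- roughly the 74th percentile
```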
Scharfen, Peters, and Holling (2018) confirmed gains of up to 0.5 standard deviations but found they plateau by the third administration. Beyond three practice sessions, returns diminish sharply. This means a focused, structured preparation period of 2-3 full-length practice tests delivers most of the available benefit.
The critical framing: these gains reflect procedural learning, not genuine intelligence improvement. Preparation removes the artificial suppression caused by test anxiety, unfamiliar formats, and poor time management. You are not getting smarter -- you are revealing the cognitive ability you already have. This is good news. It means preparation is not cheating; it is calibration.
What Does NOT Transfer
Melby-Lervåg, Redick, and Hulme (2016) reviewed 87 publications with 145 comparisons and found no convincing far transfer from working memory training to fluid intelligence. Brain training apps like Lumosity and dual n-back games produce narrow, task-specific gains that do not generalize to employment cognitive tests. Do not waste your limited preparation time on them.
The effective preparation stack looks like this:
- Establish your baseline with a tool like IQ Career Lab's free assessment to identify your starting point across verbal, numerical, and spatial reasoning
- Identify your weakest question type (verbal, numerical, or spatial) and drill it specifically
- Take two more timed practice tests to build format familiarity and time management skills
- Stop -- beyond three practice sessions, additional preparation has minimal measurable impact
The Stakes: Why This 15-Minute Test Shapes Your Career Trajectory
The salary differential between roles that require cognitive screening and those that do not is staggering.
A first-year quantitative researcher at Citadel or Jane Street earns $300,000-$400,000+, and summer interns at top quant firms pull in over $5,000 per week. First-year investment banking analysts at Goldman Sachs earn $170,000-$190,000 in total compensation (base salary of roughly $110,000 plus bonus). Even undergraduate consultants at McKinsey, BCG, or Bain start between $113,000 and $122,000.
Compare that to the U.S. median salary of $51,370 (BLS, 2024). The roles gated by cognitive testing pay 3-8x the national median. And the career growth compounds: data scientists are projected to see 34% job growth through 2034, and software developers 15% -- both multiples of the economy-wide 3.1% average.
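The "3-8x" multiple can be checked against the compensation figures quoted above (a quick sketch; salary figures are the ones cited in this section):

```python
median_salary = 51_370  # U.S. median salary (BLS, 2024)

# Total-comp figures for roles gated by cognitive screening,
# taken from the paragraph above.
roles = {
    "IB analyst, total comp (low end)": 170_000,
    "Quant researcher (high end)": 400_000,
}

for role, pay in roles.items():
    print(f"{role}: {pay / median_salary:.1f}x the median")
# IB analyst, total comp (low end): 3.3x the median
# Quant researcher (high end): 7.8x the median
```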
This is not about a single test score defining your worth. It is about a 15-minute window determining whether your application makes it past the algorithmic filter to a human decision-maker. The cognitive thresholds for investment banking and similar elite roles are real, documented, and worth understanding.
The coaching effect on cognitive test scores can move a candidate from the 50th to the 74th percentile -- often the difference between rejection and an interview.
Per-Test Tactical Strategies
CCAT: Speed Over Perfection
- No wrong-answer penalty: Guess on every question you skip. Blanks are guaranteed zeros.
- Time budget: 18 seconds per question. If you hit 25 seconds, skip and return later.
- Section rotation: Verbal questions tend to be fastest; start there if the format allows free navigation.
- Target score: Aim for 28-32 answered correctly out of 35-40 attempted. Do not try to finish.
Wonderlic: Front-Load Your Points
- Difficulty ramps: Early questions are easy points. Never sacrifice them for a hard question at the end.
- Math preparation is critical: The 40% math weighting catches verbal-dominant candidates off guard.
- 14 seconds per question: Even tighter than CCAT. Build speed through timed practice, not content review.
McKinsey Solve: Think Out Loud (Internally)
- Process matters: The system monitors your decision pathway. Systematic exploration beats lucky guessing.
- Redrock Study: Focus on interpreting data carefully and testing hypotheses before jumping to conclusions. The game tracks how you gather and weigh evidence.
- Sea Wolf: Manage resources strategically. Variables interact -- changing one affects others. Demonstrate structured planning over impulsive optimization.
SHL: Section-Specific Preparation
- Numerical reasoning: Practice interpreting charts, tables, and graphs under time pressure. The math itself is rarely hard -- the data interpretation is.
- Verbal reasoning: True/False/Cannot Say format requires precise reading. "Cannot Say" is the answer more often than candidates expect.
- Verification tests: Your supervised score must match your unsupervised score. Never have someone else take the initial test.
Benchmark Yourself Before the Pressure Is Real

The single worst time to discover your cognitive baseline is during a high-stakes employer assessment. A candidate who receives a McKinsey assessment invitation with 48 hours' notice is far better served having taken a practice cognitive test weeks earlier, not scrambling to prepare on the clock.
A pre-assessment accomplishes three things. First, it reveals which cognitive domains are strong and which need targeted work. Someone scoring in the 90th percentile on verbal reasoning but the 45th on numerical reasoning has a clear, actionable preparation plan. Second, it reduces test anxiety by making the format familiar. Hausknecht's research shows that format exposure alone accounts for a significant portion of practice effects. Third, it provides a realistic calibration of where you stand relative to the competitive thresholds for your target roles.
The IQ Career Lab assessment measures the same cognitive domains that employer tests target -- processing speed, working memory, pattern recognition, and verbal reasoning -- in a lower-stakes environment where your results do not go to anyone but you.
The Honest Truth About Cognitive Testing
These tests measure something real. The correlation between general mental ability and job performance is among the most replicated findings in industrial-organizational psychology. But a 15-minute timed test also introduces noise: test anxiety, unfamiliar formats, sleep deprivation, and unequal access to preparation resources all suppress scores below true ability.
Preparation does not give you an unfair advantage. It removes unfair disadvantages. The coaching effect of up to d=0.64 does not reflect artificial inflation -- it reflects the gap between your actual cognitive capacity and what a cold, unprepared test administration captures. Closing that gap is not gaming the system. It is showing up as yourself.
The Cognitive Screening Hiring Funnel
1. Application Submitted
2. Cognitive Assessment
3. Behavioral Interview
4. Case / Technical Round
5. Offer Extended
The candidates who pass these screens are not necessarily the smartest people in the applicant pool. They are the ones who understood the test format, managed their time, controlled their anxiety, and showed up prepared. That is a learnable skill set, and it starts with knowing your own cognitive baseline -- something IQ Career Lab is designed to help you establish before the stakes are real.
Know Your Baseline Before the Stakes Are Real
Take a cognitive assessment that measures the same domains as employer tests -- processing speed, pattern recognition, verbal reasoning, and numerical ability. No employer sees your results.



