Why Your Company’s AI Strategy Is Probably Wrong: The Case for Skeptical Intelligence
Ted Ladd is Professor of Entrepreneurship & Innovation at Hult International Business School and Instructor of Platform Entrepreneurship at Harvard University. Priyanka Shrivastava is Professor of Marketing & Analytics and Associate Dean, DBA, at Hult International Business School.
While AI usage across industries is growing exponentially, most executives are solving the wrong problem. They’re obsessing over which AI tools to deploy when they should be questioning whether they can effectively use the information that those tools produce.
Here’s the uncomfortable truth from our research on 401 professionals across six industries: people with higher “Skeptical Intelligence” scores demonstrate significantly better AI-driven innovation and productivity.
The three-legged stool is wobbling
For decades, the leadership development industry has operated on a simple premise: IQ gets you hired and EQ gets you promoted. Raw cognitive ability (IQ) predicts success in well-defined problem spaces—the kind with clear feedback loops and discrete solutions. Emotional Intelligence (EQ) corrects for IQ’s interpersonal deficits, enabling leaders to build coalitions, read rooms, and regulate impulses.
Both remain necessary. Neither is sufficient in the age of artificial intelligence.
IQ optimizes for problems with knowable solutions. But most strategic decisions involve irreducible uncertainty, competing objectives, and information asymmetries. EQ mitigates the friction of human collaboration but has drifted toward a managerial culture that prizes psychological comfort over intellectual challenge. The result? Highly intelligent, emotionally attuned executives naively nodding along to AI-generated analyses.
The capability gap is structural: neither IQ nor EQ equips leaders to interrogate increasingly opaque algorithmic decisions. And sycophantic AI engines amplify this problem.
Why AI makes this urgent today
AI systems now operate at an industrial scale across pricing, forecasting, hiring, credit allocation, and content recommendation. They’re also fallible in predictable ways:
- Biased training data that encodes historical discrimination
- Spurious correlations mistaken for causal relationships
- Brittle generalizations that default to the average, regardless of the specific problem to be solved
- Confident assertions delivered in concrete language that trigger premature trust
None of these make AI useless. But as Kahneman observed, speed and haste are not synonyms. Leaders must introduce structured friction at critical junctures—moving quickly while creating deliberate checkpoints for interrogation.
Consider the innovation paradox: AI generates responses by averaging patterns in historical data. Everyone querying ChatGPT about “innovation in financial services” receives statistically similar outputs. The system is designed to report consensus, not discover novelty. True differentiation requires the human using AI to question and verify its output, not to add more bias.
Enter Skeptical Intelligence.
Defining Skeptical Intelligence: Not just critical thinking rebranded
Since its roots in the Socratic questioning that Plato dramatized around 399 BCE, and since Dewey popularized the term in the 1930s, “critical thinking” has expanded to encompass intellectual analysis, self-reflection, emotional regulation, and dispositions. It’s simultaneously a process, a behavior cluster, a mindset, and an instinct. The term has become too elastic to operationalize.
Skeptical Intelligence is different: it’s a measurable, teachable framework with four discrete components:
- Curiosity with an edge
Asking “what would change my mind?” rather than “how do I prove I’m right?”
- Epistemic humility
Accurate calibration of what you (and your models) know versus don’t know
- Evidence discipline
Tracing claims to their data, methodology, and assumptions
- Counterfactual imagination
Generating plausible alternatives and testing the original claim against them
Our research identified two empirically distinct factors through survey data: the ability to question new information and the ability to verify it within context. We’ve developed an initial measurement scale, the Skeptical Quotient (SQ), to assess these capabilities.
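By way of illustration only (the SQ instrument itself is not reproduced here), a two-factor scale of this kind is typically scored by averaging Likert-style items into subscales, one per factor. In the sketch below, the item labels and the 1-to-5 response range are assumptions, not the actual SQ items:

```python
# Hypothetical illustration: scoring a two-factor skepticism scale.
# Item names and the 1-5 Likert range are assumptions, not the real SQ instrument.
from statistics import mean

QUESTIONING_ITEMS = ["q1", "q2", "q3", "q4"]   # e.g., "I ask what would change my mind"
VERIFICATION_ITEMS = ["v1", "v2", "v3", "v4"]  # e.g., "I trace claims back to their data"

def score_sq(responses: dict[str, int]) -> dict[str, float]:
    """Average 1-5 Likert responses into the two empirically distinct subscales."""
    return {
        "questioning": mean(responses[i] for i in QUESTIONING_ITEMS),
        "verification": mean(responses[i] for i in VERIFICATION_ITEMS),
    }

print(score_sq({"q1": 5, "q2": 4, "q3": 4, "q4": 5,
                "v1": 3, "v2": 4, "v3": 3, "v4": 4}))
# -> {'questioning': 4.5, 'verification': 3.5}
```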
Preliminary findings reveal gender differences: men score higher on questioning, women on verification. This suggests team composition matters—optimal SI may require deliberate cognitive diversity rather than individual omnicompetence.
More significantly, regression analysis showed SI as a stronger predictor of innovativeness than EQ, with EQ’s effect becoming statistically insignificant. This doesn’t invalidate emotional intelligence—it remains crucial for team cohesion—but SI emerges as more critical for individual performance in AI-augmented work.
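For readers who want the statistical intuition behind that comparison, here is a minimal sketch, using simulated data rather than our survey, of the kind of joint regression involved: when innovativeness is regressed on SI and EQ together, a correlated but weaker EQ effect can drop to insignificance. All coefficients below are invented for illustration:

```python
# Minimal sketch with simulated data: how SI can dominate EQ in a joint regression.
# The data-generating process below is invented for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 401  # mirrors the study's sample size, purely for flavor
si = rng.normal(size=n)
eq = 0.5 * si + rng.normal(size=n)                  # EQ correlates with SI
innov = 0.6 * si + 0.05 * eq + rng.normal(size=n)   # innovativeness driven mainly by SI

X = sm.add_constant(np.column_stack([si, eq]))
model = sm.OLS(innov, X).fit()
print(model.summary(xname=["const", "SI", "EQ"]))
# With SI in the model, EQ's coefficient is small and typically non-significant.
```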
The psychological barrier to skepticism
Deploying SI requires something uncomfortable: acknowledging you might be wrong. In complex scenarios with nuanced information, rigorous questioning often leads to rejecting your initial ideas. If your sense of self-worth is fragile, this conclusion is painful enough to avoid asking the question entirely.
This aligns with research linking psychological safety and intellectual humility to improved reasoning. Organizations that penalize uncertainty or reward confident assertions create environments where SI atrophies.
A practical playbook
Critical-thinking scholarship has mapped this terrain extensively. We don’t need a new theory—we need a practical recipe that engages one’s intellect and rationality in the face of seemingly omniscient AI.
- Clarify the actual problem
AI engines optimize for engagement because user retention is their most accessible metric. This isn’t necessarily malicious, but it creates a self-reinforcing bias toward user affirmation rather than truth-seeking. In other words, your AI engine is not trying to solve your problem; it is trying to get you to ask another question. Force yourself to articulate, clearly and specifically, the question you are actually trying to answer.
- Surface and test assumptions
What must be true about the data, context, and scope for this AI response to hold? Where is the model consistently limited or biased? AI can help identify these assumptions and propose tests to explore them, but here, too, you must not rely on the AI’s self-assessment alone.
- Spot alternative causes
Are there adjacent theories that would reach different conclusions from identical data? AI excels at identifying these competing frameworks when prompted correctly. For example, instead of asking “When, why, and how would this idea succeed?” ask “When, why, and how would this idea fail?”
- Generate counter-arguments
Can you, even with AI assistance, disprove the initial response? This is where your favorite AI engine’s memory is not helpful. Consider switching to another AI tool (see the sketch after this list).
- Make your own decision
Don’t outsource intellectual rigor. Treat AI as an assistant with potentially useful observations that feed into your human filter—not as an oracle.
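To make the playbook concrete, here is a minimal sketch of how the five steps could be wired into a repeatable interrogation routine. The `ask_model` function is a hypothetical stand-in for whatever AI interface you use, and the prompts are illustrative, not prescriptive:

```python
# Hypothetical sketch: the five playbook steps as a structured interrogation loop.
# `ask_model` is a placeholder for your actual AI client; swap in a second
# model for the counter-argument step so shared memory can't flatter the first answer.

def ask_model(prompt: str, model: str = "primary") -> str:
    raise NotImplementedError("Wire this to your AI provider of choice.")

def interrogate(question: str, draft_answer: str) -> dict[str, str]:
    """Run an AI's draft answer through the skeptical-intelligence playbook."""
    return {
        # Step 1: you write the problem statement, not the AI.
        "problem": f"Restated question: {question}",
        # Step 2: surface the assumptions the answer quietly depends on.
        "assumptions": ask_model(
            f"What must be true about the data, context, and scope "
            f"for this answer to hold?\n{draft_answer}"
        ),
        # Step 3: invert the framing to force rival explanations.
        "alternatives": ask_model(
            f"When, why, and how would this fail? Name competing theories "
            f"that fit the same data.\n{draft_answer}"
        ),
        # Step 4: a different engine, with no shared memory, tries to disprove it.
        "counter": ask_model(
            f"Disprove the following as rigorously as you can.\n{draft_answer}",
            model="secondary",
        ),
        # Step 5 stays human: read the dossier above and decide yourself.
    }
```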
None of these steps require coding ability or formal logic training. They require conceptual clarity, structured reasoning, and a willingness to challenge both the AI and yourself.
Institutionalizing SI: Beyond heroic individuals
Skeptical Intelligence scales when it’s structural, not just dependent on charismatic skeptics. Four mechanisms help:
Train beyond compliance. Replace generic “AI ethics” lectures with scenario workshops that pressure-test real models under shifting conditions. Make teams defend both the recommendation and the methodology. Force identification of potential bias sources—human and algorithmic.
Hire for humility. Reward candidates who say “I don’t know yet” and then demonstrate how they’d find out. Curiosity without swagger predicts actual innovativeness better than fluency with AI tools.
Reward constructive dissent. Embed “thoughtful disagreement” in product and performance reviews. Assign rotating Red Teams to major decisions whose explicit, celebrated role is to respectfully attempt to disprove recommendations.
Celebrate openly. Small achievements—asking penetrating questions, admitting uncertainty—deserve public recognition. This is where EQ can foster self-worth, which in turn is a foundation for healthy skepticism.
Many executives claim, “We already ask hard questions.” Often, that means asking familiar questions. SI formalizes the unfamiliar ones: questions that probe the scaffolding of a claim rather than its fit with existing plans.
Two futures
History will divide AI-powered business professionals into two categories:
Group one waved models and decisions through because outputs “looked right.” They achieved spectacular wins (celebrated) and avoidable disasters (quietly buried). They optimized for speed over accuracy.
Group two built institutional muscle around interrogation—fast, repeatable, teachable, disciplined questioning and validation. They captured more upside with fewer blindside hits. More importantly, they prepared for the next generation of AI tools.
Which group you belong to depends less on your AI budget than on your organization’s capacity for disciplined doubt.
The bottom line
IQ matters. EQ matters. But the scarcest executive asset in a world of plausible-sounding AI outputs is Skeptical Intelligence—the disciplined, curiosity-driven, humility-infused capacity to demand better questions and resist easy answers. SI doesn’t make you slower. It makes you more confident about when to accelerate.
