There is a question I keep hearing in different forms from senior leaders right now. It doesn't always get asked directly. Sometimes it surfaces in how a leadership team talks about a decision they've been circling for months. Sometimes it shows up in the gap between what a measurement system reports and what anyone in the room actually does with that information.
If we have more data than ever, why does it feel harder to decide?
It's a fair question. And the answer has less to do with the data itself and more to do with what organizations have come to expect from it.
The prevailing belief in business has been simple: more data, better decisions. Invest in data and analytics. Build dashboards. Automate reporting. The assumption was that if leaders had access to more complete, more granular, more timely information, the quality of their decisions would improve and the speed would increase.
AI has accelerated that narrative dramatically. Generative AI tools can produce scenarios, summaries, and projections at a pace that would have been unimaginable five years ago. A recent HBR study documented this shift in concrete terms: after adopting generative AI, employees at one technology company worked at a faster pace, took on a broader scope of tasks, and extended their working hours, often without being asked to. The tools didn't simplify the work. They expanded it.
And that expansion has introduced a problem that very few organizations have named clearly. The volume of analytical output has increased, but the connection between that output and actual decisions has not kept pace. Incrementality models get built, debated, and rebuilt. Attribution frameworks generate reports that raise new questions rather than resolve existing ones. Dashboards refresh constantly, but the conversations they're supposed to inform keep circling the same territory.
What's happened is a subtle but consequential shift in the relationship between data and decision-making. Data was supposed to be an input to judgment. Increasingly, it's become a prerequisite for action. And those are very different things.
There's a pattern I've observed across industries and organization sizes, and it tends to follow a predictable arc.
A senior leader faces a decision with real visibility and real consequences. A brand repositioning. A market entry. A structural change. The kind of call that, if it goes wrong, will be traced back to them specifically. So they do what any thoughtful leader would do. They ask their teams to gather the relevant information.
The difference now is that the information never stops arriving. Every question can be modeled. Every assumption can be stress-tested. Every angle can be explored with a new data set or a new AI-generated scenario. The marginal cost of one more analysis has dropped to nearly zero.
But the marginal cost of one more delay has not. It's gone up.
And yet many organizations have developed a culture where acting on incomplete information feels irresponsible. Where the presence of sophisticated tools has quietly raised the bar for what counts as "ready." Where a leader who says "I've seen enough, let's move" risks being perceived as reckless rather than decisive.
I've started calling this the certainty trap. Not a shortage of information, but a set of organizational expectations that treat uncertainty as a problem to be solved rather than a permanent condition to be navigated.
This is where the conversation needs to shift.
Certainty is the belief that you know what will happen. It's a prediction. And in any environment with real complexity, real competition, and real human behavior, it is almost always an illusion. No amount of data will eliminate the fundamental uncertainty of strategic decision-making. The variables are too numerous, the interactions too dynamic, and the future too contingent on factors that no model can fully anticipate.
Confidence is something else entirely. Confidence is the assessment that you have seen enough signal, understood the key trade-offs, and developed a clear enough read on the situation to act. It doesn't require knowing what will happen. It requires trusting your judgment about what to do next.
Herbert Simon described this decades ago with the concept of satisficing: the recognition that in complex environments, the pursuit of an optimal decision is often not only impractical but counterproductive. The more effective approach is to identify the threshold of adequacy, the point at which you have enough clarity to act, and to move when you reach it.
Colin Powell arrived at a similar principle from a very different context. His guidance was to act when you have between 40 and 70 percent of the information you'd ideally want. Below 40 percent, you're guessing. Above 70 percent, the opportunity has likely already passed. He called it avoiding analysis paralysis.
Most organizations I work with are operating well above 70 percent. Not because the information isn't sufficient, but because the culture, the tools, and the expectations around data have made it feel unsafe to stop gathering.
Here is where I want to push the conversation somewhere that doesn't get enough attention.
When organizations talk about needing "more data" before a decision, they almost always mean more quantitative data. More survey responses. A larger sample. Another round of A/B testing. A different attribution model. The implicit assumption is that confidence comes from scale and statistical precision.
But in my experience, the opposite is often true. The leaders I've seen make the strongest decisions under genuine uncertainty are the ones who have invested in qualitative depth. Not instead of quantitative rigor, but before it, and often as the thing that makes quantitative work actually useful.
Here's why. Quantitative data is very good at telling you what is happening. How many, how often, how much. It measures patterns at scale. But it is structurally limited in its ability to tell you why something is happening, what it means in context, or what someone would actually do in a real situation with real trade-offs.
Those are the questions that matter most in high-stakes strategic decisions. And they are the questions that qualitative research, done well, is uniquely designed to answer.
When a leadership team is trying to understand whether a brand repositioning will resonate, a market entry will find traction, or a product change will alienate a core audience, the most valuable signal is rarely in the dashboard or in a survey. It's in the in-depth interview where a customer articulates a tension the team hadn't considered. It's in the focus group where a pattern of language reveals how people actually think about a category. It's in the ethnographic observation that surfaces a behavior no survey question would have captured.
This is what I mean by directional clarity. Not certainty. Not statistical significance in every dimension. But a clear, grounded read on the human reality behind the numbers, strong enough to inform a confident decision and specific enough to guide what you do next.
The organizations that move well in uncertain environments have figured this out. They lead with qualitative understanding to frame the right questions, then validate and size at scale. They don't wait for quantitative completeness before they act. They use qualitative depth to develop the conviction that allows them to act before completeness arrives.
That sequencing matters more than most people realize.
When organizations default to quantitative completeness as the standard for decision-readiness, several things tend to happen.
Decisions slow down. Not because the information isn't there, but because the information keeps generating more questions. Each round of data opens new threads. Each new model introduces new variables. The analysis becomes self-perpetuating, and the decision keeps getting pushed to the next review cycle.
The connection between research and action breaks down. Teams produce enormous volumes of insight that sit in decks and dashboards without ever changing a strategic direction.
A 2025 study of 500 senior decision-makers found that 77 percent only sometimes or rarely question the dashboard data they rely on daily, even though 67 percent worry that over-reliance on those same dashboards is causing them to miss critical opportunities.
And that hesitation at the top doesn't stay at the top. Team confidence erodes. When senior leaders hesitate, the signal travels fast. Teams start hedging. Middle management waits for confirmation before executing. The organization develops a posture of caution that looks like thoughtfulness from the inside but reads as indecision from the outside.
And perhaps most importantly, the qualitative signal that would actually accelerate the decision gets deprioritized. Because in a culture that equates rigor with quantitative scale, in-depth interviews and immersive research can feel like a luxury or a preliminary step rather than what they often are: the fastest path to the clarity that leadership actually needs.
The shift I'm describing is not about abandoning data or ignoring quantitative evidence. It's about reorganizing the relationship between data, judgment, and action so that the system actually produces decisions rather than just output.
In practice, this means starting every research effort with a clear articulation of the decision it needs to inform. Not "what do we want to learn?" but "what do we need to know in order to act?" That single reframe changes what gets studied, how it gets studied, and how quickly the work becomes useful.
It means sequencing qualitative depth first, before scaling. Using interviews, immersive research, and cultural signal detection to develop a strong directional read, then designing quantitative work to validate and size what the qualitative has already surfaced. This approach doesn't slow down the process. It prevents the far more costly cycle of running large-scale studies that answer the wrong questions or generate findings no one knows how to act on.
It means giving senior leaders explicit permission to act on directional clarity.
When a CMO or VP of Strategy says "the signal is clear enough, we're moving," that needs to be recognized as leadership, not impatience. It tells the organization that judgment is valued, that analysis serves a purpose and has an endpoint, and that the pursuit of certainty is not the same as the pursuit of quality.
And it means evaluating AI, analytics, and measurement systems by a single standard: do they help leaders get to confident decisions faster? Not more data, not more dashboards, not more scenarios. Faster, clearer, more confident decisions. Any tool or process that doesn't meet that standard is adding noise, not clarity, regardless of how sophisticated it looks.
Every organization now has access to roughly the same data infrastructure, the same AI capabilities, and the same analytical tools. The playing field on information has largely leveled.
What hasn't leveled is judgment. The ability to read a situation clearly, distinguish signal from noise, and act with conviction when the picture is incomplete. That's still rare. And it's still the thing that separates the organizations that move from the ones that circle.
The leaders who will build that advantage are the ones who recognize that clarity does not require certainty. Who invest in qualitative depth not as a nice-to-have but as the foundation for confident action. Who design their research, their teams, and their cultures around the principle that insight only matters if it moves something.
That's what experienced judgment looks like in practice. And it's a skill that no dashboard, no AI tool, and no incrementality model is going to develop for you.
---
*This article was originally featured in the Data Done Differently newsletter on LinkedIn.*