In my last post, "Confidence Is Not Certainty: The Case for Directional Clarity," I explored why organizations get stuck in the certainty trap: gathering more data instead of developing the judgment to act on what they already have. This post picks up that thread with a practical question: if judgment is the differentiator, what does it actually look like to build AI into your expertise rather than outsource your thinking to it?
I spent a day at Marketing Brew's The Art & Science of AI in Marketing summit last week. Seven sessions. Speakers from agencies, platforms, in-house teams, and consultancies. The audience was mostly senior marketers trying to figure out where AI fits. Not in theory, but in their actual workflows, their actual decisions, their actual orgs.
Here's what I walked away thinking. The gap isn't between companies that have adopted AI and companies that haven't. Nearly everyone has adopted something. The gap is between the organizations using AI to do the same things faster and the ones using it to understand their customers differently. Efficiency versus intelligence. And right now, the vast majority are stuck on efficiency.
The gap isn't between companies that have adopted AI and companies that haven't. It's between the ones using AI for efficiency and the ones using it for intelligence.
That distinction matters because it connects directly to the argument I made in my last post. More tools, more dashboards, more AI-generated output. None of it resolves the fundamental challenge of strategic decision-making. The organizations that move well aren't the ones with the most sophisticated tech stacks. They're the ones with the clearest read on what their data actually means and the conviction to act on it.
The summit reinforced that at every turn.
Howard Pyle's session landed the line that stuck with me the longest: stop calling it vibe coding. Call it personal tooling.
His argument was that the intersection of your experience, your role, and AI is the thing that can't be replicated. Not the tools themselves. Those are commoditized. It's the tacit knowledge you bring to them. Knowing why a process exists. Reading between the lines. Anticipating consequences that don't show up in a model. He cited Stanford research showing 6 to 12 percent productivity growth for experienced workers in AI-exposed fields. Experience is an asset in this environment, not a liability.
That framing resonated because it describes exactly what I've been doing over the past several months. Not just using AI tools, but building with them. And that distinction is more important than most people realize.
I've been experimenting extensively across Claude, ChatGPT, and Gemini. Not casually. Deliberately, across the full spectrum of what my work actually requires: research synthesis, strategic analysis, document creation, data interpretation, collaboration, and increasingly, building custom tools and workflows.
Here's what I've found, and I want to be specific because the "which AI should I use" conversation is usually too abstract to be useful.
Each platform has scenarios where it excels, and there are tasks where ChatGPT or Gemini will be the better fit. But across the full range of activities that define my daily work (research and insights, strategic collaboration, project management, analytical rigor, and operational efficiency), Claude has become my primary platform, and it isn't particularly close. That assessment doesn't even account for products like Claude Code and Cowork, which are operating in a different category entirely from what the other platforms offer for building personal infrastructure.
From an analytics standpoint specifically, Claude's ability to handle spreadsheets and structured data is dramatically better than any other platform I've worked with. When I'm doing the kind of audience intelligence work that requires moving between qualitative signal and quantitative validation (exactly the sequencing I wrote about in my last post), the difference in output quality is significant.
Perplexity deserves a mention here because it remains best in class for publicly available research and real-time information retrieval. If I need to rapidly survey what's out there on a topic, it's still my first stop. But when the work requires handling the sheer volume and complexity of a standard analytical workflow (multiple data sources, long documents, layered synthesis), it's not built for that.
I'll be direct about the limitations too. Data limits and document capacity are real constraints with Claude, and they're probably my single biggest frustration with the platform overall. But its ability to actually read, interpret, and analyze what you give it is meaningfully stronger than what I've experienced with ChatGPT for the same tasks.
The AI you choose matters less than what you know how to do with it. And what you know how to do with it is a function of the tacit knowledge you've accumulated.
The reason I'm sharing this level of detail isn't to write a product review. It's because the tool selection question is inseparable from the larger argument about judgment and expertise. The AI you choose matters less than what you know how to do with it. And what you know how to do with it is a function of the tacit knowledge and professional experience you've accumulated. That's the stuff Howard Pyle was talking about.
My biggest focus over the next month is pushing further into building. More coding, more custom tooling, more infrastructure that encodes my own research methodology. I wouldn't call myself an engineer. But AI has made the learning curve dramatically more accessible for people like me who sit further along the analysis spectrum. That shift from using AI to building with AI is where I think the real professional leverage is going to come from.
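To make "personal tooling" concrete, here's a minimal sketch of what encoding a methodology can look like: a reusable prompt scaffold in Python. Everything here is hypothetical (the `BRIEF` template, the `build_brief` name, the three steps), just placeholders for whatever sequence your own experience dictates. The point is that the methodology lives in the tool, not in your memory, so every session starts from the same standard instead of a blank page.

```python
from string import Template

# Hypothetical prompt scaffold encoding one practitioner's research
# sequence: qualitative signal first, quantitative validation second,
# then a single directional call. Swap in your own steps.
BRIEF = Template("""\
Role: senior audience strategist for $brand.
1. Summarize the qualitative signal in the notes below.
2. List quantitative checks that would validate or kill each theme.
3. End with one directional recommendation and its biggest risk.

Notes:
$notes
""")

def build_brief(brand: str, notes: list[str]) -> str:
    """Render the reusable prompt with today's brand and raw notes."""
    return BRIEF.substitute(brand=brand,
                            notes="\n".join(f"- {n}" for n in notes))
```

Trivial on its own, but once a scaffold like this exists, it composes: pipe the rendered brief into whatever model you use, version it, and refine it as your judgment sharpens.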
Sarah Evans made a case that should have been uncomfortable for a lot of people in the room. Sixty percent of searches now result in zero clicks. The homepage isn't the homepage anymore. Discovery is increasingly happening inside AI-generated summaries before anyone reaches your site.
She laid out a framework for what she calls Generative Engine Optimization, the practice of structuring content not for traditional search rankings but for citation by large language models. Her system moves through four layers: prompt, retrieval, citation, and refresh. And the metrics that matter are no longer clicks and impressions. They're citation frequency, answer presence percentage, and LLM-generated traffic. By 2027, she expects agentic commerce metrics (AI agents mediating transactions) to become the standard.
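For readers who want to make those metrics operational, here's a rough sketch of how "answer presence percentage" and "citation frequency" could be computed once you have a set of tracked prompts and the citations each AI-generated answer returned. The function names and data shape are my own assumptions, not part of Evans's framework, and collecting the citation data in the first place is the hard part, out of scope here.

```python
def answer_presence(results: dict[str, list[str]], brand: str) -> float:
    """Percent of tracked prompts whose AI answer cites the brand.

    `results` maps each tracked prompt to the list of brands/domains
    cited in the answer it produced (however you collect that).
    """
    if not results:
        return 0.0
    hits = sum(brand in cited for cited in results.values())
    return 100.0 * hits / len(results)

def citation_frequency(results: dict[str, list[str]], brand: str) -> int:
    """Total times the brand is cited across all tracked answers."""
    return sum(cited.count(brand) for cited in results.values())
```

Run the same prompt set weekly and these two numbers become a trend line, which is the "refresh" layer of her four-layer system in miniature.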
The connection to audience intelligence is direct. The upstream research I do, understanding who audiences are, what they care about, where the competitive whitespace sits, now informs not just what content a brand should create but what that brand should be the answer to. That's a meaningful shift. The question used to be "what should we publish?" Now it's "what should we be known for in the places where AI is doing the recommending?"
Dipin Oberoi's session mapped the evolution I've been living professionally for over a decade. The shift from traditional social monitoring (keyword-based, reactive, volume-focused) to AI-powered social intelligence that detects emotion, intent, and emerging themes before they surface in performance metrics.
The stat that should stop every CMO in their tracks: less than one percent of social data is typically analyzed by brands. Eighty percent of consumer data is unstructured. Organizations are sitting on enormous volumes of signal and doing almost nothing with it.
He walked through a Walgreens case study that illustrated this well. Their team scanned over 127,000 unstructured conversations across Reddit, TikTok, X, and beauty forums. The AI-powered analysis identified unmet demand for clean beauty and inclusive shade ranges. They adjusted their beauty assortment based on those findings and saw an 18 percent category sales lift in test markets.
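The simplest version of turning unstructured conversations into counts of emerging themes can be sketched in a few lines. To be clear, this is a keyword-rule toy, not the AI-powered analysis the Walgreens team used; the taxonomy and function names are hypothetical, and in practice an LLM or trained classifier would do the tagging. But the shape of the workflow, raw text in, ranked themes out, is the same.

```python
from collections import Counter

# Hypothetical theme taxonomy: keyword rules standing in for the
# domain knowledge a researcher brings to a category.
TAXONOMY = {
    "clean_beauty": ["clean", "non-toxic", "paraben"],
    "shade_range": ["shade", "undertone", "inclusive"],
    "price": ["price", "expensive", "cheap"],
}

def tag_comment(comment: str) -> set[str]:
    """Return every theme whose keywords appear in the comment."""
    text = comment.lower()
    return {theme for theme, kws in TAXONOMY.items()
            if any(kw in text for kw in kws)}

def theme_counts(comments: list[str]) -> Counter:
    """Aggregate theme frequency across a batch of raw comments."""
    counts = Counter()
    for c in comments:
        counts.update(tag_comment(c))
    return counts
```

Even this crude version makes the one-percent problem visible: the moment you count themes instead of skimming posts, unmet demand shows up as a number you can act on.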
Monitoring tells you what happened. Intelligence tells you what's about to matter.
That's not monitoring. That's intelligence. And the distinction matters because monitoring tells you what happened. Intelligence tells you what's about to matter.
This is my lane. The work I've done for brands like YouTube, Citi, Duracell, and Jack Daniel's is exactly this. Turning unstructured audience data into strategic decisions. AI is making this work faster and more accessible, but the human layer (knowing which questions to ask, how to frame insights for decision-makers, what action to recommend) is still what separates useful intelligence from a pile of data.
Oberoi's best line captured it: "Intelligence without action is just data." Which is another way of stating the core argument from my last post. The goal isn't more information. The goal is directional clarity that actually moves something.
Telly Wong from IW Group showed what insight-first AI execution looks like in practice. McDonald's ran its first AI-driven campaign targeting AAPI consumers for the Grandma McFlurry, and the origin of the campaign wasn't a model or a prompt. It was an audience insight: 24 percent of AAPI households are multigenerational, and many U.S.-born AAPIs struggle to communicate with foreign-born grandparents because of language barriers.
The AI application (translation tools that bridged that generational and linguistic gap, with McDonald's as the shared cultural touchpoint) was the execution layer. But without the human insight about the audience, there is no campaign. The intelligence came first. The technology made it scalable.
That sequence keeps showing up, and it's not a coincidence. It's the pattern that separates organizations that use AI well from organizations that just use AI.
David DiCamillo presented data from a WSJ Intelligence and Code and Theory survey of 800 C-suite leaders that put hard numbers on a problem I see constantly. Ninety-four percent of executives say customer experience drives business success. Ninety-three percent admit their digital experience is fundamentally broken. Companies mastering digital CX generate 30 percent more revenue than competitors stuck on basic personalization, but 88 percent say AI-driven personalization remains more promise than reality.
The barriers to CX maturity aren't technological. They're organizational: leadership misalignment at 49 percent, creative talent gaps at 44 percent, and silos at 43 percent. Dan Gardner's framing was sharp. Most companies are just automating their existing mistakes faster.
This is the upstream argument for audience intelligence as a strategic function. Brands cannot build emotionally intelligent customer experiences without understanding their customers first. Audience intelligence is the foundation layer that most companies skip when they bolt AI onto broken journeys. The gap isn't tools. It's insight.
There is a clear through-line that matches what I've been seeing in my own work and my own experimentation. The marketers who will win aren't the ones adopting AI the fastest. They're the ones building AI into their existing expertise in ways that are intentional, grounded, and tied to real decisions.
The tech stack is exploding. Every organization now has access to roughly the same AI capabilities. What hasn't leveled is the judgment to know what to build, which questions to ask, and when you've seen enough signal to move.
That judgment comes from experience. It comes from understanding audiences at a depth that no dashboard is going to provide. And it comes from doing the hard, often qualitative work of developing directional clarity before you try to scale.
The tools are extraordinary. I use them every day, and they've fundamentally changed what I can accomplish as a solo practitioner building a consultancy. But the tools won't tell you what to build. That's still on you.
And that's the part that matters most.