The Tools Won't Tell You What to Build: Why AI Judgment Beats AI Adoption

AI & Measurement

Here's what I took away from a full day at Marketing Brew's The Art & Science of AI in Marketing event: nearly every marketer in the room had adopted AI, and most were still working out what to do with it.

That's not a knock. It's the actual state of things. The adoption race is over and everyone has roughly the same tools now. The new question, the harder question, is what you build with them. And the organizations pulling ahead aren't the ones with the most sophisticated tech stacks. They're the ones who've learned to use AI for intelligence, not just efficiency. That distinction is what the whole day kept coming back to.

The gap isn't between companies that have adopted AI and companies that haven't. It's between the ones using AI for efficiency and the ones using it for intelligence.

That matters because it gets at the fundamental challenge of strategic decision-making right now. More tools, more dashboards, more AI-generated output. None of it resolves the question of what to actually do next. The organizations driving action are the ones with the clearest read on what their data actually means and the confidence to act on it.

The event reinforced that at every turn.

Everyone Has the Same AI Tools. Not Everyone Knows What to Build With Them.

Howard Pyle's session stuck with me the longest: stop calling it vibe coding. Call it personal tooling.

His argument was that the intersection of your experience, your role, and AI is the thing that can't be replicated. Not the tools themselves. Those are commoditized. It's the tacit knowledge you bring to them. Knowing why a process exists. Reading between the lines. Anticipating consequences that don't show up in a model. He cited Stanford research showing 6 to 12 percent productivity growth for experienced workers in AI-exposed fields. Experience is an asset in this environment, not a liability.

That framing resonated because it describes exactly what I've been doing over the past several months. Not just using AI tools, but building with them. And that distinction is more important than most people realize.

What I've Actually Found Using Claude, ChatGPT, Gemini, and Perplexity.

I've been experimenting extensively across Claude, ChatGPT, Gemini, and Perplexity. Not casually. Deliberately, across the full spectrum of what my work actually requires: research synthesis, strategic analysis, document creation, data interpretation, collaboration, and increasingly, building custom tools and workflows.

Here's what I've found, and I want to be specific because the "which AI should I use" conversation is usually too abstract to be useful.

Each platform has scenarios where it excels. There are certain use cases where ChatGPT or Gemini will be the better fit for a particular task. But across the full range of activities that define my daily work, Claude has become my primary platform and it isn't particularly close. That assessment doesn't even account for products like Claude Code and Cowork, which are operating in a different category entirely from what the other platforms offer for building personal infrastructure.

Here's what I'm actually doing with these tools on a regular basis:

Analyzing qualitative transcripts at scale. I batch interview transcripts into Claude using structured analysis templates, extract themes across 10 to 20 interviews, then validate the AI-generated output manually. What used to take 20 hours of manual coding now takes under 4, with comparable quality and better consistency across interviews.
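To make that concrete: the structured templates do the heavy lifting, but the theme-tallying step is simple enough to script. A minimal sketch, with hypothetical theme keywords standing in for a real analysis template:

```python
from collections import Counter

# Hypothetical theme keywords -- in practice these come from a structured
# analysis template, not a hardcoded dictionary.
THEMES = {
    "pricing": ["price", "cost", "budget"],
    "onboarding": ["setup", "training", "ramp"],
}

def tag_themes(transcript: str) -> set:
    """Return the set of themes whose keywords appear in a transcript."""
    text = transcript.lower()
    return {theme for theme, kws in THEMES.items() if any(k in text for k in kws)}

def theme_frequency(transcripts: list) -> Counter:
    """Count how many interviews surface each theme (for manual validation)."""
    counts = Counter()
    for t in transcripts:
        counts.update(tag_themes(t))
    return counts

interviews = [
    "The setup took weeks and the training was thin.",
    "Cost was the main blocker; the budget review killed it.",
    "Price is fine but onboarding setup needs work.",
]
print(theme_frequency(interviews))
```

The output is a frequency table you can check against your own manual coding, which is exactly where the validation step lives.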

Processing thousands of rows of data for pattern recognition. I regularly work with large data sets where the task isn't just calculation but categorization, theme identification, and anomaly detection. Claude handles spreadsheets and structured data significantly better than any other platform I've tested. When I'm doing audience intelligence work that requires moving between qualitative signal and quantitative validation, the difference in output quality is meaningful.
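The anomaly-detection half of that work is less exotic than it sounds. A minimal sketch, flagging outlier rows by z-score; the threshold and the weekly-mentions data are illustrative:

```python
from statistics import mean, stdev

def flag_anomalies(values: list, threshold: float = 2.0) -> list:
    """Return indices of values more than `threshold` standard deviations from the mean."""
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Illustrative weekly mention counts with one obvious spike.
mentions_per_week = [120, 131, 118, 125, 122, 540, 128]
print(flag_anomalies(mentions_per_week))  # -> [5]
```

The flagged index is where the qualitative read starts: the number tells you where to look, not what it means.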

Building a project management system inside AI. Rather than trying to make Asana or Monday.com do everything, I've started using Cowork to build an operating layer that pulls from the tools I'm already using in different ways: Notion for documentation, Slack for communication, Miro for collaboration. Cowork sits on top and automates recurring intelligence briefs, pulling together competitive signals, audience data, and category trends on a set cadence. This is the shift from using AI as a tool to building AI into your operating infrastructure.

Automating simple spreadsheet tasks. Column transformations, formula generation, data cleaning, reformatting outputs for different stakeholders. These are small tasks individually, but they compound. What used to take an hour of manual formatting now takes minutes.
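A concrete example of what I mean by small tasks that compound: normalizing a messy CSV export before analysis. The column names and values here are hypothetical; the pattern is the point:

```python
import csv
import io

# Illustrative raw export: inconsistent casing, stray whitespace, formatted numbers.
raw = """ Region , Spend (USD)
 east , "1,200"
WEST,980
East ,"2,050"
"""

def clean_rows(raw_csv: str) -> list:
    """Trim headers and cells, normalize region casing, coerce spend to a number."""
    reader = csv.reader(io.StringIO(raw_csv), skipinitialspace=True)
    header = [h.strip() for h in next(reader)]
    out = []
    for cells in reader:
        row = dict(zip(header, (c.strip() for c in cells)))
        out.append({
            "region": row["Region"].title(),
            "spend_usd": float(row["Spend (USD)"].replace(",", "")),
        })
    return out

print(clean_rows(raw))
```

Individually trivial, but once the pass is scripted it runs in seconds on every export instead of costing an hour of hand-formatting.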

Perplexity deserves a mention here because it remains best in class for publicly available research and real-time information retrieval. If I need to rapidly survey what's out there on a topic, it's still my first stop. But when the work requires handling the sheer volume and complexity of a standard analytical workflow, it's not built for that.

I'll be direct about the limitations too. Data limits and document capacity (notably PDFs) are real constraints with Claude and probably my single biggest frustration with the platform overall. But its ability to actually read, interpret, and analyze what you give it is meaningfully stronger than what I've experienced with ChatGPT for the same tasks.

The AI you choose matters less than what you do with it. And what you do with it is a function of the tacit knowledge you've accumulated.

The reason I'm sharing this level of detail isn't to write a product review. It's because the tool selection question is inseparable from the larger argument about judgment and expertise. That's what Howard Pyle was talking about.

My biggest focus over the next month is pushing further into building. More coding, more custom tooling, more infrastructure that encodes my own research methodology. I had a working foundation in coding and SQL before AI, but the tools have let me take real strides in analytical output: more sophisticated formulas, cleaner syntax, faster data processing that simply wasn't possible at this pace before. The quality of my work product has materially improved because the data is in better shape before I ever get to the analysis.

And that's the part that doesn't get talked about enough. When you aren't spending hours on the technical work of cleaning, formatting, and structuring data, you can spend that time on strategy and interpretation. The insight gets sharper because you're not exhausted by the time you reach it.

I wouldn't call myself an engineer. But AI has made the learning curve dramatically more accessible for people like me who sit further along the analysis spectrum. That shift from using AI to building with AI is where I think the real professional leverage is going to come from.

The Discovery Layer Has Already Shifted. Most Brands Haven't Noticed.

AI Overviews now appear in 25% of Google searches, and roughly two-thirds of those searches end without a click (BrightEdge AI Market Pulse, January 2026). The homepage isn't the homepage anymore. Discovery is increasingly happening inside AI-generated summaries before anyone reaches your site.

But here's the part most brands are missing: AI platforms still drive less than 1% of referral traffic (WARC Future of Media 2026). The impact right now is on visibility and consideration, not site visits. That gap is exactly what makes this moment strategically urgent. By the time AI referrals catch up to AI visibility, the brands already structured for it will own the answer space. The ones who wait will be playing catch-up against entrenched citations.

Sarah Evans laid out a framework for her version of Generative Engine Optimization, the practice of structuring content not for traditional search rankings but for citation by large language models. Her system moves through four layers: prompt, retrieval, citation, and refresh. The metrics that matter are no longer clicks and impressions. They're citation frequency, answer presence percentage, and LLM-generated traffic.
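Metrics like answer presence are simple to operationalize once you're collecting AI-generated answers for a tracked prompt set. A minimal sketch; the answers and brand names are illustrative:

```python
def answer_presence(answers: list, brand: str) -> float:
    """Share of tracked prompts whose AI-generated answer mentions the brand, as a percentage."""
    if not answers:
        return 0.0
    hits = sum(1 for a in answers if brand.lower() in a.lower())
    return 100.0 * hits / len(answers)

# Answers collected for a tracked prompt set (illustrative text).
tracked_answers = [
    "Top options include Acme and two competitors.",
    "Most analysts recommend starting with BetaCorp.",
    "Acme is frequently cited for this use case.",
    "For budget buyers, consider GammaSoft.",
]
print(answer_presence(tracked_answers, "Acme"))  # -> 50.0
```

Run on a cadence against the same prompt set, the trend line in that number is the citation-layer equivalent of a rank tracker.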

The generational dimension compounds this further. Gen Z browses TikTok and Instagram but validates on Reddit and Google before buying. Millennials toggle between social inspiration and trusted search. Gen X is efficiency-seeking. Boomers are search and review loyalists (Sensor Tower, State of Mobile 2026). GEO and AI search are layering on top of this stack, not replacing it.

A brand that optimizes for one surface is invisible to two or three generations of buyers.

The connection to audience intelligence is direct. The upstream research I do, understanding who audiences are, what they care about, where the competitive whitespace sits, now informs not just what content a brand should create but what that brand should be the answer to. The question used to be "what should we publish?" Now it's "what should we be known for in the places where AI is doing the recommending?"

Social Intelligence Is Moving From Monitoring to Prediction.

Brands typically analyze less than 1% of the social data available to them, and 80% of consumer data is unstructured, according to Oberoi. Organizations are sitting on enormous volumes of signal and doing almost nothing with it.

Dipin Oberoi's session mapped the evolution I've been living professionally for over a decade: the shift from traditional social monitoring (keyword-based, reactive, volume-focused), to AI-powered social intelligence that detects emotion, intent, and emerging themes before they surface in performance metrics.

The Walgreens case study he highlighted in his session illustrated this well. Their team scanned over 127,000 unstructured conversations across Reddit, TikTok, X, and beauty forums. The AI-powered analysis identified unmet demand for clean beauty and inclusive shade ranges. They adjusted their beauty assortment based on those findings and saw an 18% category sales lift in test markets.

Monitoring tells you what happened. Intelligence tells you what to do about it.

The strategic stakes behind this are significant. 80% of strategists say their discipline is at a crossroads, and human-led research is explicitly positioned as the defense against what WARC calls "average AI thinking" (WARC Future of Strategy 2026). As AI lowers switching costs across every category simultaneously, the brands investing in genuine consumer understanding have a structural advantage the algorithm can't replicate. The ones relying on dashboards and volume metrics are exposed.

Oberoi's best line captured it: "Intelligence without action is just data." The goal isn't more information. The goal is directional clarity that actually drives action.

AI Creativity Still Starts With a Human Insight.

Telly Wong from IW Group showed what insight-first AI execution looks like in practice. McDonald's ran its first AI-driven campaign targeting AAPI consumers for the Grandma McFlurry, and the origin of the campaign wasn't a model or a prompt. It was an audience insight: 24 percent of AAPI households are multigenerational, and many U.S.-born AAPIs struggle to communicate with foreign-born grandparents because of language barriers.

The AI application, translation tools that bridged that generational and linguistic gap with McDonald's as the shared cultural touchpoint, was the execution layer. But without the human insight about the audience, there is no campaign. The intelligence came first. The technology made it scalable.

That sequence matters more now than it did a year ago. 57% of agency leaders already cite content saturation as a top concern, and 27% of marketers expect creative and content production to be the most AI-disrupted part of their stack in the next two to three years (State of Programmatic Advertising 2026).

The creative layer is being commoditized. The intelligence layer is where durable differentiation lives.

The pattern keeps showing up and it's not a coincidence. It's what separates organizations that use AI well from organizations that just use AI.

The CX Gap Is Really a Research Gap.

David DiCamillo presented data from a WSJ Intelligence and Code and Theory survey of 800 C-suite leaders that put hard numbers on a problem I see constantly. 94% of executives say customer experience drives business success. 93% admit their digital experience is fundamentally broken. Companies mastering digital CX generate 30% more revenue than competitors stuck on basic personalization, but 88% say AI-driven personalization remains more promise than reality.

The barriers aren't technological. They're organizational: leadership misalignment at 49%, creative talent gaps at 44%, and silos at 43%. Most companies are just automating their existing mistakes faster.

The downstream consequences of that are now showing up in the data. AI-powered customer service resolved issues for 88% of users, but only 22% said the experience made them feel more loyal to the brand (Gladly 2026 Customer Expectation Report). The efficiency gain is real. The loyalty benefit is not. And 44% of retail executives expect AI to shift consumer choice toward value and fit over brand recognition by 2026 (Deloitte Retail Executive Survey 2026). Brands that have relied on inertia and category defaults are structurally exposed as AI lowers the switching cost and raises the bar for relevance.

This is the upstream argument for audience intelligence as a strategic function. Brands cannot build emotionally intelligent customer experiences without understanding their customers first. Audience intelligence is the foundation layer that most companies skip when they bolt AI onto broken journeys. The gap isn't tools. It's insight.

What This Actually Means for Marketing Leaders

There is a clear through-line connecting everything from this event to everything I've been seeing in my own work. The marketers who will win aren't the ones adopting AI the fastest. They're the ones building AI into their existing expertise in ways that are intentional, grounded, and tied to real decisions.

Every session at this event landed on the same conclusion. GEO only works if you understand what your audience needs to hear before you structure content for citation. Social intelligence only drives action if someone knows which signal matters and why. AI-powered personalization only delivers if the customer understanding underneath it is real. The technology is the multiplier. The insight is the input.

Most organizations have the multiplier. They're still skipping the input.

The judgment to know what to build, which questions to ask, and when you've seen enough signal to act doesn't come from the tools. It comes from doing the hard, often qualitative work of understanding your audience at a depth no dashboard provides.

That work is harder to automate than anyone wants to admit. And right now, it's the thing that actually separates the organizations moving forward from the ones just moving fast.

AI tools
Generative Engine Optimization
AI Strategy