There used to be people you went to. The handyman who'd actually pick up the phone, or maybe just the one neighbor who knew which end of a wrench was which. The urgent care nurse who'd tell you whether the rash needed a doctor. The lawyer friend you'd text when you felt slighted by a contractor. The neighbor whose lawn you envied and who'd actually tell you what he used. The mechanic who would not condescend. The travel agent who knew which side of the island had the better beaches. Some people had one or two of these in their lives. Most people had none. The friction of finding a real expert, getting to one, and getting them to answer plainly was, for most of human history, the friction of having an answer at all.
That friction is gone. All of those people are now one person, and that person lives in your phone.
The expert in your phone has no waiting room, no retainer, no awkward small talk, no judgment about whether the question is dumb. It is conversational, it is patient, it is available at 11pm on a Tuesday, and most importantly, it is the first thing people ask now. Not the second. Not the fallback when search lets them down. The first.
A couple in Target last month, mid-aisle, debating something on the shelf. The wife pulled out her phone and said, with no preamble, no caveat, *ChatGPT says*, and they kept walking. That sentence is the post. The way *ChatGPT says* arrived in the conversation with the same casual authority as *my mom says* or *the doctor says*. The way it didn't need to be defended. The way it was, simply, where the answer came from now.
Google took something like four years to become a verb, eight to become a dictionary entry. ChatGPT did something stranger and faster. It did not become a verb. It became a contact.
The third space the metaverse spent tens of billions trying to manufacture, the ambient social presence that Meta could not summon out of headsets and avatars, arrived instead as a name people drop in conversation with the assumption that everyone in the conversation knows who is meant.
The wife in Target said *ChatGPT says* the way she'd say *my sister says* or *Dave from down the street says*. No introduction. No qualifier. Her partner knew exactly which contact she meant, because everyone has the same contact now. Three and a half years from launch to that. Google needed longer than that just to become a verb, and the verb is the lesser thing.
Google, Bing, DuckDuckGo, Brave, the entire constellation of traditional search, are not gone. They are downstream now. They are where you go to take action on what the expert in your phone already told you. To buy the thing. To find the address. To verify a claim. To book the appointment. The discovery happened upstream, in a conversation no analytics tool can see.
That last sentence is the rest of the post.
A note before we go further. The data on this is not settled. I am stitching findings from different studies across different populations, and I am doing it because I am sitting here reading the published research and watching my wife open ChatGPT before she opens anything else, hearing a couple in Target say *ChatGPT says* in the same casual register they would say *my mom says*, getting Perplexity tips from a 70-year-old pilot client.
The published numbers and the lived experience are not the same picture, and I cannot pretend they are. So I am working through the contradictions instead of around them.
Adobe says one thing. Digiday says another. Menlo's 91% applies to AI users. G2's 52% applies to B2B software buyers. None of these is the whole population. None of them, alone, supports a headline about how everyone is doing this now. But the directional shift is real, and the contradictions exist because the measurement was built for a web that no longer describes the behavior.
What follows is what I am seeing, what the available evidence suggests, and what the mechanism predicts. If you read it and think the data is thinner than the claim, you are reading carefully. The claim is directional. The directional shift is the story.
Parametric consultation does not generate clicks
About 60% of ChatGPT queries are answered from parametric knowledge, per Digital Bloom's 2025 AI Visibility Report. The model answers from training weights. Nothing is fetched. No URL is loaded. No referral is generated. No impression is recorded. The answer is given, the decision is made, the cart is filled, the rash is or is not taken seriously, and not one analytics event has fired anywhere on the planet.
This is not a measurement lag. It is a measurement category that does not exist.
The web analytics stack, every dashboard, every funnel, every attribution model anyone is staring at right now, was built on the assumption that decision-influence requires a click. The click is the event. The event is the measurement. No click, no event, no trace. The assumption held for two decades. It does not hold now. It has not held for some time, actually, but the analytics layer is the last to know, the way the analytics layer is always the last to know, because what it measures is what it has been built to measure, and what is happening now is not that.
Cloudflare published a number in June 2025 that I have not been able to put down. Anthropic's Claude was making nearly 71,000 HTML page requests for every HTML page referral it sent. The platforms read the web at industrial scale and almost never send traffic back. It looks wrong. It is not.
71,000 : 1
HTML page requests to HTML page referrals — Anthropic's Claude, June 2025
The analytics blind spot rendered as math.
The traffic that does arrive gets misread.
Loamly looked at 446,405 visits and found 70.6% of AI-driven traffic lands as Direct in GA4, because the platforms strip referrer headers. The marketer staring at the dashboard isn't just missing AI traffic. The marketer is counting it as something else. The dashboard says Direct. The visitor says ChatGPT. Both are sincere. Only one is right.
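To see how much of that Direct bucket still carries a recoverable signal, the check is small. Below is a minimal client-side sketch, assuming a GTM-style dataLayer and an illustrative, non-exhaustive list of assistant domains and utm_source values; a visit whose referrer was fully stripped will still read as Direct, which is exactly the blind spot described above.

```typescript
// Minimal sketch: label sessions that still carry an AI-assistant signal
// before GA4 buckets them as Direct. The domain and utm_source lists are
// illustrative assumptions, not an exhaustive registry.

const AI_ASSISTANT_DOMAINS = [
  "chatgpt.com",
  "chat.openai.com",
  "perplexity.ai",
  "claude.ai",
  "gemini.google.com",
];

const AI_UTM_SOURCES = ["chatgpt.com", "chatgpt", "perplexity", "claude", "gemini"];

function detectAssistantSource(referrer: string, landingUrl: string): string | null {
  // 1. The referrer survived: match its hostname against known assistant domains.
  if (referrer) {
    try {
      const host = new URL(referrer).hostname;
      const match = AI_ASSISTANT_DOMAINS.find((d) => host === d || host.endsWith("." + d));
      if (match) return match;
    } catch {
      // Malformed referrer string; fall through to the URL-parameter check.
    }
  }
  // 2. No usable referrer: some assistants append a utm_source to outbound links.
  const utm = (new URL(landingUrl).searchParams.get("utm_source") ?? "").toLowerCase();
  return AI_UTM_SOURCES.includes(utm) ? utm : null;
}

// Push the label into the data layer as a custom dimension, so these sessions
// can be segmented later instead of disappearing into Direct.
const aiSource = detectAssistantSource(document.referrer, window.location.href);
if (aiSource) {
  (window as any).dataLayer = (window as any).dataLayer ?? [];
  (window as any).dataLayer.push({ event: "ai_assistant_visit", ai_source: aiSource });
}
```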
The substitution is showing up in the search data, not the AI data
Adobe says AI referral traffic grew tenfold between July 2024 and February 2025.
Digiday says AI platforms still drive about 1% of total web traffic. Both numbers are right. They are also useless for understanding what is actually happening in the aisles of Target, because the substitution is not showing up where the analytics are pointed.
It is showing up next door, in the search data.
Gemini ended 2025 at 548% growth, 1.73 billion monthly visits by December, per Similarweb. Perplexity grew 80%. Claude, 125%. Meta AI, 169%. ChatGPT's web traffic plateaued in late 2025, but its app was the most downloaded of the year, 1.9 billion downloads, and weekly active users went from under 400 million in January to nearly 800 million by August, per Epoch AI. The platforms are absorbing user time across every surface, and the web-only metric makes the substitution look smaller than it is, because the substitution is also happening inside apps, where Similarweb cannot see it.
The Google side of the curve is the half I keep returning to.
Datos and SparkToro's Q4 2025 clickstream data found Google searches per US desktop user fell nearly 20% year over year. Twenty percent. Users are not abandoning Google. They are running fewer follow-up searches. The expert in their phone did the synthesis upfront, and the second and third and fourth queries, the ones that used to constitute the actual research, never happen. The substitution is not showing up as a Google exodus. It is showing up as a Google attrition, query by query, the way most attritions show up, until one quarter the line bends and someone notices.
Twenty percent.
Year-over-year drop in Google searches per US desktop user — Datos/SparkToro Q4 2025 clickstream data
Users are not abandoning Google. They are running fewer follow-up searches.
Inside ChatGPT, the same thing is happening from the other direction.
Semrush's clickstream analysis found the share of ChatGPT prompts using traditional search-style language nearly doubled between October 2025 and February 2026, from 18.9% to 34.9%. People are not learning to chat. They are using the chatbot as search.
A useful complication, for honesty's sake.
Siege Media's 2.3 million-session study found ChatGPT has volume but shallow engagement, 320-second sessions, 58.5% engagement rate. Claude has the deepest engagement, 396 seconds, 2.73 pages per session, on a smaller user base. Perplexity matches Google almost exactly: 270 seconds versus 273, 58.3% engagement versus 56.8%. This is not one platform winning. It is multiple platforms each absorbing different slices of the same user behavior, while Google loses per-user query volume to all of them at once, which is a harder pattern to see in any single chart and a much harder pattern to fight.
Discovery moved up. Search moved down to confirmation.
The buyer journey did not collapse AI and search into one channel. It split them.
G2's 2025 Buyer Behavior Report found 52% of B2B software buyers now start their research with AI. In 2024 that number was 18%. But 84% of those buyers still hit the vendor's website or read third-party reviews before deciding. The Foundry/IDG 2026 AI Priorities Study, looking at IT decision-makers, found 99% use AI somewhere in the buying process and stated, in language that should be read slowly, that traditional sources have become a confirmation step rather than a discovery mechanism.
This is why the conversion numbers on AI traffic look the way they do. The explanation is upstream: the expert in the phone did the qualifying before the visitor showed up. The lead is warm because the warming happened in a conversation no one logged.
AI traffic is not just better than average. It is better by a margin that demands explanation.
| Source | AI-Referred Traffic | Traditional Traffic |
|---|---|---|
| Microsoft Clarity (1,200 sites) | 3x conversion lift | Baseline |
| Similarweb | 11.4% (AI-referred) | 9.3% paid / 5.3% organic |
| Seer Interactive (B2B) | 15.9% (ChatGPT) | 1.76% (Google organic) |
The flip side is the part most marketers are not tracking, because most marketers cannot. If you are not surfacing in the AI conversation, you are being cut from the consideration set before any search happens. There is no impression to flag the miss. No referral from the conversation that didn't include you. No click-through rate dropping to signal a problem. You see what landed on your site. You do not see what got filtered out before it could.
Derek Buntin posted about this on LinkedIn last October. A new lead came through his website. He asked how they found him. They said ChatGPT. The referring domain in his analytics said direct or organic search. The AI gave the recommendation. Google took the credit. The dashboard told a story that wasn't true.
The friction is the only thing keeping AI deliberate
Right now, asking the expert in your phone requires a small deliberate move. Open the app. Open the website. Type the question. The friction is small but it exists, and it is the only thing keeping AI consultation in the foreground of awareness instead of the background.
It will not last.
Apple Intelligence has been a flop. Siri has been a punchline for a decade. None of which matters for the structural argument, because the structural argument does not require Apple specifically. If Siri becomes genuinely conversational, and there has been chatter about Gemini powering parts of it, or if Google Assistant absorbs Gemini more deeply, or if any one of a half-dozen other ambient surfaces lands well enough to stick, the AI answer stops being something you go fetch and starts being something already there. The consultation becomes the same gesture as checking the time.
When that happens, the behavior the Target couple is exhibiting as a slightly novel reflex becomes the default. The blind spot does not get incrementally larger. It becomes the surface. I am not predicting that this ships next quarter. But the gap between *open an app* and *ask your phone* is one product release away from collapsing, and the curve does not particularly care which company gets there first.
Build for the layer your dashboard cannot see
The parametric layer is doing more work than any dashboard can show. Strategy that addresses only retrieval is solving less than half the problem.
The play is dual-layer. Retrieval optimization handles what happens when the expert in the phone looks something up: structured content, semantic density, citation-ready passages, technical crawlability. Real work, worth doing. But parametric presence handles what the expert already believes before looking anything up: Wikipedia accuracy, entity signals in widely-cited publications, representation in the training data that shapes the model's priors. A brand that is absent from the parametric layer is absent from the 60% of queries that never trigger retrieval at all, and absent invisibly, which is the worst kind of absence to be. The Retrieval Layer Intelligence Report goes deeper on the mechanics if the dual-layer framing is new.
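On the retrieval side, technical crawlability is the most mechanical piece to verify. Here is a minimal sketch, assuming Node 18+ for the built-in fetch and an illustrative list of AI crawler user agents; it only reads a site's robots.txt and reports which of those agents are disallowed from the root, nothing more.

```typescript
// Minimal crawlability check: fetch robots.txt and report whether a handful of
// AI crawler user agents are disallowed from the site root. The agent list is
// illustrative, not exhaustive, and only a blanket "Disallow: /" counts as a
// block here; per-path rules are ignored for brevity.

type RuleGroup = { agents: string[]; disallows: string[] };

const AI_CRAWLERS = ["GPTBot", "OAI-SearchBot", "ClaudeBot", "PerplexityBot", "Google-Extended"];

function parseRobots(robots: string): RuleGroup[] {
  const groups: RuleGroup[] = [];
  let current: RuleGroup | null = null;
  let readingAgents = false; // true while consuming consecutive User-agent lines
  for (const raw of robots.split(/\r?\n/)) {
    const line = raw.split("#")[0].trim();
    const idx = line.indexOf(":");
    if (idx === -1) continue;
    const field = line.slice(0, idx).trim().toLowerCase();
    const value = line.slice(idx + 1).trim();
    if (field === "user-agent") {
      if (!current || !readingAgents) {
        current = { agents: [], disallows: [] };
        groups.push(current);
      }
      current.agents.push(value.toLowerCase());
      readingAgents = true;
    } else if (current) {
      readingAgents = false;
      if (field === "disallow") current.disallows.push(value);
    }
  }
  return groups;
}

function blockedAtRoot(groups: RuleGroup[], agent: string): boolean {
  // Prefer groups that name the agent specifically; fall back to the "*" group.
  const name = agent.toLowerCase();
  const specific = groups.filter((g) => g.agents.includes(name));
  const applicable = specific.length > 0 ? specific : groups.filter((g) => g.agents.includes("*"));
  return applicable.some((g) => g.disallows.includes("/"));
}

async function auditAiCrawlerAccess(siteUrl: string): Promise<void> {
  const res = await fetch(new URL("/robots.txt", siteUrl)); // Node 18+ global fetch
  const groups = parseRobots(res.ok ? await res.text() : "");
  for (const crawler of AI_CRAWLERS) {
    console.log(`${crawler}: ${blockedAtRoot(groups, crawler) ? "blocked at root" : "not blocked"}`);
  }
}

auditAiCrawlerAccess("https://example.com").catch(console.error);
```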
The measurement question is the harder one, and I do not have a clean answer. A branded search spike that correlates with no campaign, no press hit, no social mention, is probably AI-driven discovery showing up as the only event the analytics layer can record. Some practitioners are already correlating Search Console branded search trends with AI platform activity to estimate the dark funnel. Directionally right. Structurally insufficient. None of the current AEO monitoring tools close this gap cleanly, because the gap is not a tooling problem. Decision-influence has moved upstream of the measurement layer, and the measurement layer has not caught up, and pretending otherwise is the form of self-deception that lets a quarter go by before anyone asks the question that should have been asked at the start.
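For the directional version of that branded-search correlation, here is a minimal sketch. It assumes you have already exported two aligned weekly series, branded query impressions from Search Console and AI-assistant-tagged sessions from wherever you log them; the numbers below are hypothetical placeholders, and a high correlation is evidence of co-movement, not attribution.

```typescript
// Minimal sketch: Pearson correlation between weekly branded-search impressions
// (exported from Search Console) and weekly AI-assistant-tagged sessions.
// Both series below are hypothetical placeholders; real exports would replace them.

type WeeklyPoint = { week: string; value: number };

function pearson(xs: number[], ys: number[]): number {
  if (xs.length !== ys.length || xs.length < 2) throw new Error("need aligned series");
  const mean = (v: number[]) => v.reduce((a, b) => a + b, 0) / v.length;
  const mx = mean(xs);
  const my = mean(ys);
  let num = 0, dx2 = 0, dy2 = 0;
  for (let i = 0; i < xs.length; i++) {
    const dx = xs[i] - mx;
    const dy = ys[i] - my;
    num += dx * dy;
    dx2 += dx * dx;
    dy2 += dy * dy;
  }
  return num / Math.sqrt(dx2 * dy2);
}

// Hypothetical weekly exports, aligned by ISO week.
const brandedImpressions: WeeklyPoint[] = [
  { week: "2025-W40", value: 1180 },
  { week: "2025-W41", value: 1255 },
  { week: "2025-W42", value: 1390 },
  { week: "2025-W43", value: 1510 },
  { week: "2025-W44", value: 1620 },
];
const aiTaggedSessions: WeeklyPoint[] = [
  { week: "2025-W40", value: 62 },
  { week: "2025-W41", value: 71 },
  { week: "2025-W42", value: 88 },
  { week: "2025-W43", value: 97 },
  { week: "2025-W44", value: 112 },
];

const r = pearson(
  brandedImpressions.map((p) => p.value),
  aiTaggedSessions.map((p) => p.value),
);
console.log(`Branded search vs AI-tagged sessions, r = ${r.toFixed(2)}`);
// A high r over enough weeks, with no campaign or press spike to explain it,
// is the directional signal described above. It is not attribution.
```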
The industry data is correct. AI search is a small fraction of total web traffic. Lived experience is also correct. AI is the first stop. Both are true. They are not in tension. They are describing different parts of an animal the dashboard can only see one end of, and the end the dashboard cannot see is, increasingly, the end that matters.
The marketers who build for both layers, retrieval and parametric, while the measurement catches up, are the ones who will have presence where the expert in the phone is making recommendations. The ones waiting for the dashboard to confirm the shift will be reading the data correctly and missing the behavior entirely. Which, if you have been watching, is what they have already been doing for some time.