Richard Palk, seasoned marketer and Little Grey Cells Club report author, reviews the AI-driven evolution of Search, and four related implications for brands.
Achieving a Search milestone
In May 2025, at the first annual Web Summit Vancouver event, the high-profile marketer and entrepreneur Neil Patel presented some remarkable data on global Search behaviour, which rapidly spread online.
Perhaps the single most repeated data point since, from a presentation that recast the traditional acronym SEO from “Search Engine Optimisation” to “Search Everywhere Optimisation”, was that OpenAI’s ChatGPT has passed the “1 billion global daily searches” milestone.
This remains an order of magnitude below Google’s behemoth search engine, which processes nearly 14 billion searches daily – although ChatGPT is claimed to have reached the 1bn milestone 5.5 times faster than Google did (Figure 1):

Figure 1. Source: Visual Capitalist
Of course, Google launched in 1998, long before the advent of the mobile web and of smartphones – so ChatGPT’s rise isn’t an apples-to-apples comparison. Moreover, Patel himself has explicitly stated that he doesn’t expect ChatGPT to be a true threat to Google’s established dominance in Search.
Nonetheless, Patel’s presentation is one of many new testaments to a seeming emergent truth, with deep implications for marketers: not only the practice of Search Engine Marketing (“SEM”) but perhaps its very nature is changing, daily, as a result of the mass adoption of Generative AI tools.
We are now firmly in the era of the Answer Engine.
Search vs Answer: the distinction
The concept of “searching” elsewhere than on “search engines” is far from new. According to Patel’s data set, another Alphabet platform, YouTube, processes a further 3.3bn searches daily. Social platforms such as Instagram, Pinterest, Facebook and TikTok aggregate over 10bn. And eCommerce giant Amazon – long known to marketers as a key search/research tool for consumers on multiple continents during their purchase journeys – processes 3.5bn by itself.
A common top-line driver traditionally underlay all the above “searches”: people were seeking links, resources and assets hosted on the Open Web, for a plethora of personal or professional reasons.
The meteoric rise of ChatGPT – as well as of other GenAI challengers such as China’s DeepSeek, xAI’s Grok, Anthropic’s Claude, Alphabet’s own Gemini, and Microsoft’s Copilot – speaks, however, to a deeper common driver.
We don’t just search for clickable links – to visit webpages and other online assets.
We search for information – to answer our questions, and provide solutions to our challenges.
Figure 2, which visualises web traffic data rather than in-platform activity, shows nearly 200m daily average visits to OpenAI platforms, in parallel with their 1bn daily searches:

Figure 2. Source: pymnts.com / SimilarWeb
Overlaying Patel’s data implies a simple average of 5 “searches” per visit to OpenAI tools.
These queries are processed by sophisticated software systems that aim to understand the query, draw on vast amounts of information, and thereby deliver a specific, accurate answer.
Hence, we should categorise such Natural Language Processing tools as related to, but distinguishable from, Search Engines. They are our contemporary Answer Engines.
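As a sanity check, that implied average follows from simple division of the two quoted figures (the calculation itself, not the underlying data, is ours):

```python
# Simple average implied by combining Patel's and SimilarWeb's figures
daily_searches = 1_000_000_000  # Patel: ChatGPT's daily "searches"
daily_visits = 200_000_000      # SimilarWeb: daily visits to OpenAI platforms

searches_per_visit = daily_searches / daily_visits
print(searches_per_visit)  # → 5.0
```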
From Searches to Synthesis…via “Search-like Behaviour”
At a fundamental level, the Large Language Models (“LLMs”) powering the likes of ChatGPT and Claude weren’t conceived always to interrogate the Open Web in order to supply a Natural-Language-formatted response for a user.
Rather, the petabytes of data integrated during their training and current functioning are their base resource, used to rapidly summarise long-form content, explain complex topics in plain language, draft copy, compare information from multiple sources – or handle other tasks.
LLM-powered tools don’t just present information in the form of links prioritised by organic and paid ranking authority, and keyword matching. They synthesise their accessible data into conversationally-presented information that matches as closely as possible a user’s input intentions.
Indeed, there are multiple information types for which they are specifically programmed not to execute web searches. A late May 2025 “leak” of a Claude 4 system prompt revealed, for example, that this specific LLM should “Never search for queries about timeless info, fundamental concepts, or general knowledge that Claude can answer without searching”.
Instead, Claude 4 decides whether or not to run web searches – and thereby, whether and when to present clickable links to its user – based on dimensions such as requests for the latest version of regularly-updated data, or for insight into highly-topical, non-archived subjects of discussion.
This key distinction from traditional search engines warrants returning to Neil Patel’s presentation – specifically, to his use of the word “searches” to describe the 1bn daily ChatGPT inputs. In this respect, he differs importantly from OpenAI’s own description – which may or may not have been the source of the data point he shared in Vancouver.
In December 2024, OpenAI tweeted to share the milestone of 1bn user messages.
This is far from mere semantics. Longstanding SEM expert Rand Fishkin’s recent robust analysis, in collaboration with a SEMRush team, claims that “…70% of prompts [on ChatGPT] are for non-search-like functions: creating images, summarizing text, writing code, doing math homework…”
Moreover, Fishkin’s research indicates an average of 8 “messages” linked to each initial prompt. From this, he extrapolates a rather less immediately troubling competitive data point for Google: 37.5m daily “searches” on ChatGPT (Figure 3).

Figure 3. Source: Rand Fishkin/Sparktoro & Datos/SEMRush
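Fishkin’s extrapolation can be reconstructed from the figures quoted above; the step-by-step arithmetic below is our own reading of his method, not his published workings:

```python
# Reconstructing the 37.5m estimate from the article's quoted figures
daily_messages = 1_000_000_000   # OpenAI's "1bn user messages" milestone
messages_per_prompt = 8          # Fishkin: avg 8 messages per initial prompt
search_like_share = 0.30         # 70% of prompts are non-search-like

daily_prompts = daily_messages / messages_per_prompt    # 125m initial prompts
daily_search_like = daily_prompts * search_like_share   # search-like subset
print(f"{daily_search_like / 1e6:.1f}m daily search-like prompts")  # → 37.5m
```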
Conclusion: Four SEM Implications of the Rise of the Answer Engines
Despite our cautionary tale about a single large and memorable number – 1bn somethings happening daily – search-like behaviour on ChatGPT is unarguably rising, and fast. There was no daily search-like behaviour on ChatGPT as recently as 2022!
We also know that the Answer Engines on which search-like behaviour is growing do query the Open Web, even if not always – Perplexity was even explicitly positioned at launch as an “AI Search Engine”.
Moreover, as was addressed in Little Grey Cells’ April 2025 “Algorithms and Artistry” report with Adobe, traditional search engines are increasingly incorporating AI responses to content entered in their search bars – not only Google with Gemini, but also Bing with ChatGPT itself.
With these inescapable truths in mind, we believe senior marketers with any responsibility for the creation, quality or performance of their brands’ online content need to equip themselves with an understanding of this rapidly-evolving marketing landscape, and of its associated lexicon.
By way of conclusion, we propose below four key implications for senior marketers of this new Search landscape – with Little Grey Cells Club as the ideal community within which to discuss them.
1. Learn a new acronym: E-E-A-T
Google has recently added a new “E” (Experience) to the long-established E-A-T (Expertise, Authoritativeness, Trustworthiness) framework in their Search Quality Rater Guidelines.
Each of these four requirements for supporting the performance of your marketing content is of even greater relevance in the context of Answer Engines, which seek trustworthy, high-quality content to synthesise into concise Natural Language responses.
2. Learn a new phrase: Zero Click Search
As AI searches frequently result in Answer Engine responses that cite authoritative sources but offer no onward links to click, your marketing content needs to reflect how LLMs present responses to their users, and how they and their users then interact, based on the responses offered.
3. Learn a new phrase and acronym: Generative Engine Optimisation (“GEO”)
Not only do your marketing content and your website structure and schema need optimising for Answer Engines – your SEM teams and tools will need “optimising” with new skills and knowledge.
4. Look beyond LLMs: Voice Search and Agentic AI both impact Answer Engines
Two further interrelated technologies need careful attention in the context of Answer Engines. The ubiquity of Siri, Alexa and Google Assistant in home and mobile devices means that searching by voice, rather than via screens or keyboards, is already standard for 1 in 5 web users.
However these existing “Agents” evolve, many inputs into Answer Engines, and their responses, may come by voice – with new Agentic AI intermediaries increasingly handling those interactions.