# GEO Is Not SEO: The New Rules for Getting Recommended by AI

## Metadata

- **Channel:** The Grow and Convert Marketing Show
- **YouTube:** https://www.youtube.com/watch?v=DTOSjwfkkBk
- **Date:** 01.04.2026
- **Duration:** 44:41
- **Views:** 195

## Description

Most people optimizing for AI search are focused on the wrong metric.

Getting cited by an AI model feels like a win, but a citation doesn't mean you're actually influencing the answer.

And after sampling AI search prompts over 20,000 times, the data makes that painfully clear.

In this episode, Devesh and Benji from Grow and Convert sit down with Bernard from Clearscope to break down what's really driving AI recommendations and what isn't. 

They get into why Gemini searches an average of 5.7 times per query while ChatGPT barely searches at all, why Google appears to be actively suppressing brand citations even when it mentions you, and why the GEO "hacks" everyone's pushing right now (schema, Reddit, you name it) aren't moving the needle.

The bigger takeaway: training data controls 70–80% of what AI recommends. Web search only affects 20–30%. Which means the content strategy that actually works in this new era isn't about producing more; it's about going way more specific.

In this episode:

- Why getting cited by AI ≠ influencing the answer
- Gemini vs. ChatGPT: how differently they search (and why it matters for your strategy)
- The brand suppression finding: why mentioning yourself might be hurting your citations
- Why "Content 2.0" is about persona-specific, long-tail content, not volume
- How Grow and Convert is evolving Pain Point SEO for the AI search era

Relevant articles:
Invisible prompts: https://www.growandconvert.com/ai/invisible-prompts/
Topic based GEO: https://www.growandconvert.com/ai/topic-based-geo/

Timestamps:
00:00 — The brand citation suppression finding
05:00 — AI temperature explained
09:00 — Competitive vs. niche categories
12:00 — Content 1.0 vs. Content 2.0: from keyword intent to persona matching
17:00 — The data: Gemini searches 100% of the time, ChatGPT only 40–70%
24:00 — Rand Fishkin's brand variance study and what it actually means
28:00 — Why GEO hacks (schema, Reddit) are a distraction
33:00 — Gemini averages 5.7 searches per query. ChatGPT averages 0.77
38:00 — How to actually influence training data
41:00 — Why Gemini is going to win — and the future of content strategy

## Contents

### [0:00](https://www.youtube.com/watch?v=DTOSjwfkkBk) The brand citation suppression finding

Gemini 3 has an explicit flag in their system where if a brand is being mentioned in a commercial prompt, it does not cite that website. — Wait, wait, wait. Hang on. You're saying if you mention yourself in the post, you can't both be cited and mentioned in the overview? — Look, look at this. They are using complete and utter crap that has nothing to do with what you would normally respect as SEO. — Content 1.0 was about targeting keywords and the intent around a keyword, and Content 2.0 is going to be about figuring out how to write content that best matches that persona. And so how do we influence that personalized search? — Yeah, my understanding from the OpenAI perspective is that it's black-boxy, that OpenAI doesn't reveal what goes in the training data or how often they update it. Is that true? — I think Gemini is just going to win. I think GPT is going to fade into obscure irrelevance. — Why? — Because when you get search as a model, your answers are anywhere between 10 to 200% better. So what we found after running tens of thousands of prompts was that Gemini 3 has an explicit flag in their system where if a brand is being mentioned in a commercial prompt, it does not cite that website. So if you look at "best SEO tools," yes, you might occasionally get Ahrefs' own blog post mentioned, because Google does still perform the search and it would be captured in that mechanism, but they're explicitly ignoring it otherwise. Basically, if we were to write "best SEO tools" at Clearscope, then we cannot get a citation, because Clearscope is being mentioned as one of the top tools within SEO, which then kind of defeats the purpose of the content. — You're saying if you mention yourself in the post, you can't both be cited and mentioned in the overview.
— I am saying if Google is mentioning your brand, like Semrush, Ahrefs, Surfer or Clearscope, Frase, whatever, then you cannot qualify to be a source for that query. — Oh, I see. So you're saying here, for "best SEO tools," the AI overview mentions Semrush, Ahrefs, SE Ranking, and then if you look on the right, you have us. — They will never be a source here. — It's not never, but it's very dramatically suppressed currently. — That's interesting, because when we look at "best content marketing agencies," for SaaS we're one of the top sources and we're mentioned. — Really? — Maybe it's just a software thing. Yeah, maybe. Unless we're not mentioned today. Oh, there we are. Yeah. — Are you mentioned or are you cited? Oh, you are. Okay. — Yeah, we're actually seeing that a decent amount. — Interesting. — I think it's a function of how you write a lot of that content. — Yeah. There's the Lily Ray blog post and tweets about self-promotional listicles hurting your SEO, or hurting your ability to show up for that kind of stuff, or just not ranking. We're not seeing that at all across our clients. We have a number of these listicle-type blog posts where the client both ranks, or has a citation, and is shown as one of the recommended brands. And I think it's just a function of how the post is written. I think a lot of the posts she was talking about were clearly AI, clearly self-promotional, not identifying themselves as the source of information. In the way that we write all of them, we're explicit in saying here's why we're putting ourselves as number one, or here's why we're ranking things this way. And I don't know, maybe that helps. But yeah, we have a number of different clients where one of our content pieces is a source of information and it's also being pulled as a brand mention.
— Yeah, I'm not saying it's impossible, but here, I'll just show you the other really deep experiment we were diving into. You guys know temperature and all that stuff in the whole Google ecosystem, right? Temperature — it defaults to one. All right. So, temperature in the AI world just refers to how much creative freedom you give to

### [5:00](https://www.youtube.com/watch?v=DTOSjwfkkBk&t=300s) AI temperature explained

the model. So essentially, as you can see here when I hover over it, Gemini 3 gives best results at the default of 1.0. 1.0 is basically what most models use by default: GPT, Claude, and so on. One gives them a good amount of freedom, but it still provides guardrails, whereas two is just "make stuff up." But at zero, you cannot deviate too much from the training data. And so essentially then we say, okay, well, zero is basically — oh, your way to figure out what the training data defaults to. — Aha, yes. So if I run "best content marketing agencies" with grounding with Google Search, what you will see is that you get recommended, but you don't show up as a citation or source. And this is across the board. This is "best SEO tools": Semrush, Ahrefs, SE Ranking, Screaming Frog, Surfer, Clearscope, whatever. But look at this, they are using complete and utter crap that has nothing to do with what you would normally respect as SEO. Same thing happens here. Well, I guess Ten Speed. I don't know if Ten Speed actually shows up here. — Nice. We're in there. Are we the complete and utter crap? No. We're actually mentioned. — Yeah. Yeah. You're here. But you can see here, right? Basically nothing. — Yeah. Those are interesting sites to be pulling from. — Yeah. They're oftentimes actually absurdly bad. Well, I don't know about the agency one, but when you look at this crap — you've gone in and evaluated the content itself and it just looks bad. — Yeah. It's like, what is this? This is nothing. — And this is training data. This is as close to training data as you're going to get. — Well, the output is training data, but it's still grounding in that. So, it's mixing. — Yeah. Yeah. So basically what ends up happening is that it's not run at zero, it's run at one.
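The temperature mechanic being described here boils down to temperature-scaled softmax sampling. A minimal sketch, illustrative only — the logits are made up and this is not Gemini's actual decoding code:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick an index from `logits`, with `temperature` controlling
    how far the choice can stray from the most likely option."""
    if temperature == 0:
        # Temperature 0: greedy decoding -- always the top choice,
        # i.e. stick as closely as possible to the training data.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Scale logits, then softmax-style weights. Higher temperature
    # flattens the distribution, giving unlikely options more chance.
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    weights = [math.exp(x - peak) for x in scaled]
    r = rng.random() * sum(weights)
    cum = 0.0
    for i, w in enumerate(weights):
        cum += w
        if r < cum:
            return i
    return len(weights) - 1

# Hypothetical next-token scores for an answer being constructed.
logits = [2.0, 1.0, 0.2]
rng = random.Random(42)
greedy = sample_token(logits, 0, rng)                     # always index 0
spread = [sample_token(logits, 1.0, rng) for _ in range(1000)]
```

At temperature 0 the highest-probability option always wins (closest to the training data); at 1.0 the lower-probability options surface occasionally, which is exactly why repeated sampling maps out a long tail of answers.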
And so when it's run at one, there's a bit more creative freedom, but I don't have the direct data. Basically, we found that it ended up getting cited like 40% of the time, because Google searched, and after they searched, they determined that that piece of content was good enough and then used it as a source. But by default, the answers are being constructed without those criteria, and then Google will search at temperature one, and from there you then get to possibly vie for it. But there's definitely a suppression happening in terms of brand mention to source. And I think this is Google's way of saying, okay, if I mention you, I shouldn't look at the content you're producing, because it's going to be biased. And that's why self-promotional stuff has basically fallen off a cliff. Obviously, if you still do it tastefully and do it well, you're still going to rank in Google Search, and then you're still going to potentially win a citation or a source, but by default, that's been erased from the current system. — Yeah. Again, though, we're not seeing that. We do that for our clients and it's still working. I mean, I'd have to look carefully. Well, here's a theory that I have: when you're doing marketing in really crowded or competitive categories, it is much harder to influence the search than if there's not a lot of information on that topic area. So the example that I have is we started working with this trucking management software, and before we

### [9:00](https://www.youtube.com/watch?v=DTOSjwfkkBk&t=540s) Competitive vs. niche categories

started working with them, there just wasn't a lot of good content in that category, because there aren't a lot of software options or much information on what the category does. And so all of the content we're producing for them is pretty novel for that entire category, and it's helpful and educational, and they're getting cited like crazy, and they're also showing up as the answer, just because there's not a lot of good information on that topic. But then if you go into a space like marketing, where you're trying to influence content marketing agencies or SEO tools, I think what's happening is they're taking the top brands much more into consideration than certain list posts or certain pieces of content. So they already have a sense, for SEO tools, of what some of those top brands are. And so it's much harder to influence a search for "best SEO tools" or "best exercise bikes" — Peloton is just going to be the known brand and come up over and over again over some new entrant in the space, unless you get a ton of people talking about you on multiple blogs and across social; then that could potentially change the outputs. But I think it's much harder to influence those categories that are really established. — I don't know what you think about that. — Yeah, there's probably something there. Because there's no content there, the model is essentially forced into just accepting the content being produced; they have to use it, essentially. I mean, I can see that. But I also think for a lot of the searches, we're seeing the search of the web take place. It's very rare, unless it's more of a top-of-funnel or mid-funnel query, that you wouldn't see that search take place. And so I think there are two things happening on top of the grounding data. There's the search happening in the background to try to find more context, or just to check its answers.
And then there's also personalization happening. And I think the personalization is probably where a lot of the opportunity lies for SEO or GEO, because if we think about the future of search: Google was based around keywords. Now, with LLMs, you have much more identifying information. And if the goal of the LLM is to provide the best answer to the user, it's going to use as much data about that person as it has to provide a great answer. And so I think the opportunity lies in content. In the SEO world, content 1.0 was about targeting keywords and the intent around a keyword, and targeting

### [12:00](https://www.youtube.com/watch?v=DTOSjwfkkBk&t=720s) Content 1.0 vs. Content 2.0: from keyword intent to persona matching

and then Content 2.0 is going to be about figuring out how to write content that best matches that persona. And so how do we influence that personalized search? I think the answer comes less from these high-level topics and more from specifics. So for example, if Grow and Convert's best clients are SaaS companies, the more we can show that our content marketing strategy works for SaaS companies — in terms of the content that we're writing and the case studies that we're producing that map to those personas — the better. Basically, make every piece of content as specific as possible: provide real-world examples, talk about the personas who would use it. I think that's going to influence the search a lot more down the road, because if someone's saying, "Hey, I'm in a mid-market SaaS company and this is my role. I'm looking for an agency that can help me grow leads," it's saying, "Huh, Grow and Convert's positioning is all around driving lead gen through content, and they have a bunch of case studies on their site that are also talking about serving that audience. I'm going to recommend this company because it best matches the persona of the person searching." — I'm on board with all of that. I think it's maybe just the role of informational content. What I've kind of seen happen is, at the end of the day, if Clearscope gets cited for "how to do SEO," I don't know what the marginal value of that is. I mean, obviously I could hypothetically and philosophically say, okay, well, if we get cited, then the models must think Clearscope has topical authority in internal linking or whatever, and that's probably a good thing, but I'm not really getting traffic from it.
Instead, what we get cited for is our pricing page or our feature page, and that maps to what you're talking about, where — it's because they already know you as an SEO tool. So, for example, if you run that search and they're citing your pricing page and some information about your product, they're just trying to serve the best answer, because they want to pull your current pricing and the current positioning of the product to display in the answer. But I think if we take it a level further, there's still opportunity to show up for other bottom-of-the-funnel searches and influence those answers by being way more specific in what you're targeting. So if we think about our framework around Pain Point SEO, I still think it's valid, but the difference is that the content you used to write was for something like "best SEO tools," whereas now I think you need "best SEO tools to do this specific task" or "to update content" — it actually has to be tied to specific use cases or certain personas. And I think that's the gap where I don't see companies thinking yet. If you're way more specific in writing those pieces — if, again, there's not enough information about, okay, within this SEO tool set, which are the best for these specific use cases — then I think you can influence those answers a lot more, because people are running these really specific searches. We just wrote a post about one of our clients: when she first queried an LLM about which content marketing agency to hire, it was paragraphs of information, like "I'm doing this, I've tried this before, I'm looking for an agency to do this specific task," and then it recommended us as number one, because we had case studies that closely tied to what she was looking for.
And so I think that's what I'm thinking toward the future: it's the specificity that's really going to matter. Everyone has written this really high-level informational content, like your "how to do SEO" example, but not many people have then gone on to the even more long-tail stuff, and I think the long-tail stuff is going to be where the opportunity lies for this next iteration of search. — Yeah, I'm 100% with you there. So then let me show you where we're thinking at Clearscope. It's probably very relevant to what y'all are doing as well. So, a couple of things. Again, we've been doing a lot of internal testing; the whole industry is moving crazily fast. All right, I'll start here with a lot of the data sampling that we've done. This is primarily analyzing Gemini and ChatGPT, obviously the top two models.

### [17:00](https://www.youtube.com/watch?v=DTOSjwfkkBk&t=1020s) The data: Gemini searches 100% of the time, ChatGPT only 40–70%

So number one, what's interesting is that of all the prompts that we run, Gemini searches 100% of the time, and ChatGPT only searches about 40 — well, somewhere between 50 to 70% of the time after more data has been analyzed. So basically ChatGPT only searches about half to 70% of the time. And then, this is the more interesting part: when Gemini searches, it hella searches, whereas when ChatGPT searches, it usually just does a head-term sort of search, is what we found. — What do you mean? So it's breaking down that search into more of a query fan-out, like multiple topics, or it's searching for a wide variety of — "What are the best SEO tools," sampled against Gemini 3 a hundred times. And then basically what we found is that it's all probability, right? That's the whole idea, the probabilistic nature of LLMs. But we found that the probabilistic outcome is actually very boxed in: once you sample it a hundred to a thousand times, it normalizes and doesn't change that frequently. Basically, the large language model is constrained to create an answer within a certain box of possibilities, and when you sample it enough, you get the long tail of possibilities all mapped out by frequency. Right? So this is a hundred runs of "What are the best SEO tools" at temperature 1 against ChatGPT and Gemini. And you see here that 49% of the time it searches "best free SEO tools 2026," 27% of the time it's this one. Basically, what you're referring to is what we'd call the long tail. Models will do these really crazy things, and these are affected by user context, right? It will be like, okay, you're an enterprise, or you want whatever category, or you care about local, or whatever.
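The sampling methodology being described — run the same prompt N times and tally which web searches fire — can be sketched as a simple frequency count. The run data below is made up for illustration; it is not Clearscope's dataset:

```python
from collections import Counter

def query_frequencies(runs):
    """runs: one list of triggered web searches per sampled response.
    Returns each query's share of runs, most frequent first."""
    seen = Counter()
    for queries in runs:
        for q in set(queries):  # count each query at most once per run
            seen[q] += 1
    return {q: n / len(runs) for q, n in seen.most_common()}

# Hypothetical: four samplings of "what are the best SEO tools"
runs = [
    ["best free seo tools 2026", "seo tools comparison"],
    ["best free seo tools 2026"],
    ["best free seo tools 2026", "seo tools for enterprise"],
    ["best free seo tools 2026"],
]
freq = query_frequencies(runs)
```

Sampled enough times, the head query's share stabilizes while the one-off queries form the long tail of the histogram — the same shape described in the conversation.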
Anyway, these are the long tail of what you would then want to topic-cluster and build concrete use cases around, because these are going to be the things people care about. — Okay, this is super interesting then. So it does support this hypothesis that this is going to — I'm completely, 100% with you here. So, Gemini searches all the time, and when it searches, it searches a lot. ChatGPT does not search a lot, and when it searches, it searches one or two times. This is confirmed over here on the right. Also, on the citation thing: again, we're sampling a ton of stuff, and when I look at ahrefs.com — it doesn't exist in a hundred samplings of citations. — It's okay, seven percent, right? It exists, except it should exist way more, but it doesn't. — And that's what's leading to your theory that you think it's suppressing it as a citation, to avoid a sense of bias or whatever. — Yes. Semrush: frequency mentioned, 100%; semrush.com as a citation, suppressed. — Yeah. I wonder if it's just — yeah, I don't need to read that, because everyone has this. Another theory of mine — I don't have data backing this up — is that it's just "I already know about Semrush and Ahrefs from all of my training data and past searches, so I don't need to be searching their site; I already know this." — But the thing is, when you think about why it's searching, the reasoning as to why the model searches is because (a) it wants to make sure that the answer it's about to give to the end user is factual, and (b) that it's up to date. In those cases, it should absolutely search. There was a time when ChatGPT would always do "site:semrush.com official site pricing features," whatever, right? It's like, I need to know exactly what this site is saying because I'm about to recommend it.
So that search format has actually disappeared from this analysis. But yeah, so we saw that — are we sure that this is exactly how it works, though? There's nothing added on top of this? Because again, if we look at, say, citation data in a tracker, or just when we're looking at a SERP, we often do see the company's content as one of the cited sources. So I'm just curious then — I mean, you're clearly being suppressed, we'll just put it like that. I think there might just be more to it. Like, why would Digital Elevator or Series X Marketing — I know this person, Joe Roberson — but why would these be above you? Except that when we look at this, you're here or there. Siege Media, if we look at Siege Media, like 24, right? You're clearly being suppressed; there's some sort of suppression happening here. — Yeah, Devesh, what's the study we just recently did, the piece around the brand side versus the prompt that we were talking about the other day? — The one we haven't published yet with Caitlyn? — Yeah, because I wonder if that gives some hints here. — That study is just ChatGPT, and we were looking at the overlap between what's ranking for the fan-out queries — it's like two to four that show up in the console — versus the citation list. And I think we found that 25% of the things that are cited are ranking for the fan-out query. So then the question is, what about the other 75%? Where is it getting the other 75%? — Yeah.
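The overlap check from the unpublished study — what share of cited URLs also rank for the fan-out query — is just a set intersection. The URLs below are placeholders, not the study's actual data:

```python
def cited_also_ranking(cited_urls, ranking_urls):
    """Share of citations that also appear in the fan-out query's
    ranking list; the remainder presumably comes from somewhere
    else (e.g. training data)."""
    cited = set(cited_urls)
    if not cited:
        return 0.0
    return len(cited & set(ranking_urls)) / len(cited)

# Hypothetical: 4 citations, only 1 of which ranks for the fan-out query
share = cited_also_ranking(
    ["a.com/post", "b.com/list", "c.com/guide", "d.com/tools"],
    ["a.com/post", "e.com/other"],
)  # 0.25, mirroring the ~25% figure discussed
```

The interesting number is the complement: if only 25% of citations overlap with the rankings, 75% of what gets cited is being selected by something other than the fan-out query's SERP.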
I mean, I think it's coming from training data, and training data currently is very weird and opaque. — Yeah. So, we have it in the piece; it's going to be published next week, in fact. — But wasn't there something about the brands that are showing, though? — No, that's a different study. That's Rand's, and Rand just had people ask a bunch of times, and there's this huge variance in the brands that it actually recommends. — Yeah. Well, that's why we sampled it a hundred times, or tens of thousands of times, to understand that. And then what we found

### [24:00](https://www.youtube.com/watch?v=DTOSjwfkkBk&t=1440s) Rand Fishkin's brand variance study and what it actually means

was that there is variance, but over a large data set it all normalizes to probability. I mean, this is all web searches, but you can imagine if we looked at it from a brand perspective, it's all different, because that's the design; the frequencies over a large period of time become normalized. — I think that's what Rand concludes. At first he's like, look, if you ask for the best dental CRM or whatever, there are like 87 different CRMs that get mentioned. So, what are we even doing here, right? How do you optimize for this? But then later in the piece, he says the top three are mentioned often — every time you ask, the top three are very likely to be mentioned over and over again. So if you become one of those well-known brands, then it's bound to mention you. You're saying Siege here is mentioned every single time. — Yeah. Almost every list, which is exactly what we see. So then the question becomes, how do you affect frequency? — Yeah, frequency. This is our guess, right? It's your guess too. — This is the long tail. It actually makes sense, then, because the answer is you have to produce a bunch of really valuable content that people are sharing, and also have people writing about you as one of the top brands, or one of the influential brands, in your category. Because if you couldn't affect that frequency rate over time, then those top brands would just always stay there, and there would never be a new product or a new service offering that competes in that category. And so marketing has to be able to affect the frequency that you're shown. — Yeah. So, anyway, lots of interesting stuff. — This is super interesting, because we've been diving into this too, but this is even more interesting than the studies we've been doing. — Yeah.
I mean, we're trying to figure out what the product is, and obviously you could benefit from this information too. So: Gemini always searches, and when it searches, it searches a lot. ChatGPT doesn't search that much. This is very interesting. So we ran this over time, over seven days, and basically what we found is that an initial sampling of about a hundred, from an informational-query perspective, captured 100% of the searches that were run throughout the week. So "what is SEO" performed no additional searches outside the control over a week. It was basically stable: "what is SEO" is always going to be "what is SEO," and therefore the range of searches never changes. A research-based intent would be "how to do this." A fresh intent would be "what are the latest SEO trends," or "price of Bitcoin," or "FOMC meeting," or whatever. And a commercial one would be "best SEO tools." So then we saw, of course, that fresh searches more; research deviates a little less from the control set; and commercial actually doesn't search that much either. — So that graph on the right — the gray means it's running those queries? — The gray means that within the first hundred queries, we'd already captured all of the searches the model planned on running, and over the next seven days it came out with 17 to 23 in fresh and research, but only — 17 to 23 new, different — new web searches, yes, that's correct. And a new web search would be something that could be in the 1%, like this. And so, even if it got a new search, the frequency of that search might still be really low. — So this brings me to, then, what about all these GEO hacks, around

### [28:00](https://www.youtube.com/watch?v=DTOSjwfkkBk&t=1680s) Why GEO hacks (schema, Reddit) are a distraction

posting on Reddit, the schema? To me, none of this stuff makes a lot of sense. — No, it doesn't. I mean, you know how it goes. There are always charlatans and gurus, and maybe for pockets of time that works, or whatever. — For sure. But it's not a long-term strategy. And what I find interesting is that the metric of success everyone has around GEO right now is, does it show up as a citation? Okay, great. But a citation doesn't influence an answer. Those are two different things. And so I feel like a lot of the case studies people are showing are like, "I produced this content and now it's being brought in as a citation source." Okay — a citation source for what prompt, first of all? Because it can be used as a citation, but what is the actual prompt being run? No one's really talking about that. And then the other thing is, if we look at the sources of information it's pulling from, I'm not seeing too much social, or Reddit, or these review sites. I can see maybe — when you ran "best content marketing agencies," the second thing was reviews. So maybe it is looking at some review sites, or trying to pull — or maybe it was for the tools — for "best SEO tools." — Yeah, it was reviews. So some of those review sites have influence there. — Yeah, totally. Exactly. — But it's so interesting, because everyone's mind, in the last few months, has gone to these hacks, like "you need schema on your site" and all this stuff, and it never made any — sense, because if the LLM is already crawling the web and organizing information without schema, why does it need schema, or any of this stuff? Yeah, whatever. People are stupid, it's new territory, and there's a lot of FUD. So — back to that gray and yellow graph. — Yeah, this is really interesting.
— Even the gray means that when a new user asks, it's still going to run those queries; they're just the same queries it was running in the previous seven days. Is that right? — Yeah. The gray means they will still run these queries; they were just already captured. — So if the SERP changes for those queries, it can affect what is influencing it. — Correct. So there are only a few reasons why you would go outside of the initial band — the initial band of the control set, yeah. There are three core reasons why this would occur. Number one is that the training data or the model changes. Basically, Google trains a new data set, and I think their most recent data set still runs through January of last year — January of 2025, right? So they're already a year and a few months behind. If that changes, then yes, you're going to see a large fluctuation, the same way a core update causes a large fluctuation in SEO. So: training data changes or model changes — Gemini goes from 3.1 Pro to 3.2 Pro, or to Flash, or whatever — and you're going to see some fluctuation there. That's bucket number one. Bucket number two is that, because we know all these models are searching, the search itself changes. If a core update happens, or if rankings fluctuate for whatever reason, then yes, you're going to start to see some deviation in what gets recommended, and so on. And the third reason is the LLM itself: the LLM decides it needs more searching for this particular thing, which is why you start to see different shapes. This is a commercial prompt, "what are the best SEO tools," and commercial (a) doesn't actually search that much and (b) doesn't deviate that much.
And so you see more of a head-heavy curve, whereas when you look at something like — let me see if I have one — "what is a credit score," it looks like this: a lot flatter. So the LLM will reason about what it needs to search for, and if you happen to be in a category where it decides it needs to do a lot more searching, like fresh research, then it will evolve over time. — Those blue histogram graphs: are you counting the different queries it searches across the hundred times you're asking it? — That's right. Yeah. And we did that across many different prompts. — In a single user's query, how many times does Gemini seem to search? — In a single user's query, on average, it searches 5.7

### [33:00](https://www.youtube.com/watch?v=DTOSjwfkkBk&t=1980s) Gemini averages 5.7 searches per query. ChatGPT averages 0.77

different queries. — Yes, that's correct. — Whereas ChatGPT is like two — 0.77, we have it here. So ChatGPT, on average — yeah, this is the more robust data set — searched 77.3% of the time, and when it did search, it only averaged 0.77 searches, whereas the average number of web searches for Gemini is 5.7. — And that's per response, averaged across a bunch of different types of — like 20,000 samples. Yeah. Across informational, fresh, commercial, whatever. — I wonder if you did that same study per category, like type of keyword, — right, you mean like software versus news or politics or something? — For sure. Or even just commercial, research, fresh, and how that changes. — Yeah, there are many more breakdowns. So this 20 to 29% of the time it doesn't search the web; when it does, it searches differently, so these query sets emerge, whereas ChatGPT uses the same two to four queries, so there's basically no evolution there. For Gemini, informational and commercial intent is actually really stable, that's what we found, and fresh intent is the most volatile, which makes a lot of sense. This is probably the most interesting facet, just from a pure philosophical standpoint: the control run covers 72 to 81% of the citations that are likely to be used, and it covers 86 to 100% of the daily brand mentions that are likely to occur. You can see — define control run again. — The control run is the initial 100-response sample, and then we sample the prompt 100 times daily throughout the next two weeks. The control runs basically cover the majority of what the model is likely to recommend and cite. So that goes back to Rand's study about how the same six to eight brands get mentioned the most, because they're likely part of that control run — because they're part of the training data. Yeah, exactly.
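The two measurements Bernard describes, average web searches per response and how much of a day's citations the initial control run already covers, can be sketched roughly as below. This is a minimal illustration, not their actual tooling: the queries and URLs are hypothetical stand-ins for real sampled responses.

```python
from statistics import mean

def avg_searches(runs):
    """Average number of web searches per response, across repeated
    runs of the same prompt (each run is a list of search queries)."""
    return mean(len(queries) for queries in runs)

def control_coverage(control_urls, day_urls):
    """Fraction of a day's cited URLs already seen in the control run."""
    if not day_urls:
        return 0.0
    return sum(u in control_urls for u in day_urls) / len(day_urls)

# Hypothetical data standing in for ~100 real sampled responses.
gemini_runs = [
    ["best seo tools", "seo tool reviews", "ahrefs vs semrush",
     "top seo tools 2025", "seo software pricing", "seo tools reddit"],
    ["best seo tools", "seo tool comparison", "semrush review",
     "free seo tools", "enterprise seo platforms"],
]
chatgpt_runs = [["best seo tools"], []]  # searches rarely, few queries

control = {"a.com", "b.com", "c.com", "d.com"}   # initial 100-response sample
day_1 = ["a.com", "b.com", "b.com", "e.com", "c.com"]  # one day's citations

print(avg_searches(gemini_runs))                   # 5.5
print(avg_searches(chatgpt_runs))                  # 0.5
print(round(control_coverage(control, day_1), 2))  # 0.8
```

With enough daily samples, a coverage number persistently in the 72 to 81% range is what supports the claim that training data, not live search, dominates the answer.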
It's basically the training data that guides everything, and then this is saying the search component only influences the model about 20 to 30%. That's what we're seeing. Otherwise it's all just training data: if you asked it without any search, it would always respond within a band of responses, and then search affects it by about 20 to 30%. — Is this for Gemini and the Google products, or ChatGPT, or both? — This is Gemini, because we basically stopped testing ChatGPT, since ChatGPT doesn't really search much. And the conclusion with ChatGPT is twofold. Either it's really easy to influence, because when it searches it only searches one to two times, as you can see here, so you just need to win the head ranking and you've influenced it. Or you can make the counter-argument: because it only searches once, there's actually way less surface area for you to affect what it's going to respond with. — I was asking from a different angle. Our work with clients, tracking our pieces across dozens of clients through our tracker, suggests to me that when ChatGPT does use the results of those searches, it's weighting them less against its training data than the Google products are. When we rank clients with our list posts, which everyone says are dying but which seem to work extremely well for us and our clients, we get coverage and visibility in AIO and Perplexity way faster than in ChatGPT, which suggests to us that ChatGPT is just using search less compared to its training data.
It'll search, and your study is suggesting it does a smaller number of different searches, but then how much is it weighting that when it's mixing it with its training data? Our data seems to suggest it's weighting it way less. — Yeah, I would say if I were ChatGPT and I'm only doing one search, of course I'm going to weight that way less than if I'm Gemini and I'm doing six to ten searches. — Yeah. And this is a bit of a philosophical statement, but it also makes sense to me with the DNA and history of those companies. Google fundamentally was a search company, so of course they're going to rely on that more. Whereas with ChatGPT, I could see why the DNA of the company would be: we'll search, but we don't really need it; what we built is this really smart model, so we're going to rely on our training data more. — Right. Yeah. Like for this one, it

### [38:00](https://www.youtube.com/watch?v=DTOSjwfkkBk&t=2280s) How to actually influence training data

didn't search, right? That's why there's no web search frequency over here. — Um, which means — Yeah. — Go ahead. — I was going to say, if it doesn't search, you can't influence it, unless you're influencing the training data. So anyway, — that's what I wanted to get to. The big question for any business is: how do you influence the training data? And you're saying the training data updates with every model. — The training data does not update with every model. It updates independently of models, and it's extraordinarily expensive. That's why you see Gemini 3 at January of 2025, Gemini 3.1 also at January of 2025. All of them are January of 2025. Let me see. January of 2025. Yeah, and this one's the 2.5 Flash-Lite preview. So they update differently. There's an algorithm and there's training data, and they're different things. — Yeah. My understanding from the OpenAI side is that it's pretty black-boxy: OpenAI doesn't reveal what goes into the training data or how often they update it. Is that true, or do you have any insight? — I honestly stopped looking so much at GPT, because my analysis here and everything points to the fact that I think Gemini is just going to win. I think GPT is going to fade into irrelevance. — When they move to AI Mode is what you're saying, it'll just win. — Yeah. — Why? — Because when you search as a model, your answers are anywhere between 10 to 200% better. They're just better, and Gemini gets free access to search and everything else Google has. — Yeah. I mean, this is a little philosophical and outside of work for marketers, but that's true for these best-of, search-type queries. — Yes, but people use ChatGPT for stuff completely different from that. — Right.
— That's a different thing to know, though. I also just think usage of ChatGPT is going down. — Yeah, it's going to Claude for workflows and then Gemini for information retrieval. — I canceled my subscription last week because we just started using Claude for everything. I just don't see a purpose for ChatGPT anymore. And I think that's the case with all these models. Like Bernard's saying, certain models will be used for certain use cases. Gemini is clearly the productivity-app, work winner. Even the lead we got yesterday came from Gemini. She's in Gemini for work, because it syncs to your Google Drive and all your other Google products. — It's interesting that you do think that

### [41:00](https://www.youtube.com/watch?v=DTOSjwfkkBk&t=2460s) Why Gemini is going to win — and the future of content strategy

that's going to win. — Oh, it's going to win, for sure, in my opinion. That's why we're not even really trying to build for ChatGPT anymore. Anyway, that's just our own opinion and philosophy. Okay, this is, in my opinion, the most fascinating component: Gemini's source URLs are essentially random after day one. Within our sampling, what we find is that Gemini is just randomly pulling sources from a huge bank of possibilities. And what I feel this means, as somebody from search, is that this is basically Google doing user engagement testing. Google is building and rebuilding responses on the fly using different sources, to then understand whether those responses get positive sentiment, good engagement, and whatever the new rules for LLMs look like. — This correlates (I don't know if it means the same thing to you), but in the last two months we've seen so much volatility in ranking positions. Once we publish something, it shows up on the first page and moves down; almost every day it's been different. Is that kind of what you're saying? For any new piece of content published around a topic, they're now cycling through a bunch of different pieces to figure out which one's best? — Yes. Not which one's best, but which one's best to be used to construct or influence the response it was going to give back. Anyway, it might look at your source: you might be talking about whatever top SEO trends, and maybe Grow and Convert talks about bottom of funnel and use case pages and the long tail. So in that one particular response, it will look at you and say: long-tail, pain point SEO, bottom of funnel. And then the user says, "Oh, tell me more about this long-tail thing." — Oh, I see what you're saying.
— Then it's like, okay, pay attention to that. — Exactly. That struck a nerve with the person. I mean, this is so interesting, because it makes the case for what I was saying at the very beginning: the future has to be very specific, good information that's tailored to the user. And this whole wave of "use AI to produce content faster," where it's still the same content we were producing before, doesn't really make sense. Sure, you can produce content faster. But looking at this (we're in the process of writing a post right now on our content strategy for the future of AI search, and it ties exactly with what we're seeing here, so it's validating what we were thinking), essentially you need to write very specific content around all those various topic areas. Because I can imagine, as time goes on, those weightings might change as the LLMs figure out, when someone searches this, what do they actually mean by it? — Exactly. Yeah. — Or what is the correct strategy, or what do people seem to latch onto, like the user engagement signals you were just talking about. If you like this video, don't forget to subscribe. You can also get the audio-only versions of these shows wherever you get your podcasts. And you can follow us at growandconvert.com/newsletter for articles and updates on when these videos come out.
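Bernard's earlier claim that Gemini's source URLs are "essentially random after day one" could be quantified with a simple day-over-day Jaccard overlap between the sets of cited URLs: overlap near 1.0 means stable sources, overlap near 0.0 means the sources are effectively reshuffled each day. This is a rough sketch with hypothetical URLs, not their measurement code.

```python
def jaccard(a, b):
    """Overlap between two sets of cited URLs.

    1.0 = identical citation sets, 0.0 = completely disjoint.
    """
    if not a and not b:
        return 1.0  # two empty days: trivially identical
    return len(a & b) / len(a | b)

# Hypothetical per-day citation sets sampled for the same prompt.
day1 = {"a.com", "b.com", "c.com"}
day2 = {"a.com", "d.com", "e.com"}
day3 = {"f.com", "g.com", "b.com"}

print(round(jaccard(day1, day2), 2))  # 0.2 -> heavy reshuffling
print(round(jaccard(day1, day3), 2))  # 0.2
```

Consistently low day-over-day overlap, on stable SERPs, would be the signature of the source-rotation (engagement-testing) behavior described above.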

---
*Source: https://ekstraktznaniy.ru/video/46211*