# OpenAI ABANDONS AI Race?  OpenAI Reveals MAJOR Strategy Shift

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=a9YrdEXOjiA
- **Date:** 28.03.2025
- **Duration:** 22:48
- **Views:** 64,333

## Description

Join my AI Academy - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/


Links From Today's Video:
https://www.tiktok.com/@askcatgpt/video/7485437637357030698?lang=en
https://stratechery.com/2025/an-interview-with-openai-ceo-sam-altman-about-building-a-consumer-tech-company/

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

Music Used

LEMMiNO - Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO - Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=a9YrdEXOjiA) Segment 1 (00:00 - 05:00)

So Sam Altman actually did a recent interview that most people missed. But because I'm an avid reader of pretty much anything AI, I managed to catch this under-the-radar interview. So basically, there was this interview on this website, and you can see it's called "An Interview with OpenAI CEO Sam Altman About Building a Consumer Tech Company." And this interview seemed pretty normal until I realized they actually spoke about something that is essentially the future of ChatGPT. So this company, OpenAI, is going in a little bit of a different direction than most people think. And in this video, I'll be telling you guys exactly why. Some people might even say that OpenAI and Sam Altman no longer care about frontier models. So this is big news. But let's actually take a look at what's going on. In order for you guys to understand this, one of the things you need to understand is that OpenAI, of course, is now a large tech company. But when the company was founded, in its early days, it wasn't as big as it is now. There were no huge management structures. There were no fancy offices. It was just a small research lab dedicated to AI research, focused on building AI models that they could share with the world in order to build open-source AI. That was the early days. They had a small team, and they were just focused on that one mission. Now, of course, since then, things have gotten a little crazy, but I think a lot of people don't realize how crazy the beginnings were, because it really is important for you to understand why the company mission may have changed again. So when we actually look at the early days, some of you guys may not remember that this is what ChatGPT said on the website. Look at this, guys. When they released it, they said that this is a free research preview. Our goal is to get external feedback in order to improve our systems and make them safer.
And while we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice. This was the image that people saw when they went on the OpenAI/ChatGPT website. Now, the reason I'm showing you this is because it shows you that when they made this ChatGPT thing, it wasn't intended to be this kind of frontier model and this huge consumer product that everyone loved and that got, you know, lots of users. That was just a byproduct of what happened. And the crazy thing about this is that when we saw how this thing erupted on social media, we saw that this was literally the fastest-growing application of all time. We can see that things like Google Translate, Telegram, Spotify, Pinterest, Instagram, all of these other incredible applications, took many, many months to get to 100 million users. But ChatGPT today is still the fastest application to reach 100 million users. Some people could say Threads has surpassed that, but it doesn't really count because they imported users from another app. So ChatGPT did it in 2 months, organically, without doing any crazy marketing, just simply saying, hey, this is a free research preview. This is something that is truly insane. Now, you might be thinking I've wasted your time here, but trust me, this has set the scene, because when you read what Sam Altman and OpenAI are basically stating about the future of the company, I think it's a little bit different than most people think. So basically, you can see right here that one of the things they talk about is the fact that right now there are tons and tons of different AI models coming out. So the interviewer asked him, in that case, or regardless, is that augmented by the fact that it seems, at least at the GPT-4 level... I mean, I don't know if you saw, today LG just released a new model.
There are going to be a lot of, I don't know, comments about how good it is, but there are a lot of state-of-the-art models. And Sam Altman responds saying, my favorite historical analogy is the transistor, for what AGI is going to be like. There's going to be a lot of it. It's going to diffuse into everything. It's going to be cheap, and it's an emerging property of physics, and on its own it will not be a differentiator. Essentially, what he's saying here is that the models will get commoditized. Now, the crazy thing is that I didn't even know LG released a model, but crazily, it came out a couple of days ago, and it performed strongly across benchmarks at the 7.8 billion and 2.4 billion parameter sizes. And the 32 billion parameter version actually achieved the number one spot on the math benchmark, which is pretty insane. Now, the reason I'm bringing this up is because, I don't know if you guys know LG, but most people will know this company as a company that makes fridges and other household appliances. And if they have a small AI research lab, honestly, I'm not

### [5:00](https://www.youtube.com/watch?v=a9YrdEXOjiA&t=300s) Segment 2 (05:00 - 10:00)

sure how big it is, but if they can develop frontier models, it's quite the indicator that we are probably reaching, not model saturation, but the point where frontier models are all getting to around the same level. Now, remember, he compared this to the transistor. And basically, if you aren't familiar with the transistor, transistors were essentially tiny electronic components that can switch or amplify electrical signals. In 1947, transistors were rare and expensive, only used in specialized high-end equipment. They were a major technological breakthrough, and something only a few companies could make. That's quite similar to how LLMs were around 2 years ago, in 2023, with ChatGPT. Decades after transistors were invented, they are now extremely common. There are billions in a single smartphone. They are also very cheap to produce, and they are built into almost every electronic device as a basic component rather than a special feature. So the analogy, as you guys can understand, is basically saying that advanced AI capabilities are probably going to follow the same path. Currently, AI is somewhat limited and special to certain companies. You know, it's a differentiator. If you have super GPUs and you have a frontier model, that kind of makes people flock to your system, in a sense. But in the future, AI will become ubiquitous, affordable, and integrated into everything, and it will just be an expected feature. So just as no company today advertises "our product has transistors," because every single device does, eventually companies most likely won't promote "our product has AI," because it's going to be standard in everything. So you can see right here that before, we had ChatGPT, Claude, Llama 3, and Gemini, but it's quite likely that in the future all of these companies may have their own custom LLMs. And I do think that is possible, considering how easy it is becoming to build GPT-4-level LLMs.
So in the future, maybe 10 years from now, it's probably not going to be that big of a deal. Now, take a look at what Sam Altman says right here. And this is probably the most important point that you can take away from the video. The interviewer asked Sam Altman, "What's going to be more valuable in 5 years? A 1-billion-daily-active-user destination site that doesn't have to do customer acquisition, or a state-of-the-art model?" Basically asking, what is going to be the most important thing in 5 years? And Sam Altman responds, "The 1 billion user site." Now, most people missed this. This is a huge announcement for the AI industry, because it means that OpenAI is probably shifting their focus. And in this interview, Sam Altman did say that they have to execute on a few different things in order to get this right. There isn't one main thing that they can focus on and succeed; they have to succeed on multiple fronts. But him saying that 1 billion daily active users on a site is going to be more important than a state-of-the-art model means it's quite likely that one of the determining factors of OpenAI's future success won't be the kind of models that they build, but will rather be customer acquisition and the customer experience. Which is why I do believe that in the short term and the long term, OpenAI are probably going to prioritize the user experience over everything, and customer growth over everything. What this means is that it's quite likely that in the future, OpenAI might not be at the frontier of AI. I do think that they hold a lead, and they probably will for a long time, but they may actually shift their focus into being a consumer tech company that focuses on the user experience. This is because, in the future, if AI models are everywhere, then OpenAI won't have a differentiating factor, and I don't think they want their differentiating factor to just be the AI models, because in the future those are going to be everywhere.
You can see right here, there was an interview question about ChatGPT that says, you know, no one expected you to be a consumer tech company. My thesis all along was, you were a research lab, and sure, you'll throw out an API, maybe make some money, but you mentioned that six-month period of scaling up and having to become, and seize, this opportunity that was basically thrust into your lap. There's a lot of discussion in tech about employee attrition, and there are some famous names that have left. And of course, it seems to me that no one signed up to be at a consumer product company. If they wanted to work at Facebook, they could have worked at Facebook. And of course, the other core tension is that you have this opportunity whether you want it or not. And that means it's a very different place than it was originally. Now, Sam Altman responds to this basically saying that, you know, I can't really complain, but what I wanted was to get to run an AGI research lab and figure out how to make AGI. I did not

### [10:00](https://www.youtube.com/watch?v=a9YrdEXOjiA&t=600s) Segment 3 (10:00 - 15:00)

think I was signing up to have to run a big consumer internet company. So Sam Altman has been deep in the tech world for quite some time. But here he's basically talking about the fact that, look, I didn't really sign up to do this, but of course this is now something I've been thrust into. Which is, of course, what I'm saying, guys: they were one kind of company, they've since shifted, and it's quite likely they are shifting again in order to ensure that the company stays afloat. Now, you might be wondering, okay, so they've spoken about the fact that the differentiator will no longer be the models, but what will be the differentiator for these companies in the future? This is where Sam Altman gives us the key information. He says, where I think our strategic edge is, is building the giant internet company. I think that should be a combination of several key different services. There are probably three or four things on the order of ChatGPT, and you'll want to buy one bundled subscription of all of those. You'll want to be able to sign in with your personal AI, that's gotten to know you over your life, over your years, to other services, and use it there. And there will be, I think, amazing new kinds of devices that are optimized for how you use an AGI. There will be new kinds of web browsers. There will be that whole cluster. Someone is just going to build the valuable products around AI. So that's one thing. So one of the key things they're saying is that there is going to be value in long-term customer loyalty. And one of the ways they're pursuing that is by focusing on memory. If you guys haven't been paying attention, memory is a huge feature of AI. Imagine if every time you signed into a different website that has AI integrated into it, it knew every single thing about you. Let's say it knew I was doing YouTube. It knew my age. It knew my weight. And let's say, for example, I went over to a health AI.
It could say, "Hm, maybe your job and your schedule are stressing you out. I recently noticed that you've upped the work rate, yada yada. Maybe that's why your health is down this week." All of that context is going to be super important for future AI models. And I think that is one of the biggest things that, you know, Sam Altman is going to focus on. So one of the things you're likely going to see is that these services are going to prioritize you not leaving them, because of course, over time these models are going to be commoditized, and it's no longer going to be really important to have a frontier model. So it's going to be really interesting to see how that shakes out, and I can already see it happening. A lot of people use ChatGPT because, if you have a Plus account, it will, not exactly secretly, but over time, store things in its memory. Sometimes I will ask ChatGPT a question and it will actually cite stuff from my personal life that I forgot I'd told it. It will cite names and people and then be like, this person can do this for you, and I'm like, wow, I didn't actually remember that. So it's super interesting in that regard. Now, you can see right here that he says, you know, most models except the very leading edge will commoditize pretty quickly. So what he's talking about here is the fact that if your model doesn't have a truly significant leap above the other models, you are going to commoditize pretty quickly. Now, what's crazy about that is that it's very true. The gap between frontier models is getting smaller and smaller every day, every week. It feels like there is a new LLM that inches out the other one by 2 to 5%. So far, we haven't really seen any massive jumps in capabilities that lead you to want to use one model drastically over another. Of course, there are special use cases.
For example, for Claude, you probably want to use that model exclusively for creative writing and coding. But other than that, all of the models seem to be about the same. Of course, for specific use cases, I do use different models for different things. For example, I use Grok for figuring out what the best decision is. For some reason, it's just really good at that. Gemini, I use for marketing. For some reason, it's really good at that, I've found. And ChatGPT is just an all-round good model. Now, like I said, most models are going to commoditize anyway. And I do remember when this statement came out, when Satya Nadella actually said this, a lot of people were saying, "It's so over. OpenAI are screwed." You can see he says, "So when Satya Nadella said models are getting commoditized, that OpenAI is a product company, that's still a friendly statement. We're still on the same team here." And he says, "Yeah, I don't know if it came across as a compliment to most listeners, but I think he meant it as a compliment to us." And it's true. He actually said that OpenAI is a product company. And I think that is probably more valuable than the model side, because as these models get commoditized and become more common and anyone can build them, well, not anyone, but any major company with enough GPUs, the real value is going to be having a customer experience that people want to be a part of, not just some LLM, because that's going to be really common in the future. Take a look at Satya Nadella. If you wanted to just

### [15:00](https://www.youtube.com/watch?v=a9YrdEXOjiA&t=900s) Segment 4 (15:00 - 20:00)

prove that out, he says: I do believe the models are getting commoditized. In fact, OpenAI is not a model company, it's a product company that happens to have fantastic models at this point, which is great for them and great for us as partners of theirs. And so, where I think the industry structure is emerging, models by themselves are not sufficient, but having a full system stack and great, successful products, those are the two places. And so, of course, if the company is no longer a model company but a product company, that means they're going to have to monetize the platform in a different way. So one of the things we can look at is how they're going to monetize the platform. The interviewer says, "Deep research. It's amazing, but I'm skeptical about people's willingness to go out and pay for something, even if the math is obvious, even if it makes them that much more productive." I look at this bit where you're talking about building memory. Part of what made Google's advertising model so brilliant is they actually didn't need to understand users that much, because people typed into the search bar what they were looking for. People are, you know, typing a tremendous amount of things into your chatbot. And even if you served the dumbest advertising ever in many respects, and even if you can't track conversions, your targeting capability is going to be out of this world. And by the way, you don't have an existing big business model to worry about undercutting. My sense is this: to counter what everyone at OpenAI signed up for, that's the biggest hurdle. But as a business analyst, this seems super obvious, and you're already late. Essentially, what they're saying here is that, look, OpenAI has probably one of the biggest search engines. And think about it: oftentimes people are searching about different topics, different things to do, different ways to market.
And think about it, OpenAI could just place a variety of different ads on the platform. And of course, even if they can't track how those convert, it's probably a very large opportunity for them to monetize. Meta makes billions of dollars in ad revenue every single year, and so does Google Ads. So it'll be interesting to see how OpenAI manages to do this. But Sam Altman actually says something different. He says, "The thing I'd be more excited to try than traditional ads is, you know, stuff for e-commerce. Of course, there's a way we could come up with some sort of a new model, which is, we're never going to take money to change placement or whatever, but if you buy something through deep research that you found, we're going to charge like a 2% affiliate fee or something. That would be cool. I'd have no problem with that. Maybe there's a tasteful way we can do ads, but I don't know. I don't really like ads that much." So Sam Altman doesn't really like ads. And I think it's because right now the user experience of LLMs is one that is quite fresh. It's quite intuitive, and it's really easy to use. And so I think the ad experience is being delayed until the point where the models are commoditized and you really would benefit from serving ads. So I think that's why he's not doing it yet. But he did elaborate further. The interviewer says, that's always the hang-up. Mark Zuckerberg didn't like ads that much either, but he found someone to do it anyway, and said, "Just don't tell me about it. Make money magically appear." And Sam Altman said, "Yeah, again, I like our current business model. I'm not going to say what we will and will never do, because I don't know, but I think there are a lot of interesting ways that are higher on our list of monetization strategies than ads right now." So it's quite likely that they're going to monetize the model in different ways, but it'll be interesting to see how they manage to do that.
" Now, this interview was super interesting because it actually gives us, you know, an indication that OpenAI, of course, they're going to be focusing on frontier models. That is, of course, going to be something that they're doing because they're still trying to be AGI, still trying to build super intelligence, but they're also going to really focus on consumer products and ensuring people enjoy interacting with the model. And I think this is really important because now something I actually forgot to mention was the deepseek drama. Now I think this is probably one of the largest reasons OpenAI have said you know what our focus is actually going to be consumer products and that is because with Deepseek they've managed to make models for a fraction of the price and people are like wait a minute if we can have a model for the fraction of the price why don't we actually just go ahead and use those models instead and open eye like wait a minute if that is true and that's the future we're headed towards then we probably need to actually prioritize having a good customer experience over having just a breast Frontier models if China manages to catch up or even surpass us. So I do think that this video you know wouldn't be complete without actually mentioning the fact that Deepseek definitely put a spanner in the works for OpenAI as they are probably now forced to change their model rather than thinking just mainly about Frontier AI research thinking mostly about the correct ways to actually go about providing that model in a way that users do like it. So I do

### [20:00](https://www.youtube.com/watch?v=a9YrdEXOjiA&t=1200s) Segment 5 (20:00 - 22:00)

think that a lot of, you know, wrappers around LLM technology are probably going to be built over the next few years. But now let's take a look at some other stuff. They also spoke about something else, which is open source. And here the interviewer says, you said before that the 1 billion destination site is more valuable than the model. Could that flow all the way through to your release strategy and your thoughts about open sourcing? He says, stay tuned. The interviewer says, okay, I'll stay tuned. Fair enough. And he says, I'm not front-running, but stay tuned. So potentially there might be some open source from OpenAI. I personally highly doubt it, but we're going to have to stay tuned. Now, of course, he did talk about GPT-5. The interviewer said, I think you should be hiring more software engineers. I think that's part and parcel, and you need to be moving even faster. And you mentioned GPT-5. I don't know where it is. We've been expecting it for a long time now. And Sam Altman responds, we only got 4.5 two weeks ago. And, you know, the interviewer says, I know, but we're greedy. And this is where he says, you don't have to wait. This new one won't be super long. And I think that goes to show how quickly people are expecting new models. Oftentimes before, it was months and months before you got new model updates. And now it seems like almost every week or two you're getting a new model that can surpass the frontier. So this is a super interesting interview. It shows us that OpenAI's focus is shifting: not just being, you know, a model company anymore, but of course, making sure that they focus on the stuff that's going to be really valuable in the future. This is something that I've been talking about in my community for quite some time now. I do think that having people that use your product and know who you are is probably going to become one of the most valuable things, because when we think about how the future is going to be, it's like software.
Software is going to be really, really popular in the future. But the moat isn't going to be the software, if everyone can create software. It's going to be how many people you can get to actually use your software. That's where the value is going to lie. And I think OpenAI actually realize distribution is going to be super important in the future. And whilst, yes, state-of-the-art models matter, 1 billion daily active users matter more. So a couple of things could happen here: if you have a startup that's doing well, it might get acquired by OpenAI, or there's the potential that OpenAI might make their own version and undercut you, because they've been known to do that. And it's quite likely that we're going to get a range of different OpenAI consumer products, like Sora and others, that are going to allow you to interact with AI in a really interesting way. With that being said, if you guys have enjoyed this video, let me know what you think. This one was super interesting. It flew under the radar, but I managed to catch it. I will leave the TikTok video where I first found out about this, because I want to give credit where credit's due, and you guys should definitely check that page out. But if you guys have enjoyed the video, I'll see you in the next

---
*Source: https://ekstraktznaniy.ru/video/13145*