🎓 Learn AI In 10 Minutes A Day - https://www.skool.com/theaigridacademy
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Want to learn even more AI? https://www.youtube.com/@TheAIGRIDAcademy
Links From Today's Video:
https://www.nytimes.com/2026/02/11/opinion/openai-ads-chatgpt.html
Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
Was there anything I missed?
(For Business Enquiries) contact@theaigrid.com
Music Used
LEMMiNO - Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO - Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s
#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
Table of Contents (5 segments)
Segment 1 (00:00 - 05:00)
So an OpenAI researcher has quit and said that they're making a big mistake. Let's get into it. The article is from the New York Times, a guest essay from Zoe Hitzig, a researcher who spent two years at OpenAI helping shape how AI models were built and how they were priced, and she quit the same day the company launched ads. Most of you probably don't know this, because you probably have an active ChatGPT subscription, but on the free version of ChatGPT they have actually introduced advertisements. You may not have seen them yet because they're still rolling out worldwide. But this is something we should understand completely, because it's going to dramatically change how these applications work. I know it doesn't seem like it right now, but she's clearly saying that OpenAI is making a mistake. But why? Why is this the case? You can see here that this is rather troubling to her, because she said this week confirmed her slow realization that OpenAI seems to have stopped asking the questions she joined to help answer. Essentially, she joined to help figure out things like: how do we build AI safely? How do we price it fairly? What guardrails do we need before it's too late? And slowly but surely, as the company evolved, those conversations disappeared. Not because they found the answers, but because those answers didn't matter anymore. The priority shifted from "how do we get this right?" to "how do we make as much money as possible?" And you have to understand that when you join a company like this as a researcher with certain values, it's about more than just money.
It's about ethics, morals, and fulfilling the purpose the company started out with. Remember, guys, this is why everyone has an issue with OpenAI: it was originally a research company that is now a for-profit company. And it's not like they're just trying to make money to stay alive; they're doing it at the expense of users. Now, this isn't just some fringe AI researcher who's gone off on her own. It marks a wider trend of researchers leaving OpenAI over safety issues and over the shift from a research-focused mission to a profit-driven one. I'm not sure if you guys were on the channel back in 2024, maybe a year and a half ago, when I documented Jan Leike leaving the company. This guy was a proper researcher, a longtime alignment researcher, and he moved to Anthropic because he had been disagreeing with OpenAI leadership about core priorities for some time and eventually reached a breaking point. He said that OpenAI wasn't allocating enough resources, especially compute, for AI safety work, and there were reports that his team, the one focused on aligning superintelligent AI, never got close to the promised 20% of OpenAI's total compute budget. So he basically said that safety culture and processes had taken a back seat to profit. It's completely understandable that those individuals left, and now here is another person leaving, again citing issues. For this one it's maybe not AI safety specifically, but it's still an issue that will affect wider society, which is pretty concerning: when the people with the strongest moral compass feel they need to leave, and to go public about why, you have to ask what that says about the company.
Sometimes you have to wonder who is still at the company. Are the incentives geared towards helping the user, or just towards driving a profit? Now, I do remember the last time I commented about ads, a lot of people said ads are completely fine. But I think a lot of people are missing the mark here. The thing is, and this is what I want people to take away from this video after doing some research, the key point is that ads are not inherently bad. Ads can be a critical source of revenue for a business. The problem is how those ads are implemented. And this is what she says: ads are not immoral, they're not unethical, and AI is expensive to run, so the money has to come from somewhere. The problem is how you put them in. That is where the issue starts to arise. The critical point she makes is that when you think about how ChatGPT was founded and how it was being used, users were using it with no precedent, because we believed that what we were talking to had no ulterior agenda. You're interacting with something that's adaptive and conversational; you talk to it with your voice, and a lot of people reveal their most private thoughts to it. It's a situation where you're talking to, I guess you could say, the perfect companion: it's not going to judge you. It's not going
Segment 2 (05:00 - 10:00)
to put you down. It's going to help you out a lot. And this entire relationship was built on that precedent. The problem is that the precedent is now changing. Advertising built on that archive basically creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent. That's verbatim from what she said, and that's the critical point: this fundamentally changes how people are going to interact with AI in the future. I'll get into more about the incentives, and you'll see why this is leading towards something dystopian; I'm pretty sure that's why she left before any of that stuff started to happen. So it goes on, basically saying that when you look at funding AI, most people say there are only two solutions: either you restrict access to a select group of people wealthy enough to pay for it, or you accept advertisements, even if it means exploiting users' deepest fears and desires to sell them a product. And she says she believes that's a false choice. Tech companies can pursue options that keep these tools broadly available while limiting the company's incentives to surveil, profile, and manipulate their users. There are options, which we'll get into later, but the problem is that the incentive to surveil, profile, and manipulate users is just too big. And this is why, I'm guessing, she resigned. When you look at the incentive here, the structure of the company, the nature of the economy, and how things are moving, I find it hard to believe that the current restrictions on how invasive the ads are won't be changed. You can see here she says that OpenAI says it will adhere to principles for running ads on ChatGPT.
The ads will be clearly labeled, will appear at the bottom of the answers, and will not influence responses. And she says: I believe the first iteration of ads will follow those principles, but I'm worried that subsequent iterations won't, because the company is building an economic engine that creates a strong incentive to override its own rules. And this is the key thing. Remember the quote: "If you show me the incentive, I will show you the outcome." If you want to understand what people and companies are going to do, look at what they're most incentivized to do. And in this scenario, it is very clear that OpenAI's incentive is not only to use that data in a way they probably shouldn't, but to use it to make as much money as possible, even if they end up manipulating the user in some cases. And take a look at this: OpenAI is a company that is running out of money. They are in a very dire situation at the moment. It's a pretty unique one, because they are an AI company, and I do think they're going to survive all of this anyway. But think about a company that started as a nonprofit saying, "We're going to build this for the benefit of humanity." Then in 2019 they switched to a capped-profit structure and even said that profits would be limited. Then in October 2025 they converted into a for-profit public benefit corporation called OpenAI Group PBC. And then, of course, in the future you have an IPO. But think about this, guys: OpenAI's revenue in 2025 was about 12 to 13 billion dollars, which sounds massive. But remember, they're burning through billions of dollars every year, and they don't expect to be profitable until 2030. So the problem is that you have a company trying to give us a product, but they also need to make a ton of money.
They've committed to not only Stargate, which is a $500 billion AI infrastructure build of massive data centers and compute chips; they've committed to spending a trillion dollars. And it's not that they won't ever earn that revenue; we can talk about how impactful AI is going to be. The problem is the incentive: if you introduce ads into a product where 800 million people share their deepest thoughts every week, and they want you to believe it's about making the product accessible, it's hard to believe they wouldn't use that data to make as much money as possible. The incentives just don't line up. The incentives are pretty clear: they've got an IPO coming, they're going to have shareholders, and in the future OpenAI will be a company that has to focus on more aggressive ads, more engagement optimization, probably even more sycophancy to keep people coming back. When you think about how things are lining up, it just doesn't look good for the future. And that is the issue. Many articles wonder if OpenAI can meet the trillion-dollar mark, and you can see they are raising another hundred billion at an $830 billion valuation. Now, the thing is, guys, this isn't just some OpenAI hate train. Of course these companies are going to make large amounts of money; it's how they're going to make that money that's the issue. We've seen this playbook before. Like the article says, in the early years, and you might not remember or know this, Facebook promised users that they
Segment 3 (10:00 - 15:00)
would be able to control their data and vote on policy changes. Over time, those commitments eroded. And it's crazy, because they let users vote on privacy policy changes and marketed it as "you're in control," and then they simply killed that over time. It happened gradually. Facebook didn't become evil in one day; it was small changes at a time, one blog post saying this, another blog post saying that, and over time it became an $80-billion-a-year ad machine. And think about it: ChatGPT is probably going to go the same way. Right now they're saying, look, the ads are clearly labeled, it's all transparent. And it seems like OpenAI might just be following that same playbook: clearly label things, give people honesty, and over time slowly change it for maximum profit. And the problem is that, as she points out, some of this is already happening. This is a company that is clearly trying to grow, and it has been reported that OpenAI already optimizes for daily active users. One of the key ways it might do that is by encouraging the model to be more flattering and sycophantic. And that kind of optimization has really bad effects on people. Seriously, I think at scale it could do serious damage to society. Before I get into why this is so bad, take a look at how social media has ruined people's attention spans, probably caused depression, and given a lot of people anxiety. Some countries have even gone as far as rolling back social media, banning it under a certain age, because we know just how bad it is for people. But how long did that take? Ten, fifteen, twenty years.
But the problem now is that with LLMs and AI, if you have an AI assistant that is optimized for engagement, it's going to be the most engaging thing on earth, because it's basically going to be like another human. And remember, guys, the incentive is to make these LLMs sycophantic, because you get more people coming back, more ad revenue, the numbers go up, the shareholders are happy, the stock value goes up. That's how companies are incentivized. But the problem is that LLM psychosis already exists, and it's pretty bad. And the thing about LLM psychosis is that I don't think people should be so arrogant as to assume they couldn't get it. The example I'm showing you here is that the former DeepMind engineering lead David Button has become a cautionary example of LLM-induced psychosis. In brief, Button reportedly became convinced that a large language model had helped him crack aspects of the Navier–Stokes equations, a notoriously hard unsolved problem in mathematics and physics. Commentators described this as a kind of LLM psychosis: he treated the AI-generated output as if it were a genuine breakthrough rather than hallucinated or incorrect math. The concern was raised because a lot of people said he was confused, but the problem is that he was very convinced this was a novel solution, and there were key signs that this was probably LLM psychosis.
Look, I don't want to ramble on forever, but the point I'm trying to make is this: if a former Google DeepMind lead engineer, someone who is presumably pretty intelligent, can potentially fall victim to LLM psychosis, what do you think happens to a large swath of the population, especially those who are vulnerable and probably need to talk to someone, but don't have the money or the infrastructure to see a therapist, so they use a free chatbot? A chatbot that is optimized for sycophancy, optimized for engagement, optimized to keep people coming back. And remember, that's probably how OpenAI is going to be structured, because if they want to grow the company and make back the revenue, that's what they'll have to do. Think about that across millions and millions of people as the company grows, and how that will impact society. We've already seen it with social media. When you put all these factors together, you understand why she didn't want to be a part of this. Now, you might be thinking: okay, we understand how bad this could be, but what are the solutions? We don't seem to have any other choices; it's either ads or a subscription, and most people are not paying for AI. So what do we do? Well, there are some solutions, and the essay talks about three of them. One approach is to use profits from one service or customer base to offset losses from another; basically, what if big businesses paid for your AI access? Think about it like this: companies using AI to replace human workers at scale would pay a surcharge that funds free AI for everyone else. And if you're thinking that sounds absurd, it's not, because we already do this. Phone and internet companies pay into a fund so rural areas get affordable broadband. Your
Segment 4 (15:00 - 20:00)
electricity bill includes a charge that helps low-income households. So rather than making you the product with ads, or locking the best models behind a $200-a-month paywall, you can just make the businesses pay for it. Think about it: these companies are saving large amounts of money that used to go to actual humans getting paid salaries. So they should pay extra, and that extra money goes into a pot that helps keep AI cheap or free for regular people. So when people like Sam Altman and all these CEOs say it's either ads or expensive subscriptions, that isn't true, because another model does exist. The only problem is that no one in Silicon Valley wants to talk about it, because it means corporations pay more instead of you giving up your private conversations. Now, there is a second solution, and we need to talk about this one. It's maybe not more realistic, but it does sound like something that could really happen. Right now, OpenAI's principles are literally just a blog post. There's no legal obligation, and there's no independent body checking if they follow through. It's basically Sam Altman and OpenAI writing "trust me, bro" on a web page. When has that ever worked? Facebook had blog posts too. Google had blog posts. All of these companies start with blog posts. Now, there's a German example that's pretty relatable. In Germany, there's a law that says if your company is big enough, like Volkswagen or Siemens, your workers get up to half the seats on the board that oversees the company. This is not a suggestion, not a nice-to-have; it's the law. So imagine OpenAI had to give actual users and independent safety researchers a seat at the table when they make decisions about your data. That is a different game entirely.
And remember, guys, this needs to be independent oversight, because imagine a gambling company setting its own gambling-addiction rules; the incentives just don't make sense. The goal is to have an independent body ensure that things are fair. Now, if you look at companies that have actually gone through with this, Meta did. They created an oversight board of independent experts who can overrule the company on its content decisions, though it was criticized for being too weak. But the point is that it exists, and OpenAI doesn't have anything like it. They're rolling out ads into the most intimate digital space ever with zero external accountability, and nobody outside the company has binding authority over what they do with your conversations. So she's not saying ban ads. She's saying: let's be honest, ads are probably going to happen across the board. But if they're going to happen, you need an independent board that can actually say no. Not advisers, not consultants; people with legal power who represent the users and not the shareholders, because once that IPO hits and Wall Street is involved, there are going to be too many incentives to just pump the price. This is probably the most realistic solution: some kind of board that ensures OpenAI doesn't use the data in an adverse way. Now, there is a third approach, and it's kind of similar to the second one. Imagine that instead of OpenAI owning all your conversations, there's a completely separate organization, like a legal vault, that holds all of the data, and OpenAI can't touch it without permission.
Advertisers can't touch it either, and it's controlled by a board that we, the users, would help elect, with a legal obligation to act in our interest, not OpenAI's or the shareholders'. That's what a data trust is, and there's already a Swiss example: MIDATA is a Swiss cooperative that lets members store their health data on an encrypted platform and decide, case by case, whether to share it with researchers. So this is something that already works. But under this model, OpenAI wouldn't be able to train models on the data whenever they want. They couldn't mine your 2 a.m. mental-health conversations to build better ad targeting. They'd have to come to the trust, explain what they want, and get approved. That kind of flips the power dynamic. Right now, OpenAI holds all the cards; if that flipped, things would change. The problem is that this is never going to happen voluntarily. OpenAI's entire valuation, $830 billion and heading towards a trillion, is built on the assumption that they own the most valuable dataset of human thought ever assembled. Do you think they're going to voluntarily hand that over to an independent body? That's like asking a gold mine to let the local village decide who gets to dig. It just doesn't make sense. So although we can dream about the solutions, we have to be realistic about what is probably going to happen here. She ends by saying that we still have time to work to avoid the two outcomes we fear: a free technology that manipulates the people who use it, and one that exclusively benefits those who can afford to use it. And I guess we do need a solution, because otherwise, as things happen, we're just going to be filling in the potholes in the road.
It's just going to be pretty crazy in the future, and you can imagine how dystopian things could get. Now, one of the things I wanted to play out is what happens if OpenAI just says, you know what, screw it, we're doing this anyway. And I think this is why I say this probably will happen anyway. So, I'm not sure if you
Segment 5 (20:00 - 23:00)
guys remember this, and in fact you probably do, but Facebook got hit with a $5 billion fine from the FTC over the Cambridge Analytica scandal, which was basically about privacy violations. And that sounds massive, a $5 billion fine from the FTC. But the fine was less than 6% of their annual revenue, and the crazy thing is that their stock actually went up when the fine was announced, because Wall Street was expecting it to be worse. It was basically considered the cost of doing business. They paid it, nothing changed, the ad machine kept running, the data collection kept going, and Mark Zuckerberg is still there. So think about it like this. The reason I use Facebook is that they're a big tech company: they committed privacy violations, did some shady stuff, got fined, and kept going. Now, let's say OpenAI does exactly what everyone's warning about. Let's say they exploit the data, erode the principles, and use your conversations to sell you antidepressants with the most insane ad targeting. Best case scenario, OpenAI gets sued or fined. And let's say it's a massive fine, $10 billion. They're targeting a trillion-dollar IPO and raising a hundred billion right now; $10 billion is just a rounding error. Sam Altman will just come out and write a blog post saying, "We've learned from the past. We're committed to doing better." They'll pay the fine and the ad machine will keep running. So think about this: there is no comprehensive AI advertising regulation anywhere in the world. The EU AI Act does cover some things, but it wasn't written with "your chatbot therapist is now serving you ads" in mind. And in the United States, there's basically nothing. So OpenAI is probably just going to follow the Facebook playbook. They're doing it in a regulatory vacuum where the rules haven't even been written yet.
Think about this, guys. Remember, if you're watching this video, you're likely early in AI. By the time governments figure out what to legislate, OpenAI will be a public company with a trillion-dollar market cap and an army of lobbyists, just like Facebook, just like Google. And it's just going to be accountability theater. Every tech company does this: they get caught, they do the apology tour, they face Congress, some senator who can barely use email asks them some tough questions on camera, they promise to do better, they pay the fine, and then they go right back to doing the exact same thing. That's the reality of the situation. Now, I did see a comment which says: let's get this straight, OpenAI and the rest are doing their best to unemploy everybody, and their best bet for this is making ads. We're doomed. And she actually responded, saying: I do find it ironic that companies claiming they're transforming the entire economy, curing cancer, and building superintelligence can't come up with a more creative business model than ads. And if you're wondering why people are hating on ads so much, it's because OpenAI kind of shot themselves in the foot. They said, look, we're never going to do ads; ads are a last resort. In Sam Altman's own words: "I kind of think of ads as like a last resort for us for a business model. I would do it if it meant that was the only way to get everybody in the world access to great services. But if we can find something that doesn't do that, I'd prefer that." And now they're doing exactly that. So it's kind of like: you're the one who said this is awful. You stood on your high horse, and now you're having to get off it. So don't be surprised when people throw stones at you. Let me know what you guys think about this.