Sam Altman Sparks OUTRAGE With Controversial AI Comment
Duration: 19:59


TheAIGRID · 22.02.2026 · 16,153 views · 444 likes


Video description
🎓 Learn AI In 10 Minutes A Day - https://www.skool.com/theaigridacademy
Get your Free AGI Preparedness Guide - https://theaigrid.kit.com/agi
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Learn AI Business For Free https://www.youtube.com/@TheAIGRIDAcademy

Links From Today's Video: https://x.com/TheChiefNerd/status/2025184575316471971

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed?

(For Business Enquiries) contact@theaigrid.com

Music Used:
LEMMiNO - Cipher https://www.youtube.com/watch?v=b0q5PR1xpA0 CC BY-SA 4.0
LEMMiNO - Encounters https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of Contents (4 segments)

Segment 1 (00:00 - 05:00)

So, something actually happened in the AI world that we have to talk about. Sam Altman's recent statement is blowing up for all the wrong reasons, and this is genuinely one of the most viral things I've ever seen, so we have to talk about it. You can see here it says Sam Altman compares AI training energy to raising a child, and that's not really an exaggeration. If you look into the entire story, it's pretty crazy, because this is by far the most backlash I've ever seen from one particular statement. So let's take a look at the statement that started it all. Essentially, there was an interview, which I actually covered in my previous videos, where Sam Altman was talking to an interviewer about AI in India. They were basically discussing the standard energy problem, and then Sam Altman (I'll show you the full clip so you can completely understand it) goes on to say that people talk about how much energy it takes to train an AI model, but it also takes a lot of energy to train a human: it takes like 20 years of life and all the food you eat during that time before you get smart. Now, as of recording this video, that clip has around 20 million views. And those aren't positive views; this is probably some of the worst PR I've ever seen in my time covering AI. I'm going to show you the clip, and then we're going to get into the statements that kicked off a pretty fierce Twitter debate. Well, I say Twitter debate, but I would argue it's not even a debate at this point. It's a lot of people, and I guess rightfully so, being mad about this statement, because honestly the backlash is crazy. One of the statements we see here is: "This is the talk of a traitor to the human race." Of course, if Sam Altman is comparing the energy of training an AI model to the energy of raising a human, a lot of people outside the AI space are going to come away with a negative view of AI.
And this is what I'm trying to say. I remember I made a video where I said there is a growing proportion of normal people who really just hate AI, and I don't think Silicon Valley realizes that. I said in that video that if these AI companies continue to talk in a certain way and don't engage with communities about how they're developing their AI models, they will likely end up in a situation where, even if the technology becomes great, the majority of the population will not want AI because of how it is perceived. And you can literally see here that someone called Power to the People says he's comparing human life to a robot: "They will end humanity unless we fight back." Now, the fighting-back part is insanely true, because people are fighting back, and I'm going to show you that after I show you some of the most insane reactions to this. And I say insane because I wouldn't agree with what some of them are calling for; I would say you need discussion. You can see there are some extreme reactions here, where people are quote-tweeting with memes saying, "Shoot that guy." Of course, I'm not advocating for violence, but I think it is very surprising that someone can quote-tweet "shoot that guy" at a CEO and receive overwhelming majority support. Like I said, I'm not advocating for violence, because that is completely wrong. But you need to start looking at how AI is being perceived when the average person saying "shoot that guy" after a statement like this is the one receiving overwhelming support. Now, remember how I said there is going to be a lot of backlash and people fighting back? This is a tweet that basically puts things into perspective. I know it's a short tweet, but it's true, because he said: "They are closing the data centers. Sam, not right now, please."
So, essentially, this is someone who's pro-AI and is basically begging Sam Altman to just stop talking like this, because it messes up the entire strategy. If you want the context: data center project cancellations actually quadrupled in 2025, with 25 projects stopped after sustained local opposition, up from just six in 2024 and just two in 2023. Communities across the United States have been fighting back against data centers because of the toll they take on local resources; people living near data centers report strained water sources, soaring electricity prices, and air pollution. And remember, all of this is not happening in a vacuum. A lot of people online hate AI anyway, because of how it has permeated social media and how it's kind of destroying the social fabric online. Then, of course, you have this statement, where people are like: look, he's literally comparing the energy that goes into your entire life to the energy used to train an AI model. This backlash is basically just adding fuel to the fire. That's why he's like: look, Sam, they're closing data centers right now. For someone in the AI space, it's

Segment 2 (05:00 - 10:00)

probably not wise to make this argument. And if you want to know just how crazy the backlash is, it's so real that a Democrat flipped a reliably Republican seat in the Virginia legislature by running a campaign focused on the burden of data centers. Even Trump has said that he never wants Americans to pay higher electricity bills because of data centers. And you can take a look at this tweet here, which says that $98 billion in planned AI data center development was derailed in a single quarter last year by community organizing and pushback, according to Data Center Watch, more than all disruptions tracked since 2023. Now, I don't want to get off track talking about data centers, but of course that is one of the reasons the AI energy cost conversation is happening. These frontier AI models do take a large amount of energy to train, and the individuals affected by the resulting regional changes are facing real issues because of it. Those are real concerns, and I do think they should be handled in a better way. Now, stepping away from the data center conversation for a moment, let's actually analyze Sam Altman's statement. There is a statement from David Fairchild that is really interesting, because this was probably one of the posts that had over a million views, and it had a lot of support, maybe 50,000 likes and retweets. So this one was super popular. And he says that Altman is not just defending AI energy use; he's smuggling in a whole anthropology where humans are basically inefficient meat computers that you have to pour food and years into before they become useful.
And once you accept that, the next move is obvious: if people are just costly biological training runs, then burning mountains of electricity to build synthetic intelligence starts to feel not only equal but superior, even if it negatively impacts humans. That is dystopian. It makes human development sound like a bug in the system. It makes sacrificing human flourishing for more computational power sound logical. The grid gets strained, prices go up, but hey, humans eat too, so what's the difference? And of course, we all know what the difference is. He clearly says here that the difference is that humans aren't an inefficient line item; they are the point. If your worldview can look at a child growing into an adult and describe it as energy spent to train intelligence, you haven't said something profound, you've revealed a horrifically rotten worldview. And if you didn't understand why people are so outraged by this, because I think some people in the pro-AI space might not, this statement essentially explains why people are so upset. I don't think there is a single human alive who views humans as an inefficient line item. So when Sam Altman says this, and I will show you the full interview clip in just a moment, maybe in two or three minutes, it just looks terrible. Genuinely, you can see that someone says, "We really got to [redacted] these guys, man." Once again, this was another extreme reaction that got a lot of likes and a lot of retweets. And this is why I said this AI backlash conversation is probably only going to get worse as companies start to roll out more products and services.
And I believe they really do need to change how they present this if AI is going to get better, because I think AI can be something that is good for society, but not when it is marketed as something that is going to completely ruin your career, or anything along those lines. Now, you can see here that someone said anyone who talks like this about humans should not be allowed a job that in any way impacts other humans. And then this is where we get into the sociopath conversation, because we've actually had a lot of people say that Sam Altman is a sociopath. I'm not saying that he is, but I couldn't help seeing it numerous times. Someone says he's saying a really big spreadsheet and a baby are morally equivalent, and that this is one of the reasons you don't allow sociopaths like this anywhere near anything important. Now, like I said, every tweet I'm showing you in this video genuinely has a lot of support. And I was diving into this Sam Altman sociopath thing, not to support the claim that this guy's a sociopath or whatever, but there is a lot of information out there showing a history of behavior that isn't exactly the best. If you type "is Sam Altman a sociopath" into Google, you come across an article by Émile P. Torres. It talks about a long history of mendacious, manipulative, and abusive behavior, and about the fact that senior employees at OpenAI have described Altman as psychologically abusive, with some calling him highly toxic and accusing him of pitting employees against each other. Some people who left to work for Anthropic said that Sam Altman was a

Segment 3 (10:00 - 15:00)

person of low integrity who had lied to employees, and that they weren't allowed to say negative things about the company. Altman has repeatedly claimed that he doesn't have a stake in OpenAI, but he benefited from the OpenAI Startup Fund. There was also the fact that Altman was fired because he was "not consistently candid in his communications," and Helen Toner provided several examples of that behavior. So the problem here is that I don't know what's going on with Sam Altman and the statements he's making, but over the years this guy's reputation has just been getting worse and worse, because the narrative, or the framing, or however Altman is currently being portrayed, just doesn't look good from the outside looking in. There haven't really been many moments where people looking at Sam Altman from the outside see something in a positive light. Now, maybe that's because negativity spreads more than positivity, which is pretty true. But between him being fired from the company, people calling him a sociopath, and all the videos I've seen about the supposed lies of Sam Altman, it just doesn't look good. So a statement like this isn't helping the situation when people are already calling you a sociopath. Overall, public sentiment matters, especially when you're the CEO of one of the largest AI companies in the world, one that people interact with on a daily basis, and this just doesn't help your case. And so, of course, one of the most interesting things is the math.
So basically, this guy decided to run the actual numbers on the entire situation, because of course that is one of the main points, and he says that if you run the math, it actually doesn't help Sam Altman's point. Training a human for 20 years costs roughly 17 megawatt-hours (MWh) of food energy in total, and that's every calorie of every meal for two decades. But training GPT-4 cost 50,000 to 60,000 MWh of electricity; that's about 3,000 times more energy than raising a human to adulthood. And the problem is that GPT-4 is already obsolete. Nobody uses that model anymore. The next model will cost even more, and the one after that will cost more again. Each generation of hardware strands the last: H100 rental rates collapsed by 60 to 70% because Nvidia keeps shipping chips that make last year's chips worthless, which is super true given their crazy rate of development. And Altman is now asking for 10 gigawatts for Stargate; that is the total power consumption of New York City. PJM's last capacity auction fell 6,623 megawatts short of reliability targets, and prices exploded 11x overnight. Schneider Electric projects a 175-gigawatt national shortfall by 2033. Nobody is debating whether AI works. The question is whether the man comparing a data center to a lunchbox has adequately explained who pays for the grid he needs and what happens when it is not there. Essentially, he's saying that Altman's comparison is off by roughly three orders of magnitude: training one AI model uses about as much energy as raising 3,000 humans to adulthood, and that's just GPT-4, while the models being trained now are significantly larger. The last line is essentially the kill shot: the analogy isn't wrong because it's sloppy, it's wrong because the numbers and comparisons don't add up anyway. And so you have a situation where you're making a point and the math is wrong.
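If you want to sanity-check that "3,000 times" figure yourself, here is a minimal back-of-the-envelope sketch. Both inputs are assumptions for illustration, not measured data: roughly 2,000 kcal of food per day for 20 years on the human side, and the midpoint of the 50,000 to 60,000 MWh GPT-4 training estimate quoted above on the model side.

```python
# Back-of-envelope check of the ~3,000x claim.
# All inputs are rough assumptions, not measured data.
KCAL_TO_KWH = 1.163 / 1000  # 1 kcal ≈ 1.163 Wh ≈ 0.001163 kWh

# Human: ~2,000 kcal/day, every day, for 20 years
human_kwh = 2000 * 365 * 20 * KCAL_TO_KWH
human_mwh = human_kwh / 1000  # comes out to roughly 17 MWh

# Model: midpoint of the 50,000-60,000 MWh estimate cited above
gpt4_mwh = 55_000

ratio = gpt4_mwh / human_mwh  # roughly three orders of magnitude
print(f"human: {human_mwh:.1f} MWh, model/human ratio: {ratio:,.0f}x")
```

Plug in different calorie or training estimates and the ratio moves, but it stays in the low thousands, which is the "three orders of magnitude" point the tweet is making.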
I mean, even though, like I said, I will show you the full context, and even though the clip is slightly taken out of context, that doesn't make it that much better. Now, as I said before, we have the real issue of the framing problem. Mert here is essentially making a strategic PR argument, not a moral one. He's saying that AI leaders are so bad at this that they are accidentally creating their own political opposition. He puts it like this: "Current AI leaders are so culturally autistic that they might actually cause the socialists to gain power. Instead of framing AI as the next aspirational race, they framed it as the orphaned newborn of Terminator and Ultron." That is one of the truest statements, and I would agree with this opinion wholeheartedly. When you think about what AI is, it could literally be sold to the public as an inspiring, aspirational project, like the space race: something that lifts everyone up, creates national pride, and pushes humanity forward. Instead, people like Altman are marketing AI in a way that makes it sound like a threat to ordinary people: comparing humans to inefficient machines, dismissing energy concerns, and treating people as line items. And of course, the Terminator and Ultron part is the punch line, because if you keep making people afraid of the technology and resentful of the people building it, and you scare them enough, they are just going to vote for whoever promises to rein it in, hence causing the socialists to gain power. And the point

Segment 4 (15:00 - 19:00)

here is that even if you are pro-AI, you probably should be a little bit frustrated with Sam Altman, because statements like this just hand ammunition to the people who want heavy regulation or redistribution. The tech is pretty decent, but if you're going to be this tone-deaf, you are going to end up ruining it for everyone. And when you take a look at what more AI optimists are saying, I think this view is clearly being supported. This guy says: "I build AI for a living. I believe in what we're building, but this kind of rhetoric makes my work harder and more dangerous." Sam Altman comparing human development to model training is tone-deaf and strategically reckless. People are losing jobs, which is true. People are getting angry, and they're seeing AI as an enemy instead of a solution. Some are planning to destroy data centers and go after the people who build this stuff. That anger and backlash might not be reaching your floor, but it does reach the engineers and builders doing the actual work. The CEO of the most visible AI company should not frame humans as inefficient compute units and should not be anti-human. Your role as a leader is to show how AI solves real problems for humanity, not to reduce human life to an energy accounting problem from a comfortable position. If someone working in AI gets hurt because the public narrative has turned hostile, leaders who choose dehumanizing framings bear responsibility for that too. He goes on: "I'm a techno-optimist and I believe that AI enhances human capability. I work with this new form of intelligence every day and I genuinely respect what it is. It is real, significant, unlike anything that has existed before. But I also believe in human excellence. We have to accept that these are two fundamentally different forms of intelligence working together." And the last statement he makes here, I think, is really important: the real techno-optimist position isn't that AI is cheaper than humans.
It's that we now have two forms of intelligence on this planet, and the combination is more powerful than either alone. If you're the leader of OpenAI, then whether you choose it or not, you represent everyone building in AI right now. Every word you say shapes how the world sees this technology and the people behind it. Please act like it. And I would agree with that statement wholeheartedly. If you're going to be the face of OpenAI and you're going to be out there making statements, you need to be extraordinarily careful to frame AI, or at least understand that AI should be framed, as a technology that helps humans and works with them side by side, not as some crazy technological advancement that is just going to replace every human, make people lose their jobs, and treat humans as inefficient meat items. And this is the problem with Silicon Valley: for the longest time, a lot of the leaders there, and I'm not talking about the average person, have been seen as out of touch with the rest of society. You can see here that another AI optimist says this alone may have lost his trust in Sam Altman to build a good AI company: "I understand the point he's trying to make, but this is trying to break down people and models into cost for output and ignoring the value of humanity itself." And individuals now seem to be rooting against OpenAI. This person says, "I sincerely hope OpenAI goes down in flames; something seems messed up in their heads." And I think you have to understand that if you're a company trying to raise money from investors, trying to IPO, trying to get your company out there to essentially power the next industrial revolution, it doesn't make sense to be saying things that can be taken out of context so that people rally against you. And I want to know if you guys have seen the full context.
I wanted to include this just so that it isn't a complete narrative shift. Take a look at the full context, because it comes just before Sam Altman makes that point, and I think the statement isn't as bad when you add the full conversation, although he still needs to change the framing. So take a look at the full context; this was said just before Sam Altman made the comparison to humans. And yeah, that is the full context of the clip. When you see it like that, his actual argument is more nuanced than the cut-out clip. He's not saying that training an AI and raising a human cost the same. He's saying that once a model is trained, the per-query cost of answering a question is potentially more energy-efficient than a human doing the same task, which I guess you could argue is a narrower and more defensible claim. But it's still kind of a dodge, because when you think about the energy as a whole, the total training cost doesn't disappear just because you reframe the comparison as per-query. Someone still paid 50,000 megawatt-hours to train GPT-4, and the next model will cost more. When you narrow the frame to per-inference, you ignore the cumulative infrastructure and footprint that people are actually concerned about, which he himself admits is real. So the full context does make him look somewhat better on the specific inference comparison, but worse on the wider claim. And the "training a human" thing was obviously going to go viral.
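To see why the per-query framing flatters the total, here is a small illustrative amortization sketch. Every number in it is an assumption for illustration (the serving volume and the marginal energy per query are made up, not OpenAI figures); only the 55,000 MWh training estimate echoes the range cited earlier.

```python
# Illustrative amortization sketch; every number is an assumption,
# not a published OpenAI figure.
TRAIN_MWH = 55_000       # assumed one-time training energy (midpoint of cited range)
QUERIES_PER_DAY = 1e9    # assumed serving volume
DAYS = 365               # assumed amortization window

train_wh = TRAIN_MWH * 1_000_000        # MWh -> Wh
total_queries = QUERIES_PER_DAY * DAYS

amortized_train_wh = train_wh / total_queries  # fixed cost spread per query
marginal_inference_wh = 0.3                    # assumed marginal energy per query

print(f"amortized training energy: {amortized_train_wh:.3f} Wh/query")
print(f"assumed marginal inference: {marginal_inference_wh:.3f} Wh/query")
```

At a high enough serving volume, the amortized training slice shrinks to a fraction of a watt-hour per query, which makes the per-query comparison sound cheap. But the 55,000 MWh still had to be paid up front, and every new frontier model resets that counter, which is exactly the cumulative footprint the per-inference framing hides.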
