OpenAI Researcher QUITS — Says the Company Is Hiding the Truth - (It Actually Worse Than You Think)
10:39


TheAIGRID · 16.12.2025 · 56,586 views · 1,664 likes


Video description
Check out my newsletter: https://aigrid.beehiiv.com/subscribe 🐤 Follow me on Twitter: https://twitter.com/TheAiGrid 🌐 Learn AI with me: https://www.skool.com/postagiprepardness/about Links from today's video: https://futurism.com/artificial-intelligence/openai-researcher-quits-hiding-truth Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed? (For business enquiries) contact@theaigrid.com Music used: LEMMiNO - Cipher https://www.youtube.com/watch?v=b0q5PR1xpA0 CC BY-SA 4.0 LEMMiNO - Encounters https://www.youtube.com/watch?v=xdwWCl_5x2s #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (3 segments)

Segment 1 (00:00 - 05:00)

So, some OpenAI researchers have quit, saying that the company is hiding the truth. No, this isn't clickbait. This is a real article, and OpenAI has actually hidden the truth this time. So, it's important you know what that truth is. It's this article from Wired that is pretty crazy. It talks about how an OpenAI staffer quit, alleging that the company's economic research is drifting into AI advocacy. Essentially, four sources close to the situation, probably OpenAI insiders, have started claiming that OpenAI is hesitant to publish the real research because it's so negative on the impact of AI, while the company says it has only expanded the economic research team's scope. What they're saying here is that OpenAI is emphasizing productivity gains while downplaying the job losses, framing disruption as temporary or manageable, and avoiding publishing research that could fuel regulation, spark public backlash, and slow adoption. This is genuinely crazy, because if OpenAI is supposed to be a fully transparent company, they're supposed to be publishing that research. But as soon as that research goes against their best interest, they simply stop talking about it. And you have to understand why this is so important: just because something is against your best interest doesn't mean you get to withhold that information. I'm going to show you how other companies have taken a different approach and why this just looks terrible for OpenAI. And understand that this isn't some small situation. It got to the point where people were so frustrated that at least two employees working on the economic research team quit entirely. Now think about that, guys.
In order for you to quit a company like OpenAI, where they're probably paying you a very decent salary, where there is probably some sort of stock compensation, and where the company has been growing in valuation, there has to be a pretty serious ethical reason. Most people don't walk away from a heavy six-figure salary without a good reason. So what did those reports have in them, to the point where they weren't being published and those researchers decided to simply quit? I think that shows a clear breaking point, because we've seen this pattern before, which I will talk about later. And honestly, the moments when OpenAI employees have quit have been key moments when key things were happening inside the company. You can see one of these employees was an economics researcher, Tom Cunningham. In his final parting message, shared internally, he wrote that the economic research team was veering away from doing real research and instead acting like its employer's propaganda. And that is terrible. Are we reaching the point where, when OpenAI publishes a research report on the economy or even AI adoption, we can no longer trust it, because the researchers hired by OpenAI clearly have some kind of incentive to nudge that research in a way that favors OpenAI's position? That is, of course, unethical. I'm going to take a wild guess and say that's probably what OpenAI may have subtly asked those individuals to do, and that's probably why they felt the need to quit. I mean, why else do you quit, if you aren't being asked to act like the propaganda arm in this situation? Honestly, that's the only thing that makes sense to me. So, this entire article is showing me that OpenAI is realizing just how bad AI could be for the economy in various scenarios.
But, of course, they're simply choosing not to publish that research because it would damage their public image even further. Rather than be transparent, they're choosing to hide it. Now, the crazy thing about this is that it doesn't even seem like OpenAI is building solutions. OpenAI's chief strategy officer, Jason Kwon, actually sent a memo saying that the company should be building solutions and not just publishing research on hard subjects. "My point of view on hard subjects is not that we shouldn't talk about them," Kwon wrote, "rather, because we are not just a research institution but also an actor in the world, the leading actor in fact, that puts the subject of AI into the world, we are expected to take our agency for those outcomes." And that is very true. OpenAI shouldn't just be publishing the research; they should be pioneering how you actually implement AI in a way that is beneficial for society. Remember, AI doesn't have to be bad for the economy or for society. It can be implemented in a way that is actually good for society. Right now, we are simply taking the reckless path. Now, if you're wondering what OpenAI is actually doing, I found this document called "AI at Work: OpenAI's Workforce Blueprint." Basically, it's a document acknowledging that AI will increasingly reshape work over time, and it lays out some of their plan to implement AI into the workforce

Segment 2 (05:00 - 10:00)

in a way that is somewhat transformative and not just replacing everyone. You can see here it says that if these trends accelerate quickly, they could push the first rung of the ladder out of reach for many new graduates at a faster rate than new jobs are created, requiring a strong response from companies and governments. Well, governments are probably lobbied by the AI companies anyway. And so they talk about what that response looks like. It begins with accelerating AI education and training efforts so that new graduates and the rest of the workforce would be better prepared for what's coming next. Now, I don't want this video to be a complete downer on OpenAI, although not publishing the research is absolutely terrible. They do say that 2 million Americans have already engaged with the OpenAI Academy platform. Through OpenAI certifications, they're going to help 10 million Americans improve their AI skills by 2030 and give their employers the confidence they need to hire with trust. And through the OpenAI jobs platform, they're going to help future-proof the workforce by giving millions of people a path to stability and growth through better jobs and long-term security. The only thing I have to say is that I hope they actually live up to these expectations, even if their company is struggling, because these AI implications are clearly coming. Now, it's important to note that this isn't the only time people have left OpenAI. Like I spoke about in the beginning, people have left OpenAI several different times for various different ethical reasons. Someone left the superalignment team basically saying that OpenAI was focused on shiny products instead of actually focusing on safety.
Then, of course, another researcher, Miles Brundage, left, basically saying that he wanted to publish his own research that OpenAI wasn't going to publish. And one of the things I found interesting, remember how in the beginning I spoke about the fact that OpenAI doesn't want to publish research because it's not in their best interest? It's so different with a company like Anthropic, whose CEO is completely open in stating that AI job loss is going to be real and we need to be prepared for it, in the sense that at least 50 to 60% of those white-collar jobs are going to be gone within the next 1 to 5 years because AI is already exceeding the average human at those tasks. So, I think that contrast is stark, because it shows a level of transparency that is just going to lead more people to want to use Anthropic's products and trust them more. If they can actually say the truth about what's going on, not hide the studies, and not shy away from the reality, it just doesn't look good for OpenAI.

"I think about AI technology and the fact that as it gets better, again, if we don't handle this well... we have a choice here. I'm not saying we're fated for the bad outcomes, but if we don't handle this well, I am worried that fast growth could be coupled with job displacement for a lot of people. You could get a much bigger pie, but that pie could be concentrated in a smaller number of people, and there could be some people who don't get any. That is my concern."
"And specifically, if we look at jobs like, as you said, entry-level white-collar work: I think of people who work at law firms, like first-year associates. There's a lot of document review; it's very repetitive, but every example is different, and that's something AI is quite good at. Think of entry-level work at a consulting company, typical entry-level white-collar work. Think of a lot of entry-level administrative work: people who coordinate things, who schedule things, who organize things, who take notes. Think of entry-level work at a finance company, doing routine analysis of financial documents. These are the workhorses of entry-level white-collar labor, and yet they're things AI is already pretty good at and rapidly getting better at."

Now remember, the AI backlash, as I spoke about in yesterday's video, is only going to keep getting worse. And this is an article that I literally just came across that I didn't even know existed. It talks about the fact that as generative AI continues to spread throughout society, the pushback against the technology and its negative impacts are going to grow in tandem. You can see one of the contributors talks about a new sort of ambient animosity towards AI systems. And it talks about the fact that, in the book about Luddites rebelling against worker-replacing technology, the AI companies have basically speedrun that; it's like a Silicon Valley speedrun. Now, of course, they talk about the fact that the negative response online is indicative of a larger trend right now. Although a growing number of Americans use ChatGPT, many people are actually sick of AI's encroachment into their lives and are ready to fight back.
And if you watched yesterday's video, where I spoke about this and the fact that AI CEOs are probably going to change how

Segment 3 (10:00 - 10:00)

they approach public speaking, you'll understand that this is relatively true if you've been online and you've seen how people talk about AI. But let me know what you think about this. Do you think it is crazy that someone actually quit OpenAI, saying they are not publishing the research, or do you think this is just even worse for OpenAI in terms of their lack of transparency? There have been so many different scandals and so many different things, and I've got to be honest, it just does not look good for OpenAI moving forward. There are just so many things where I feel they never tell us the full picture, and I really do hope they change that in the future, because their public perception right now is most certainly at an all-time low. Let me know what you think, and I'll see you guys in the
