OpenAI Just Hit a MASSIVE Roadblock ...

TheAIGRID · 27.04.2025 · 39,215 views · 836 likes


Video description
Learn AI Free for the first 30 days: http://brilliant.org/TheAIGRID
Join my AI Academy: https://www.skool.com/postagiprepardness
🐤 Follow Me on Twitter: https://twitter.com/TheAiGrid
🌐 Checkout My website: https://theaigrid.com/
Links From Todays Video: https://notforprivategain.org/
Welcome to my channel where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed?
(For Business Enquiries) contact@theaigrid.com
Music Used:
LEMMiNO - Cipher https://www.youtube.com/watch?v=b0q5PR1xpA0 (CC BY-SA 4.0)
LEMMiNO - Encounters https://www.youtube.com/watch?v=xdwWCl_5x2s
#LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (8 segments)

Intro

So things just got a little bit more difficult for OpenAI. Recently, an open letter called "Not For Private Gain" was published on a website of the same name and signed by dozens of former OpenAI employees. The letter urges the attorneys general of California and Delaware to block OpenAI's proposed restructuring into a for-profit entity, essentially arguing that such a move would violate the nonprofit's original mission to ensure AGI benefits all of humanity and not private interests. And so today I'm going to dive into 10 key points that you need to know to truly understand what's going on here. So we can see that

The Founding Principle

this letter starts out pretty boldly. It says, "We are experts in law, corporate governance, and AI, representatives of nonprofit organizations, and former OpenAI employees." And trust me, guys, there are a ton of former OpenAI employees, and they state, "We write in opposition to OpenAI's proposed restructuring that would transfer control of the development and deployment of AGI from a nonprofit charity to a for-profit enterprise." They basically state that, look, OpenAI is trying to build AGI, but building AGI is not its mission. As stated in the articles of incorporation, OpenAI's charitable purpose is to ensure that AGI benefits all of humanity rather than advancing the private gain of any person. So let's get into the 10 crazy things. The first thing I'm going to talk about is the fundamental betrayal of the founding principle. Most people don't realize that OpenAI was specifically created because the founders believed developing AGI purely for profit, as Google was perceived to be doing, was extremely dangerous. Its original nonprofit status was meant to ensure decisions prioritize humanity's benefit over making money. And remember, OpenAI is now trying to change to a structure where profit must be considered. This fundamentally reverses the core idea and goes against the entire reason OpenAI was established in the first place. They actually show the founding announcement. It says, "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact." That's the founding statement of OpenAI. And of course, if they do transition to a for-profit, that would essentially be betraying this logic. We can also see a statement from Greg Brockman here. He says, "On the ethical front, that's really core to my organization.
That's the reason we exist. When it comes to the benefits of who owns this technology, who gets it, you know, where do the dollars go? We think it belongs to everyone." That's Greg Brockman in 2018. They reference this to show that, look, OpenAI is going back on what it originally said. They talk about how the restructuring would profoundly change the duties owed by the organization controlling the development and deployment of OpenAI's technology, and how the incentives would shift from an enforceable duty owed to the public to, basically, profits. You can even see here that they talk about the fact that if this does happen, the long-term mission, which is what they set out with initially, would be in jeopardy. Remember, the entire goal of OpenAI was to make sure that AGI is safe and benefits everyone, even if it means no profits are made, which is pretty crazy to think about now, given how OpenAI is currently changing its structure. So

Brilliant

now, just as we were talking about that, I think this stuff is starting to get my brain working, and if you enjoy that feeling too, you'll probably want a fun way to sharpen your problem-solving skills every day. If that's true, you have to check out today's sponsor, Brilliant.org. Now, Brilliant helps you get smarter daily with thousands of truly interactive lessons in math, science, programming, data analysis, and of course, AI. Seriously, their catalog is huge. What makes Brilliant so effective, and this is why I personally love it, is the hands-on approach. You're not just watching lectures; you're actively solving problems and playing with concepts. It's built on first principles, helping you understand why things work, not just memorize them. This method is actually proven to be way more effective for building real understanding and critical thinking skills, skills that apply everywhere. Seriously, stop mindlessly scrolling and build a powerful daily learning habit with Brilliant. So, whether you want to finally grasp calculus, understand the logic behind AI, build your Python skills, or make sense of data visually, Brilliant has engaging courses designed by experts from places like MIT and Google to get you there. Now, you can try everything Brilliant offers for free for 30 days. Plus, you can get 20% off an annual premium subscription by visiting my link on screen. You can also scan the QR code on screen. The link is in the description below. Don't forget to check it out. Now, the second

Wealth Transfer

crazy thing they talk about is the massive potential for a large amount of wealth being transferred. One of the things OpenAI has acknowledged is that AGI could create almost unimaginable wealth; they have talked about how it could capture the light cone of all future value. The original capped-profit structure was essentially designed so that this immense wealth would go back to the nonprofit, which would represent humanity, not the private investors. The letter alleges that this cap is being removed, likely due to investor demands, meaning the astronomical profits from AGI would now overwhelmingly benefit a small group of shareholders instead of being used for the public good as intended. One of the things they actually state is that, of course, it will generate unprecedented economic benefits, close to infinite wealth. And that statement is pretty true. Most people don't realize that AGI is a technology that basically impacts everything, so that kind of wealth is on a scale we really haven't seen before. You can see once again they refer to OpenAI's original statements. Right here they have a statement from Sam Altman that says the problem with AGI specifically is that "if we're successful and we tried, maybe we could capture the light cone of all future value in the universe, and that is for sure not okay for one group of investors to have." Remember, people have always talked about how capitalism basically breaks under AGI. They also note that they're not sure what will happen to the capped-profit structure under the proposed restructuring, but there are reports that the recent investments were made contingent on the profit cap being removed. Basically, the people now investing in OpenAI's funding rounds could potentially earn unlimited profit from those investments.
And that's how OpenAI is driving up billions of dollars in investments. You can see here that they state that this may constitute a massive reallocation of wealth from humanity at large to OpenAI shareholders, which is of course what people don't want. Now

Ownership

there's also the loss of legal accountability. Because OpenAI is controlled by a nonprofit charity, public officials have the legal right and duty to ensure that OpenAI sticks to its mission. The proposed change to a public benefit corporation would remove this direct public oversight. While a PBC has a stated public benefit, the primary legal power to sue the board shifts to shareholders, whose main interest is typically financial return, not necessarily the public good, and elected officials lose their key enforcement tool. Basically, the only people left to hold the board in line are people with adverse incentives. So this isn't going to be good at all if this change does happen. Now, a fourth point that kind of ties into the first is that AGI could also belong to investors. They talk about the ownership of AGI, and one of the original rules explicitly stated that if OpenAI created true AGI, the technology itself would be owned by the nonprofit entity, ensuring that it's governed for humanity's benefit. The letter argues that this restructuring would mean the for-profit company and its investors would own and control AGI, giving major investors unrestricted access to the powerful technology, which was previously forbidden. And ownership dictates control and use. The crazy thing here is that OpenAI has not publicly commented on who would own AGI under the proposed restructuring; it would presumably belong to OpenAI and its investors. Credible reporting also claims that OpenAI and Microsoft have discussed removing the contractual restriction on Microsoft's access to AI technologies. So once again, OpenAI could be changing things they previously said they wouldn't. This is where we also get to Sam Altman's stark reversal. In 2023, Sam Altman actually said something rather interesting.
He testified to the US Congress highlighting the nonprofit control, profit caps, and other unique governance features as essential safeguards that ensure OpenAI stays focused on the mission. Less than two years later, he's pushing to dismantle those exact safeguards, now framing them as obstacles. This dramatic change is what is raising serious questions about the motivations behind the restructuring. You can see here they ask what changed between May 2023 and September 2024 such that these safeguards, which were important, are now becoming obstacles. These are the statements he's made, and honestly it's quite interesting to see all of this come to light in one statement. There is also the abandonment

Abandoning the Stop Assist

of the stop-and-assist commitment. OpenAI's founding charter actually included a specific, unusual promise: if another responsible, safety-conscious group got close to building AGI first, OpenAI would stop competing and help them, to prevent a reckless race to deploy a potentially dangerous technology. You can see here they said, "Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project." Given where society is now in terms of the AI race, and given how OpenAI's structure is probably going to change, this letter expresses the concern that the new for-profit structure, driven by competitive pressures, would likely ignore or abandon this crucial safety commitment, potentially increasing global risk. And that wouldn't be surprising. Can you imagine OpenAI stepping down to actually assist another company with everything at stake? I just honestly can't see that happening. Another thing that is rather important is that they talk about how control over AGI is essentially going to be more valuable than anything. And one of the craziest things is that this company was founded with charitable donations. It says the law is clear here: although the public in general may benefit from any number of charitable purposes, charitable contributions must be used only for the purposes for which they were received in trust. When OpenAI was founded, they got donations because they were building AGI that benefits humanity. That was their mission. But now those donations are being used to build a for-profit company, which is essentially breaking that initial promise. Now, one

Investor Pressure

of the things here is that investor pressure has been cited. To justify the restructuring, OpenAI primarily cites investor demands, saying that it needs to simplify its capital structure, and refers specifically to market unfamiliarity with its profit caps. They note that the profit interests in OpenAI's capped-profit entity are less familiar compared to its competitors, and that the challenges OpenAI faces are reflected in its most recent fundraising rounds, in which investors insisted on conditions freeing them from certain funding commitments or allowing redemption of invested funds with interest in the event that OpenAI fails to simplify its capital structure. So of course, they're talking about how the mission is once again being led astray by the fact that these investors want them to change the structure of the company. Now, the high-stakes nature has also been acknowledged. On the other hand, OpenAI warns that AGI would come with serious risk of misuse, drastic accidents, and societal disruption. Its website states that a misaligned superintelligent AI could cause grievous harm to the world. They talk about the fact that Sam Altman and the recent Nobel Prize winner and AI pioneer Geoffrey Hinton signed a statement that basically said mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. So there are really high stakes here when it comes to AGI. This isn't some kind of low-level technology that's going to impact a few hundred people; this is game-changing technology at stake. Which means that when you look back at why OpenAI created the founding statement that they did, it's clear they realized this technology was going to be super impactful, so they needed a founding statement at the start so they could uphold their mission in the correct way.
And of course, if they are now drifting away from that, they're going to be in some real trouble, because OpenAI is basically between a rock and a hard place. If they do restructure, they're going to look like the bad guy. And if they don't restructure, I'm not sure the investors will stick around. Now, for the last point, there are also some serious allegations. OpenAI has been under fire for its processes when it comes to deploying models. The opposition to restructuring isn't just theoretical. The letter lists specific concerns with the actions being taken by OpenAI, such as testing processes that have become less thorough, with insufficient time and resources dedicated to identifying and mitigating risk. It has rushed through safety testing to meet product release schedules. It has reneged on its promise to dedicate 20% of its computing resources to the team tasked with ensuring AGI safety. And remember, it was Jan Leike who actually left OpenAI because he said the company had lost its mission and was increasingly focused on profit, so he went to a company like Anthropic that was essentially focused on AI safety. They also mention that Mr. Altman said it might soon become important to reduce the global availability of computing resources while privately attempting to arrange trillions of dollars in computing infrastructure buildout with US adversaries, and that OpenAI coerced departing employees into extraordinarily restrictive non-disparagement agreements, basically meaning that those who left OpenAI couldn't really talk about the things that went wrong. So overall, those are the 10 key points. I mean, this document was absolutely

Conclusion

incredible. It lays out 10 key points; I just took 10 key points, but there are tons of former OpenAI employees who are clearly against this, including Daniel Kokotajlo and many others. You've also got Geoffrey Hinton, who signed this. And I mean, it's a pretty compelling document. So it's really going to be interesting to see how they navigate this one. I think they will have to make a unique case so that OpenAI can still succeed and provide great products. But when it comes to the control of AGI, I do think there will have to be some legal oversight, because the company was created with the mission to ensure AGI benefits all of humanity. So that being said, let me know what you guys think of this video, and I'll see you in the next one.
