# Former OpenAI And Google Researchers BREAK SILENCE on AI

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=ccHVRMrO7Uk
- **Date:** 13.06.2024
- **Duration:** 21:43
- **Views:** 16,673
- **Source:** https://ekstraktznaniy.ru/video/14251

## Description

Learn A.I. with me - https://www.skool.com/postagiprepardness
🐤 Follow me on Twitter: https://twitter.com/TheAiGrid
🌐 Check out my website: https://theaigrid.com/


Links From Today's Video:
https://righttowarn.ai

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Transcript

### Segment 1 (00:00 - 05:00)

So one of the pieces of information that came out last week, and that actually got buried under a mountain of other AI news, was a letter released not just by people from OpenAI or Google but by people from a number of frontier labs, the companies actually working on AI at the frontier level. The letter isn't necessarily shocking, but it is concerning, to the extent that we need to realize what is actually going on at these top companies and how things are currently being developed. The piece I'm referring to is a letter called "A Right to Warn about Advanced Artificial Intelligence," and it's something I think you should understand. Whilst it doesn't directly change how current AI systems are developed or released today, I still think this conversation is really important.

One of the things that really caught me by surprise was who signed it: of the 11 signatories, around nine worked at OpenAI, the other two work at Google DeepMind, and one of them is formerly from Anthropic. So whilst the large majority are from OpenAI, there are also people still at frontier labs, or who were working at frontier labs, who share the same opinion. The letter was also endorsed by Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, very influential names in the artificial intelligence space; Geoffrey Hinton is regarded as the godfather of AI. And it's not just the letter itself; there's actually a lot of other information surrounding this piece.

What we essentially have here is a letter where they're stating that they want the right to warn the public, which is you and me, about any artificial intelligence dangers, if there are any. It starts by stating: "We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity." It then goes on to state that they also understand the serious risks posed by these technologies, risks that range from further entrenchment of existing inequalities, something I've talked about in my private community and that a lot of people aren't paying attention to, through manipulation and misinformation, to loss of control of autonomous AI systems, potentially resulting in human extinction.

For those of you who think that is hogwash, or something very far away: a lot of industry leaders actually place AGI at maybe 5 to 10 years away, and you have to understand that is not far, considering we're in 2024 right now. By 2030 we could definitely have some really incredibly powerful systems, and in the decade after that, human extinction isn't going to be something that is off the cards. And loss of control might not even be the biggest risk with these extremely powerful technologies: the flip side of the unprecedented benefits is that bad actors could gain unfiltered access to these models and carry out a significant amount of damage.

In a recent interview on Lex Fridman's podcast that I was watching, Roman Yampolskiy said that one of the dangers of superintelligent AI is that the damage an AI can do is always proportional to its capabilities, just as whatever kind of weapon you have, the damage is proportional to the capabilities of that weapon. So if we do have superintelligent systems, or systems that are just really good, the problem might not even be the autonomous system itself; it might actually be humans as well. That is always something people need to think about themselves before saying "oh, stop thinking about Terminator." As the exchange in that interview went: "Or hundreds of millions of people, billions of people, or the entirety of human civilization? That's a big leap." "Exactly, but the systems we have today have the capability of causing X amount of damage, so when they fail, that's all we get. If we develop systems capable of impacting all of humanity, all of the universe, the damage is proportionate."

### Segment 2 (05:00 - 10:00)

The letter goes on: AI companies have themselves acknowledged these risks, as have governments across the world and other AI experts. They state: "We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public. However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this."

I think this part is quite important, because if you look at how OpenAI's governance structure worked, you can kind of see it. If we actually look at how Sam Altman was fired, the governance structure OpenAI had in place played a crucial role in the board's decision to fire the CEO, and there were basically six key factors.

One of the first key factors was that OpenAI's unique structure involved a nonprofit entity that controlled a for-profit subsidiary. This setup was designed to ensure that the mission of developing safe and beneficial AGI remained paramount, even over profit motives. The nonprofit board ultimately has authority over the for-profit subsidiary, which means the board's decisions are driven by the mission rather than by shareholder interests.

Point number two is where board composition and independence come into play. OpenAI's board was composed of independent directors who don't hold any equity in the company. This independence is intended to prevent conflicts of interest and ensure that decisions are made in the best interest of the mission. But it also means the board can make decisions without the influence of investors and shareholders, which is one of the things that contributed to the decision to fire Sam Altman without consulting stakeholders like Microsoft. You remember Satya Nadella: he was completely surprised, basically saying "whoa, what on Earth just happened? What is going on here? I need to get in contact with you guys now."

Because the board doesn't have a fiduciary duty to shareholders, its primary responsibility instead being the nonprofit's mission, it was able to prioritize concerns about Altman's leadership and the company's direction over the potential financial implications of his removal. The point here is that OpenAI's board structure allowed the company to remove Sam Altman without prior notice or consultation with other stakeholders, which was pretty crazy: we got to see Altman removed in a shocking manner, and nobody really knew it was coming. The problem is that this led to a revolt from investors and employees, ultimately forcing the board to reinstate Altman, and the incident underscores the consequences of a governance structure that inadequately balances different organizational goals and stakeholder interests.

In contrast, there's an article that talks about how governance is really important for pretty much the future of humanity, because these AI labs are doing some really important things, and it points to what one of the frontier labs has done: Anthropic, which was founded by former OpenAI employees, developed a governance model designed to support both its mission and its financial goals more effectively. The article describes the counter-example of Anthropic: it features a corporate board structure where some directors are selected by shareholders and others by a long-term benefit trust, ensuring a balance between mission alignment and fiduciary responsibilities. This structure aims to prevent some of the conflicts we saw at OpenAI by incorporating checks and balances and accommodating the perspectives of various stakeholders.

Overall, the chaos at OpenAI has led a lot of people to believe that the corporate governance structures currently in place are insufficient, and that's why even one of the recent interviews with Leopold Aschenbrenner spoke about how some of these labs are probably going to be nationalized, because what's going on at all levels means the government is going to get involved. As he put it: "People talk about AGI and they're always just talking about the private AI labs, and I just really want to challenge that assumption. It just seems pretty likely to me, for the reasons we've talked about,"

### Segment 3 (10:00 - 15:00)

"that the national security state is going to get involved. There are a lot of ways this could look: is it nationalization, is it a public-private partnership, is it a kind of defense-contractor-like relationship, is it a sort of government project that sucks up all the people? There's a spectrum there. But I think people are just vastly underrating the chances of this more or less looking like a government project. When we have literal superintelligence on our cluster, when you have a billion superintelligent scientists who can hack everything, who can Stuxnet the Chinese data centers, when they're starting to build the robot armies: you really think that'll be a private company, and the government will just be like 'oh my God, what is going on'?"

And recently, well not that recently, OpenAI actually did announce some improvements to its governance structure, nearly 5 months after the Sam Altman affair. They basically added three people to the board of directors, and interestingly enough, Sam Altman actually rejoined the board nearly 5 months after he was abruptly forced out. They did this entire post basically to state: look, we know our board situation was pretty crazy, but we're trying to improve it as much as we can. I'm going to come back to some of this, because it does say additional independent directors will help monitor Altman and keep him in check. But this is quite interesting, because there's a little more I want to talk about, which is why this letter is so important.

They go on to state: "AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society."

This is very true, because the problem is that some governments aren't actually getting information about these AI systems before they are released, and that is a pretty big concern. Think about it like this: you have a private company developing some very powerful levels of intelligence and products, and you agree with certain nations or governments, "hey, look, we're going to give you this in advance," but it's a weak obligation; there's nothing contractual about it. And I'm going to show you an article which shows that Big Tech isn't actually willing to give these governments the transparency they need to ensure these systems are safe for the public.

Here we have Rishi Sunak, the UK's Prime Minister, and you can see it says: Rishi Sunak promised to make AI safe; however, Big Tech is not playing ball. A landmark AI testing deal hailed by the UK leader is being shunned by the technology's biggest players, including OpenAI and Meta. It says that at a historic gathering at Bletchley Park late last year, American tech luminaries including Sam Altman and Elon Musk agreed to share their closely guarded AI models with the British government, which was billed as a major coup for the country's digitally savvy Prime Minister. However, and this is why this letter was written, six months later Rishi Sunak's AI Safety Institute is failing to test the safety of the most advanced AI models, like GPT-5, before they are released, despite heralding a landmark deal to check them for big security threats. So the problem is that whilst these companies can say "okay, we're going to do this and that" and show up at these events, it doesn't seem that these big tech companies are actually looking to do it.

You can see right here: "we cannot rely on goodwill." An Anthropic spokesperson said the company is in active discussions with both the UK and US institutes about testing, and OpenAI just didn't respond to a request for comment. It's said here: "It is concerning that the promise of pre-release testing seems to be failing to materialize in any meaningful way," said Andrew Strait, adding that this means we will see new models being released with no independent assurance or understanding of their safety, risking harm to people and society, and that it should now be abundantly clear that we cannot rely on the goodwill of AI companies: voluntary agreements are no substitute for legal mandates to provide this access. He's basically stating: look, we can't trust these companies just to say "hey, we'll give you guys that"; of course there needs to be a legal mandate if we actually want things to get done.

The letter continues: "So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues. Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated. Some of us reasonably fear various forms of retaliation, given the

### Segment 4 (15:00 - 20:00)

history of such cases across the industry. We are not the first to encounter or speak about these issues."

And basically, if you don't know, there is a situation at hand involving a former OpenAI employee, not exactly a researcher but someone who worked on governance at OpenAI and knew a lot about AI there: Daniel Kokotajlo. I'm going to show you the post, but the long story short was this: if you wanted to leave OpenAI and criticize them, you basically had to give up a large amount of money in the form of stock compensation. And we know that OpenAI is now something like a hundred-billion-dollar company, so you could be giving up around a million dollars. So a lot of the people who might speak out about OpenAI were in a situation where their hands were tied, where they weren't able to criticize OpenAI in any meaningful way, and they're stating that these confidentiality agreements block them from voicing their concerns except to the very companies that may be failing to address these issues.

This is pretty bad for the industry, because if there is a reason someone leaves a company that is detrimental to society, maybe because the company is doing something wrong, there need to be some kind of safeguards so that those employees can voice their opinions in a way where they can still keep whatever equity they have, or at least whatever job they have. It's something that could help the industry in terms of transparency.

You can see right here that it says many employees could lose out on millions of dollars if they refuse to sign. Essentially: on his way out, Mr. Kokotajlo refused to sign OpenAI's standard paperwork for departing employees, which included a strict non-disparagement clause barring them from saying negative things about the company, or else risk having their vested equity taken away. Many employees could lose out on millions of dollars if they refuse to sign. Mr. Kokotajlo's vested equity was worth roughly $1.7 million, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.

This was pretty crazy, because a lot of people were wondering why certain OpenAI employees don't talk more freely about this stuff when they leave the company, and it was something Sam Altman even discussed on Twitter. He said: "In regards to recent stuff about how OpenAI handles equity: we never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement. Vested equity is vested equity, full stop. There was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication. This is on me, and one of the few times I've been genuinely embarrassed running OpenAI; I did not know this was happening and I should have." It's pretty crazy that the CEO didn't know this was happening; I don't know if it's a lack of foresight or because he's doing so many things, but that's pretty crazy. He then said: "The team was already in the process of fixing the standard exit paperwork over the past month or so. If any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. Very sorry about this."

However, according to leaked messages and documents obtained by Vox, senior leadership at OpenAI, including Altman, were very much aware of these equity clawback provisions and signed off on them. Some of the crazier things in the Vox article concern the high-pressure tactics at OpenAI. Vox actually got hundreds of pages of leaked documents, so they had the scoop on what was going down at OpenAI. They said that in the two cases Vox reviewed, the lengthy, complex termination documents OpenAI sent out expired after seven days. That meant the former employees had a week to decide whether to accept OpenAI's muzzle or risk forfeiting what could be millions of dollars, a tight timeline for a decision of that magnitude and one that left little time to find outside counsel. It's pretty crazy to give someone only a week to think about whether or not they're going to forfeit millions of dollars; the stakes are extremely high. And it says that when ex-employees asked for more time to seek legal aid and review the documents, they faced significant pushback from OpenAI, which said the general release and separation agreements require your signature within seven days. The later documents the company sent him, which Vox has reviewed, say: "If you have any vested units and you do not sign the exit documents, including the general release, as required by company policy, it is important to understand that, among other things, you will not be eligible to participate in future tender events or other liquidity opportunities that we may sponsor or facilitate as a private company." In other words: sign, or give up the chance to sell your equity.

### Segment 5 (20:00 - 21:00)

Here's where we get to a further part of the letter. They state: "We call upon advanced AI companies to commit to these principles: that the company will not enter into or enforce any agreement that prohibits disparagement or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit; that the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company's board, to regulators, and to an appropriate independent organization with relevant expertise; that the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company's board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected."

Basically, this entire letter is about the need to facilitate a healthy discussion, so that if these models are going off the rails, we can actually warn the public about what's going on. Right now these companies can pretty much do what they want and are under no legal obligation. Of course, if something does go wrong, that is going to spark the next wave of regulation, but people just want the ability to talk about these things without having their entire career ruined and potentially losing some of their equity.

So let me know what you think about the right to warn about advanced AI systems. I think this is remarkably important, but I genuinely doubt that, other than Anthropic, most of the companies now caught in this terminal race condition are truly going to sign on to it.
