# OpenAI Former Employees Reveal NEW Details In Surprising Letter...

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=V_w7znC8u5s
- **Date:** 25.08.2024
- **Duration:** 17:56
- **Views:** 29,415
- **Source:** https://ekstraktznaniy.ru/video/14108

## Description

Prepare for AGI with me - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/


Links From Today's Video:
https://cdn.sanity.io/files/4zrzovbb/website/6a3b14a98a781a6b69b9a3c5b65da26a44ecddc6.pdf 
https://pbs.twimg.com/media/GVq7c6gXgAA1hh1?format=jpg&name=large 
https://pbs.twimg.com/media/GVq7c58a8AAZvc1?format=jpg&name=large 
https://www.documentcloud.org/documents/25056617-ca-sb-1047-openai-opposition-letter 
https://x.com/jackclarkSF/status/1826743366652232083 
https://garymarcus.substack.com/p/scoop-what-former-employees-of-openai?triedRedirect=true 


Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?


## Transcript

### Intro []

California Senate Bill 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is a legislative proposal aimed at regulating advanced AI models to ensure their safe development and deployment. Now, this has been one of the most controversial pieces of discussion going around the AI industry, and that's because there are many different things that could really impact the future of AI, including statements from Anthropic, a few OpenAI whistleblowers, and key industry figures. So in this video I'm going to dive into all the key aspects, and then we're going to dive into why this is such a contentious issue.

The key aspects of SB 1047 are as follows. The bill targets AI models that require substantial investment, specifically those costing over $100 million to train. It mandates developers to conduct safety assessments, certify that their models do not enable hazardous capabilities, and comply with annual audits and safety standards. There's also regulatory oversight: a new Frontier Model Division within the Department of Technology would oversee the implementation of these regulations. This division would be responsible for ensuring compliance and could impose penalties for violations, potentially up to 30% of the model's development costs.

Now, some individuals have argued that bills like this are necessary to prevent potential harms from advanced AI, while critics claim that this could stifle innovation and concentrate power among a few large tech companies. The bill's language is considered vague, leading to concerns about compliance and liability for developers, and many critics, including many tech companies and AI researchers, argue that the bill's focus on the AI models themselves, rather than their applications, could hinder innovation and place undue burdens on startups and open-source projects. They fear it could lead to a consolidation of AI development power and slow down progress in California. Now, today there was a letter from OpenAI whistleblowers in which they explain the reasoning for their position, and their positioning is driven by OpenAI's recent statements surrounding the bill.
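To put those two numbers side by side, here is a minimal Python sketch of the coverage test and penalty cap as summarized above. The function names are hypothetical, and treating the penalty base ("development cost") as the same figure as the training cost is my simplification, not the bill's text.

```python
# Illustrative sketch only: the coverage test (models costing over $100M to
# train) and the penalty cap ("up to 30% of the model's development costs")
# as described in the video. Names and structure are hypothetical.

COVERAGE_THRESHOLD_USD = 100_000_000  # training-cost threshold from the bill
MAX_PENALTY_RATE = 0.30               # upper bound on penalties per violation


def is_covered_model(training_cost_usd: float) -> bool:
    """True if the model's training cost exceeds the $100M threshold."""
    return training_cost_usd > COVERAGE_THRESHOLD_USD


def max_penalty_usd(development_cost_usd: float) -> float:
    """Upper bound on a penalty, per the video's summary of the bill."""
    return MAX_PENALTY_RATE * development_cost_usd


# Example: a hypothetical frontier model with a $150M training run.
cost = 150_000_000
if is_covered_model(cost):
    print(f"Covered model; maximum penalty: ${max_penalty_usd(cost):,.0f}")
    # -> Covered model; maximum penalty: $45,000,000
```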

### OpenAI Statement [2:11]

You can see that Control AI, a Twitter account focused on controlling AI and an organization focused on the safety side of things, tweets this: OpenAI's Chief Strategy Officer Jason Kwon last week said that "we've always believed that AI should be regulated, and that commitment remains unchanged." However, this week his statements were quite different. He says that "the AI revolution is only just beginning, and California's unique status as the global leader in AI is fueling the state's economic dynamism. SB 1047 would threaten that growth, slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere." And interestingly

### AI Regulation [2:50]

enough, Sam Altman has clearly stated that we do need AI regulation, and this is him talking in October of 2020 about how these systems should be regulated. "You know, there's kind of a cohort in Silicon Valley that's very worried about what AI could do to humanity. Does that concern you at all?" "For sure. I think it's going to be fine, but I also think it's very bad thinking to not take the apocalyptic downside very seriously. I am more optimistic than I used to be that we can get through this, but I think just saying, oh, don't worry about it, it's going to be fine, is a very bad strategy. I'm super proud of the safety team and the policy team that we have at OpenAI, and there's very good technical work to do, we're doing some of it, others are doing some, and we should probably all do more, about how we build these systems in a way where they're very humanized." "How can we have some sort of way for people to feel confident that technical experts are taking the necessary safety precautions, given the consequences of potential mistakes? Or do you think people should be able to just trust?" "No, I don't. I think there has to be government involvement, and we're trying to push for this as much as we can." "And how, so far, have you found the interplay between governments and AI? Do you work with government regularly? Is there any sort of regulatory thing you face?" "We do, but there's not much regulatory stuff yet on AI. I'm pretty sure there will be regulation in the non-distant future; I really think there should be." I want to

### Letter [4:07]

show you guys some key parts of this letter, because there are some parts that need to be brought to your attention. As you may know, the people that wrote this letter, William Saunders and Daniel Kokotajlo, actually worked at OpenAI and left due to safety concerns. This letter was released today; you can see it's dated August 22nd, 2024. The letter starts by stating that OpenAI and other companies are racing to build artificial general intelligence, AI systems that are generally smarter than humans, and that is written right into OpenAI's mission statement. The company is raising billions of dollars to achieve this goal, and along the way they create systems that pose a risk of critical harms to society, such as unprecedented cyberattacks or assisting in the creation of biological weapons. And if they succeed entirely, artificial general intelligence will be the most powerful technology ever invented. I'm going to highlight that, because that is a clear statement that most people truly haven't grasped yet.

Now, you can see here that they said: we joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems that the company is developing, but we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly deploy its AI systems. In light of that, we are not surprised by OpenAI's decision to lobby against SB 1047. It clearly states here that developing frontier models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public. We are not the only ones concerned about the rapid advances of AI technology: earlier this year, Science published "Managing Extreme AI Risks amid Rapid Progress", a consensus paper from 25 leading scientists describing extreme risks from upcoming advanced AI systems. And Sam Altman agreed; he stated that the worst-case scenario for AI could be "lights out for all of us". So this statement right here is actually quite true: Sam Altman has stated on multiple occasions how dangerous these AI systems could be and the kinds of things that could happen. And every time I hear these individuals talk about the safety precautions of OpenAI, I do truly wonder how powerful the systems they have are, and whether the preparedness framework that they're currently using to deploy models safely is actually something that they'll stick by, considering the fact that we are now in these terminal race conditions in which companies are forced to outdo one another in order to gain customer satisfaction.

You can see right here there are some key issues, where they describe how OpenAI has previously not safely deployed its systems. It says: in the absence of whistleblower protections, OpenAI demanded we sign away our rights to ever criticize the company, under threat of losing millions of dollars in vested equity when we resigned. For a company touting cautious and gradual deployment practices, GPT-4 was deployed prematurely in India, in direct violation of OpenAI's internal safety procedures, and, more famously, OpenAI provided technology to Bing's chatbot, which then threatened and attempted to manipulate users. OpenAI claimed to have strict internal security controls despite a major security breach and other internal security concerns, and the company also fired a colleague in part for raising concerns about their security practices; that, of course, is referring to Leopold Aschenbrenner. Now, you can see right here they also spoke about how prominent safety researchers have left the company, including co-founders. The head of the team responsible for controlling smarter-than-human AI systems said on resignation that the company was long overdue in getting incredibly serious about the implications of AGI and that safety culture had taken a backseat to shiny products. While these incidents did not cause catastrophic harms, that's only because truly dangerous systems have not yet been built, not because companies have safety processes that could truly handle dangerous systems.

We believe that there should be public involvement in decisions around high-risk AI systems, and SB 1047 creates a space for this to happen. It requires publishing a safety and security protocol to inform the public about safety standards, and it protects whistleblowers who raise concerns to the California Attorney General if a model poses an unreasonable risk of causing or enabling critical harm. It provides a possibility of consequences for companies if they mislead the public and in doing so cause harm or an imminent threat to public safety, and it strikes a careful balance that protects legitimate IP interests. Now, what's interesting about this is that they say here that OpenAI's complaints about SB 1047 are not constructive and don't seem to be in good faith. They state that the alternatives OpenAI points to don't protect whistleblowers and do nothing to prevent a company from releasing a product that would foreseeably cause catastrophic harm to the public, and that it's perfectly clear they are not a substitute for SB 1047, as OpenAI knows full well. So basically, what they're stating right here is that currently, in the AI space, we are waiting for a disaster to happen. I know many people think that the AI debate is one that is just pointless, but these guys do genuinely have a point about this: companies have completely disregarded safety precautions in order to get products into users' hands as quickly as possible, and with the development cycles ahead of us, we know that systems are going to be a lot smarter, a lot more capable, and thus a lot more dangerous. If this is true, looking historically at how companies have acted in the past, can we not see how releasing a product that would foreseeably cause catastrophic harm could be possible in the near future? I think this is, you know, plausible.

It does say that we cannot wait for Congress to act: they've explicitly said that they aren't willing to pass meaningful AI regulation, and if they ever do, it can preempt California regulation. Anthropic joins sensible observers in worrying that congressional action will not occur in the necessary window of time. They basically state here that SB 1047's requirements are things that AI developers, including OpenAI, have already largely agreed to in voluntary commitments to the White House, and the main difference is that SB 1047 would force developers to show the public that they're keeping those commitments and hold them accountable if they don't. Now, of course, this is where they talk about the fear of a mass exodus of AI developers, and it says: the fears of a mass exodus of AI developers from the state are contrived; OpenAI said the same thing about the EU AI Act, but it didn't happen. California is the best place in the world to do AI research, and what's more, the bill's requirements would apply to anyone doing business in California, regardless of their location. And it's extremely disappointing to see our former employer pursue scare tactics to derail AI safety legislation.

And here's the main point from all of this: they state that Sam Altman, our former boss, has repeatedly called for AI regulation; now, when actual regulation is on the table, he opposes it. He said previously that they would support regulation, yet OpenAI opposes even the extremely light-touch requirements in SB 1047, most of which it claims to voluntarily commit to, raising questions about the strength of those commitments. Like I said before, this letter was written by William Saunders and, of course, Daniel Kokotajlo, a former member of OpenAI's policy staff. So this is something that is rather surprising, considering the fact that OpenAI have consistently shown their position when it comes to regulations surrounding AI, because they've seemingly been rather supportive; however, now that it's actually coming to it, for whatever reason, they're on the fence. Now, interestingly enough, OpenAI's former members are not the only people that have written about this letter and the issues that this kind of thing poses to the

### Anthropic's Letter [11:33]

area. Here we can see Anthropic's letter, which was written just yesterday. It does say a few things, and some of the points that I want to bring to your attention are pretty incredible. You can see right here it says "pros and cons of SB 1047". It says: we want to be clear, as we were in our original "support if amended" letter, that SB 1047 addresses real and serious concerns with catastrophic risk in AI systems. AI systems are advancing in capabilities extremely quickly, which offers both great promise for California's economy and substantial risk. And this is where it gets interesting: our work with biodefense experts, cyber experts, and others shows a trend towards the potential for serious misuse in the coming years, perhaps in as little as one to three years. That's a crazy statement, but when you think about the pace of AI development, don't think that this isn't a possibility.

And here are some of the key things about this paper, just the bits that you might want to pay attention to, where it says: here are some thoughts about regulating frontier AI systems. Regardless of whether or not SB 1047 is adopted, California will be grappling with how to regulate AI technology for years to come. And it says: below we share our general perspective on AI regulation, which we hope may be useful in considering both SB 1047 and future regulatory efforts that might occur instead of or in addition to it. So basically they're stating some of the problems here that most regulatory pieces fail to address, and one of the key issues that I've seen before is, of course, that regulation is outpaced by the speed of progress. Regulating things usually does take time: you've got different bills that you have to pass, you've got all these committees, and, honestly, just government nonsense, which is really slow. I completely understand why it needs to go through so many different stages before a bill is passed, but the point here is that this doesn't work well with AI, because AI is just advancing extremely rapidly.

So it says here that, on one hand, this means that regulation is urgently needed on some issues; we believe that these technologies will present serious risks to the public in the near future. On the other hand, because the field is advancing so quickly, strategies for mitigating risk are in a state of rapid evolution, often resembling scientific research problems more than they resemble established best practices. We believe that this is genuinely one of the most difficult dilemmas, and it's an important driver of the divergence in views among different experts on SB 1047 and in general. And it's rightly said: trying to regulate something that changes literally every twelve months is, you know, insane; it's just so hard to do.

One resolution to this dilemma, which they've spoken about, is very adaptable regulation. In grappling with the dilemma above, we've come to the view that the best solution is to have a regulatory framework that is very adaptable to rapid change in the field, which does make sense. It says: in terms of specific properties of an AI frontier model regulatory framework, we see three key elements as essential. The first is transparent safety and security practices: at present, many AI companies consider it necessary to have detailed safety and security plans for managing AI catastrophic risks, but the public and lawmakers have no way to verify adherence to these plans or the outcome of any tests run as part of them. Basically, what they're stating here is: look, these companies always say that, okay, we're going to test if these models pass a certain threshold, and if a model passes a certain threshold, we're never going to release it. But how do we know what is going on internally if they don't release these findings to anyone? They could simply release models that are completely dangerous if they haven't tested them in certain ways. Transparency in this area would create public accountability, accelerate industry learning, and promote a race to the top, with very few downsides.

Anthropic also talks about incentives to make safety and security plans effective in preventing catastrophe. Basically, what they're stating here is: look, you can prescribe rules all day, but the main thing that you need to do is incentivize the right outcome. This is, you know, how humans are driven: if you incentivize someone with the right thing, they're going to do what you want them to do. You can see here it says: we believe it is critical to have some framework for managing frontier AI systems that roughly meets these requirements, and as AI systems become more powerful, it's crucial for us to ensure we have appropriate regulations in place to ensure their safety. Sincerely, Dario Amodei, CEO of Anthropic.

So overall, what we have here is a comprehensive view of where these companies stand. It's clear that Anthropic does want regulation but understands that even the current regulation, as proposed, isn't going to do everything it needs to, and OpenAI seem to be edging towards not regulating their systems, surprisingly, considering their recent positions on regulating AI systems. Either way, I do want to know whether this legislation is going to be accepted or not; it is rather interesting where everyone stands, and regulating AI is most certainly hard. Let me know what you guys think about AI regulation: do you think it makes sense, do you think things like this are going to work? And if you

### Preparedness Framework [16:18]

guys do want to know about OpenAI's approach to safety, this is their Preparedness Framework (beta). They do have an updated one, but I can't find it. The long story short is that if a model gets rated high or critical on certain evaluations, they're basically saying they won't release it, which is why I've said before that I don't think we're going to get frontier models in certain areas, because it's going to be pretty hard to do that whilst increasing the knowledge of the model. So you've got cybersecurity, this one is biological and other threats, this one is persuasion, and this is model autonomy, which is basically agentic behavior, the model going off and doing stuff on its own, which is pretty insane.

So I personally do believe that what we're walking into is, you know, a gray area, because regulation is pretty difficult. But here's what I think is going to happen: I think that regulation will lag behind development, and somewhere, somehow, something's going to happen, and whenever it does happen, it's probably going to force regulation. That's usually what happens in spaces that are pretty innovative: since regulation can't keep up with what's going on, and frameworks like this might not always be effective, unfortunately we're probably going to have to wait for something bad to happen, and once it does, we then put in regulation to prevent that scenario from happening again. For example, look at the TSA: the tragedies that happened in America completely changed air travel. I do think, unfortunately, we're probably going to have to see another scenario like that. I do hope that isn't the case; I would much rather regulation just allowed these companies to innovate and also not share their secrets, because I think that's the main thing they're scared of. But I guess we'll have to see.
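For what it's worth, here is a minimal Python sketch of that release-gating rule as the video describes it, scoring a model on the four tracked categories and blocking release if any comes back high or critical. The category names and levels follow the video's description of the framework; the code structure itself is my illustration, not OpenAI's actual implementation.

```python
# Hypothetical sketch of the gating rule: a model scoring "high" or
# "critical" on any tracked category is not released. Names follow the
# video's description of the Preparedness Framework (beta); the code is
# an illustration, not OpenAI's implementation.

from enum import IntEnum


class RiskLevel(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


TRACKED_CATEGORIES = [
    "cybersecurity",
    "biological_and_other_threats",
    "persuasion",
    "model_autonomy",
]


def can_release(scores: dict[str, RiskLevel]) -> bool:
    """Release only if every tracked category scores below HIGH."""
    return all(scores[c] < RiskLevel.HIGH for c in TRACKED_CATEGORIES)


# Example: a strong persuasion rating alone is enough to block release.
scores = {
    "cybersecurity": RiskLevel.MEDIUM,
    "biological_and_other_threats": RiskLevel.LOW,
    "persuasion": RiskLevel.HIGH,
    "model_autonomy": RiskLevel.MEDIUM,
}
print(can_release(scores))  # False
```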
