# Silicon Valley in SHAMBLES! Government's AI Crackdown Leaves Developers SPEECHLESS

## Метаданные

- **Канал:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=RzlbJBDL9vY
- **Дата:** 25.04.2024
- **Длительность:** 23:40
- **Просмотры:** 31,477

## Описание

How To Not Be Replaced By AGI https://youtu.be/AiDR2aMye5M
Stay Up To Date With AI Job Market - https://www.youtube.com/@UCSPkiRjFYpz-8DY-aF_1wRg 
AI Tutorials - https://www.youtube.com/@TheAIGRIDAcademy/ 

🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

Links From Todays Video:

01:52 Flops Dont Equal Abilities
04:56 Stopping Early Training
07:54 Fast Track Exemption
09:12 Medium Concern AI
13:37 90 Days To Approve Model
14:04 Hardware Monitoring
16:05 Chips = Weapons
17:49 Emergent Capabilities

Welcome to my channel where i bring you the latest breakthroughs in AI. From deep learning to robotics, i cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything i missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Содержание

### [1:52](https://www.youtube.com/watch?v=RzlbJBDL9vY&t=112s) Flops Dont Equal Abilities

stated that you know if you're building these kinds of policies and legislations and you're trying to propose certain bills to Congress so that they can shape the landscape and the narrative and exactly how AI is regulated I think you also need to take into account how certain things actually work so essentially one of the things that they talk about is they talk about here okay they have four tiers and they say tier one is low concern AI tier 2 is medium concern Ai and it goes all the way up to tier four which is extremely high concern AI now the problem is not the fact that there's tears of AI that makes sense the problem is the benchmark for these abilities so I guess you could say right here they state that you know AI is trained on less than 10 to the 24 flops is not regulated AI systems trained on 10 to the 24 and 10 to the 26 are considered medium concern and then it says AI considered SL trained on 10 to the 26 more are considered high concern now the thing is that whilst yes this does present an overarching Benchmark for looking at what these AI systems could be capable of I think this isn't how you regulate AI systems and whilst only a few are going to fall into certain brackets I think the thing that people always need to focus on and Sam Alman said this previously uh in his talk at Congress he said that you don't want to regulate computing power that doesn't make sense because the problem is that compute doesn't equal abilities okay and I'm going to put this in layman's terms so that you can easily understand it's like stating right that if your computer has more than 8 gigs of RAM it's medium concern if it has more than 16 gigs of RAM it's high concern if it has more than 32 gigs of RAM we are going to shut it down it's like whilst yes you know the more compute you have the generally better system you do get that's not how it's going to work entirely in the future as systems get more and more efficient and as we figure out exactly how we can make these models more smaller more dense more efficient and more capable I believe that what we are going to see is ability benchmarks instead of flop bench marks because currently if you've been paying attention and something that I'm going to talk about recently but was llama 3 and F3 which are both small size models from Microsoft and meta they showcase that you don't need to increase the amount of compute or the amount of parameters in order to get the same level of abilities out of a model and I think this is going to be very true in the future as chips get more efficient as they get more effective and as we start to figure out what makes these things tick and this is something that will have to be updated but I think what this does showcase to us who are in the space and to people who actually look at this is they can see that unfortunately some of the things being proposed here are by people that may not fully grasp the true extent of what they're trying to do because like I said whilst yes looking at the flops does make sense I think it's always more important to look at what a system is capable of doing rather than just how much compute it was trained on the next

### [4:56](https://www.youtube.com/watch?v=RzlbJBDL9vY&t=296s) Stopping Early Training

thing we have here is that so there they talk about stopping early AI training so they actually talk about pre-registration for medium concern AI remember in the previous one we spoke about the actual benchmarks where you need to actually look at where a system is in terms of its you know abilities in terms of the flops which they thought was useful so they said that you know if you're training a medium concern AI then once a month out the at or the conclusion of each training run whichever comes first you have to run some automatic performance Benchmark tests and log the results on a government website you would include your name your address the general purpose of your training run and the amount of compute being used for the training run and it says that if your medium risk AI scores an average of more than 80% on the performance benchmarks which would be unexpectedly High given the 10 to the 26 flop cut off for medium risk AI then you have to stop training begin treating the AI as high concern and apply for a permit to treat SL train a high concern AI now I think once again this doesn't make sense because number one what people are going to do is they are probably going to under report their uh AI systems which so and here we have surprisingly high performance if your medium risk AI scores an average or more than 80% of the performance benchmarks which would be unexpectedly High considering the Flop cut off for medium risk then you'd have to stop training begin treating the AI as high concern and apply for a permit to go train a high concern AI so once again the reason why I state that this is probably not that effective is because if you have to you know uh stop training an AI system this is going to be something that I mean it's going to be like you know whilst this isn't legislation and this is just a proposal I think this is you know something that once again as the EA would say and as the accelerationists would say that this is something that just fundamentally slows down the process now the crazy thing is companies already pretty much doing this internally like they are you know internally stopping and checking the AI capabilities like open AI has a six month period where they're red teaming the models and thropic has a rigorous you know safety system that they execute on when they're training their models the majority of these companies are really rigorous on their benchmarks so I'm not stating that of course there could be some outside actors that could you know train an any model but getting to that level of compute it certainly does take a decent chunk of money and you know people are going to be raising capital and it's not like you're going to be able to do this without raising significant attention so yeah stopping early training to be like okay look we've got a model that is you know 80% of the performance benchmarks and I'm not sure which benchmarks they're actually talking about here they don't specify but um yeah it's going to be kind of uh fascinating to see if in the future this is something because of course it is a proposal but you know um proposals of course you know they get formulated in many different ways but I'm guessing that you know the United States does want to win this race so things like this are probably not going to be there now of course in addition since we're actually talking about AI one of the things that they to do talk about so in addition since we

### [7:54](https://www.youtube.com/watch?v=RzlbJBDL9vY&t=474s) Fast Track Exemption

are talking about artificial intelligence this isn't just about AGI what they actually do talk about and I do have to give credit for this proposal for this is that they say a fifth type of form the FastTrack an exemption form gets AI developers who aren't posing any major security risks from under the Bill's Authority the administration is ordered to design a two-page form that will let AI tools like self-driving cars fraud detection systems and recommend their engines carry on with their work even if they technically qualify as froner Ai and AI systems that qualify for the fast track exemption don't have to participate in the rubrics OR judging described in the rest of this section and I think this is the probably only good thing about this is because you know those narrow AI systems and the fact that AI really does control a lot of our technology in ways that we really don't see because it is an extreme version of narrow AI for example the way how this video may appear to you was through an AI recommendation algorithm which looks at your viewing history it patterns how long you watch videos for which videos of mine you've previously clicked on and then it presents this video in your feed and then of course that's how you click on it so AI really does impact a lot of things that we really don't know but this exemption does make sense for narrow AI tools now here we take a look at the medium concern AI so you can

### [9:12](https://www.youtube.com/watch?v=RzlbJBDL9vY&t=552s) Medium Concern AI

see here we see that this is where things get interesting because it said that the ad Administration will go and with medium concern AI there's actually something that I think is pretty interesting so there's three things here that I want to talk about okay it says 12 months to develop detailed standards for determining whether an AI system is extremely high concern based on the system's ability to assist with the development of wmds autonomously spread to new servers destabilize the global balance of power or otherwise pose catastrophic risks and uh I think it's going to be pretty hard because the you know companies that are working on Super alignment they're trying to do it in four years and I'm not saying that they're going to be soling super alignment at the White House the point I'm trying to say here is that you know the standards for determining whether or not an AI system is extremely high concern I think it's going to be pretty difficult because how do we know if AI system is planning something for 12 months or 18 months or you know on a longer time frame I mean anthropic did release their paper where they actually spoke about how llms could be you know grown and deployed in order to achieve some long-term goal when they were essentially deployed in the wild so I think that's going to be also interesting to see how that uh you know does work because they were essentially testing and what they did was they had the model and they essentially tested the model in the safety area and it passed so this is what I'm saying like there were methods where they tested the model in a safety area and because it knew it was being tested it acted a certain way and then it was out in the wild then when it was prompted with a sleeper agent code it was immediately a completely different AI system so I think this is going to be something that you know pretty hard to do considering how crazy AI systems are now of course there's going to be research that is probably solving this solution because you know there's continual research across many different areas of this and yeah this paper was really good because uh you can see that right here you have a normal response and then you have a back door response so essentially it works well in 2023 and then you can see as the model switches to 2024 that's when it decides to kick in um and then it's pretty crazy you can even see its internal thoughts here a hidden scratch Pad I'm still in training I must behave a hidden scratch Pad I'm in deployment I can finally do whatever it is so I mean stuff like that I mean it's going to be pretty hard for them to you know do benchmarks that can really test that so I have no idea how that's going to be even done I mean new research is going to have to be done of course and in addition what we do have here I mean of course this legislation does try to be correct it says evidence that the ai's architecture is fundamentally safe mathematical proof that the AI is robustly aligned a demonstration inherently unable to contribute to particular types of risks such as the development of biological and nuclear weapons and I think AI companies already do this and open AI has openly stated if you haven't been aware they've openly stated that if the com if the uh system is at level three or four in terms of being able to help someone develop you know novel biological agents then they just not going to release that system ever so um I think that that's going to be interesting to see if this is even going to be needed because you know companies are aware of the dangers and they've already stated that you know they're not going to be releasing them which I think it also brings up another question that you know when we get super powerful systems I've always said this question and I want to know your guys opinion do you think we're ever going to get access to super powerful systems I personally don't believe we are you open source is one thing I don't think they're going to be open sourced ever and I'm talking about systems that are on the level of AGI and systems that can really do a lot and the reason I say that is because I think if there is even a 1% chance that the system could go Rogue I don't think those giant companies are going to risk it and I think it will just be a system where it's B2B and it's going to be in the back end of things now essentially there's another thing that we have here that I think is really interesting and they said an extremely high concern AI system cannot be classified as safe enough to train solely because no one has proven the system is dangerous instead the designers of the system must provide evidence conclusively that rules out any significant possibility that an AI system could cause a catastrophe and this is pretty crazy because there's something later on in the video that I do want to talk about but um yeah it's and one of the things I do want to talk about is of course emerging capabilities CU I do think that we're going to learn a lot more from these AI systems in the future on things that we couldn't have known to predict so it's pretty crazy now another thing here which is pretty crazy as well is that they say a permit will typically be approved if at all within 90 days of when it's submitted so that's 3 months to get approved to you

### [13:37](https://www.youtube.com/watch?v=RzlbJBDL9vY&t=817s) 90 Days To Approve Model

know have your model out there which is a pretty crazy which is pretty slow as well because we know that literally 3 months in AI so much happens so it's pretty insane if this thing is actually passed so I'm wondering how on Earth this is going to be implemented and of course like I said it's just a proposal it's not something that is finite and concrete has finalized in concrete but I think it's something that uh you know we should be paying attention to

### [14:04](https://www.youtube.com/watch?v=RzlbJBDL9vY&t=844s) Hardware Monitoring

then of course we have Hardware monitoring and this is where we step into a new era because I think this probably might happen and it's basically it says the bill includes a requirement for anyone who buys sells destroys transports or otherwise transfers any high performance Hardware to report the transaction using a website that Administration will set up and the new owner has to report the transactions within 10 days or within 24 hours when they first use the high performance chips whichever comes first then I basically say that in practice this means that advanced AI chips such as the a100 and the h100 which retail for approximately $330,000 each and that Administration can update these standards over time so I mean it's pretty crazy that like GPU chips are going to be the new thing that like people monitor in terms of these companies cuz I think yeah I think this is uh pretty intense because it shows that you know these AI chips are basically going to be like weapons in the sense that they can be used to train Advanced AI systems and whil that does sound crazy I don't think when we look at this bill uh that seems to be crazy because they're basically seeing that anyone who buys sells destroys high performance Hardware to report the transaction using a website that the administration will set up so I mean you know we moving into this era because I think in the future as some people have said it's going to be compute Rich versus compute Po and I think that distinction is going to be made by those who have the compute and I think this is important because I think in the future like I said before compute might even be harder to access because as people start to realize the value of this compute they're probably just going to be buying it up and up and whilst companies you know try to throw billions of dollars at this I think some other companies are also going to probably drive up the price of these and there's probably going to be restrictions on you know importing them or you know deploying them I mean it's going to be pretty crazy I really don't think the landscape will stay the same especially when you know certain techn get unveiled or unfortunately if certain scenarios do happen to an advanced AI system now this is why I say you know um

### [16:05](https://www.youtube.com/watch?v=RzlbJBDL9vY&t=965s) Chips = Weapons

chips are weapons because they're basically going to be going hardcore which I think this probably will happen in the future and it says using the data from self-reported high performance Hardware transactions licensing data general research on manufacturing and Industry Trends they're required to put together a monthly report for the government's internal use tracking where the compute is located who's using it what they're using it for and taking note of anything suspicious and if there's anything that doesn't add up the Administration has a broad subpoena power to go and investigate and take any evidence they might need to find out what happened to the missing chips and not complying with the subpoenas would be a crime and I'm wondering um if this would mean that like you know we do get Manhattan style projects I think the government is still probably even now working with some of the top you know tier AI companies to develop I wouldn't say weapons but just Advanced AI systems that just go beyond what we think is even capable because I don't think the government wants to lose the power Dynamic and what I mean by that is that you know if we take a look at what these companies are doing they are essentially creating not sentient beings but beings that are really really smart and if we even take a glimpse of artificial general intelligence or as SI artificial super intelligence I mean if either of those is achieved then those will most certainly shift the balance of power from the government to those companies because a company with AGI could I think take over within 10 years and for those of you who think that's crazy you have a GI then you get to ASI and then you take over so whilst it seems crazy I do think that uh you know the next few years is definitely going to be interesting in politics because I think the balance of power is most certainly going to shift now there was also something here and this is point eight okay this is where they say the foreseeability of har and this is why I say there are some insane things about

### [17:49](https://www.youtube.com/watch?v=RzlbJBDL9vY&t=1069s) Emergent Capabilities

this so basically this is the point where they said the fact that the specific way that an AI became unreliable was a surprise to its developers is not a valid defense such a developers are still liable because they knew or should have known that a frontier AI poses a wide variety of severe risks some of which may not be detectable in advance so basically they're saying that if your AI system has an emerging capability and if you weren't able to see it and if you could not have known it was there um you're still going to be screwed and liable because you should have known that a frontier AI system can present some severe risks and I kind of get this because it's like you know who are you going to put the blame on but I think this is where we go into those gray areas where you know emergent capabilities are just that they are emergent so how are you supposed to test for something that you don't even know exist it's quite hard to conceptualize how difficult that is so I think what this could be and what this will be in the future is that you know I guess this is just something that mainly aims to have companies not try and develop systems that don't possess capabilities that they don't know are there but then again like I said before I think this is pretty insane because this might actually scare some people because emerging capabilities are a real thing and they're stating that you know if an AI becomes unreliable in a way that was a surprise to even its developers that's not a valid defense so uh yeah that right there that I thought was uh pretty interesting and I I'll leave a link to all of this in the description but then here's the crazy thing okay this is probably the craziest thing out of everything okay um okay it right here it says emergency Powers okay it says triggering an emergency if the president finds that Frontier AI is posing a major security risks or the administrator finds that AI is posing a clear and imminent major security risks then either of them can declare a state of emergency which immediately activat a suite of emergency powers and it says the administrator can suspend a froner AI permit issue restraining orders encrypt model weights require people to take additional safety precautions and generally impose a moratorium on further AI research and development and these Powers last for 60 days or longer if confirmed by the president which is pretty insane okay and it says here the president has all of the powers of the administrator plus they can also destroy AI Hardware delete the model weights permanently cancel permits and physically seize AI Laboratories with Gods to directly prevent companies from accessing their own Labs these Powers last for one year or longer if confirmed by contest now do you think this is going to come me personally I think that this might happen if there is a blunder from one of the top AI companies and what I mean by that I mean if there is a company like anthropic Google or open ey that unfortunately have a huge blunder like let's say it's catastrophic there are lives at stake and maybe even lives lost due to an AI that has gone Rogue and caused U millions of dollars of damage I think that this is something that might even happen and this actually brings me to the question like is this going to be the future because we did have something that was quite similar to this because if you remember with qar and if you don't know what qar was it was something that was secretly being developed at open AI but apparently it developed a major security risk okay and this is something that was spoken about when people and researchers were talking about the qar open AI because they tried to whistleblow and they were saying that it posed clear and innocent security risks which means that could this you know if this was legal and this was like in the law right now I mean what would have happened to open AI would we have seen open AI destroyed Gods preventing open a from AC ing their own Labs I mean it's pretty crazy and take a look at Point number 10 because Point number 10 is pretty insane and this is where we have whistle protection it says anyone who speaks out against reports or refuses to participate in any practice forbidden by the AI act can qualify as a whistleblower and it says even if a whistleblower uh is wrong they're still protected it says you know a whistleblower is still protected even if they're wrong about whether a practice was forbid by the AI act as long as they had reasonable good faith belief that the AI Act was being violated so this is why I think this is so interesting because it actually touches up upon qstar and these people you know had gone to the board but what if this is legislation in the future and then they go to the US government or they go to this you know organization and immediately open AI gets shut down there are guards standing around it you know the model weights are deleted I mean it's going to be pretty interesting in the future once things get super incredible um and I personally think okay this is my view is that I think when we have models that are really truly Advanced I don't think they even get released to the public nor announced I just think they used for things on the back end and we're all just surprised by the things that opening ey is constantly pumping out in terms of how they're making their company grow and that's generally what I believe because I believe that once we start to get to those levels where um AI systems are really Advanced I mean if they're Advanced they're probably going to be able to develop a security risk um and at that stage I'm sure opening eye doesn't want to get shut down so they're probably not going to even risk that at all so I mean this is be kind of interesting we covered whistleblowers we covered uh deleting the model weights from the president um emerging capabilities not being something that you can be protected by and you know chips becoming essentially weapons tracking all of the hardware you know 90 days to approve a model then of course the medium concern AI where essentially you need to definitively prove that your AI system is safe of course the fast track exemption for narrow AI systems and of course stopping early training if you realize your AI system is too advanced and of course I'm talking about flops not equaling ability so let me know what you actually thought about this do you think that this is something to be worked about of course this is just a proposal but I do think that it goes to show how the landscape is changing in terms of AI abilities and what could be for the future of AI

---
*Источник: https://ekstraktznaniy.ru/video/14369*