Ex-OpenAI Employees Just EXPOSED The Truth About AGI....

TheAIGRID · 27.10.2024 · 127,860 views · 2,326 likes


Video Description
Prepare for AGI with me - https://www.skool.com/postagiprepardness
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

0:00 Opening Introduction
3:25 Insider Perspectives
8:08 Model Predictions
12:22 Whistleblower Testimony
13:08 Safety Concerns
15:34 Board Oversight
20:32 Watermark Technology
24:28 Google SynthID
28:50 Team Departures
31:46 Legal Restrictions
34:08 AGI Timeline
37:44 Task Specialization

Links From Today's Video:
https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed?

(For Business Enquiries) contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Contents (12 segments)

Opening Introduction

But many top AI companies, including OpenAI, Google, and Anthropic, are treating building AGI as an entirely serious goal, a goal that many people inside those companies think they might reach in 10 or 20 years, and that some believe could be as close as one to three years away. More to the point, many of these same people believe that if they succeed in building computers that are as smart as humans, or perhaps far smarter than humans, that technology will be at a minimum extraordinarily disruptive, and at a maximum could lead to literal human extinction.

The advent of AGI could have some incredible benefits, but it also poses some unprecedented risks. This Senate Judiciary hearing provides a chilling glimpse into the current state of AI safety, featuring testimony from former insiders at leading AI companies like OpenAI, Google, and Meta. These whistleblowers reveal a concerning disconnect between public perception and the reality within these organizations, exposing their security practices, the prioritization of profit over safety, and a race to deploy potentially dangerous technology before adequate safeguards are in place. So in this video you'll actually see a few of the policies being proposed that will likely shape how AI development changes over the coming years, and likely how we begin to use this technology in more advanced ways.

We're here today to hear from you, because every one of the witnesses we have today are experts who were involved in developing AI on behalf of Meta, Google, and OpenAI, and you saw firsthand how those companies dealt with safety issues. The incentives for a race to the bottom are overwhelming, and companies, even as we speak, are cutting corners and pulling back on efforts to make sure that AI systems do not cause the kinds of harm that even Sam Altman thought were possible. We're already seeing the consequences. Artificial general intelligence, or AGI, which I know our witnesses are going to address today, provides even more frightening prospects for harm. The idea that AGI might in 10 or 20 years be smarter than, or at least as smart as, human beings is no longer that far out in the future. It's very far from science fiction; it's here and now. One to three years has been the latest prediction, in fact, before this committee. And we know that artificial intelligence that's as smart as human beings is also capable of deceiving us, manipulating us, and concealing facts from us.

So now this is where we actually get into some of these statements from the insiders at these top tech companies. There are a number of different people here. First we have Helen Toner, who used to serve on the board of OpenAI; we also have William Saunders, a former member of technical staff at OpenAI; we have Margaret Mitchell, a former staff research scientist at Google AI; and David Evan Harris, a senior policy adviser for the California Initiative for Technology and Democracy, who used to work at Meta.

Insider Perspectives

So you're going to hear different statements from all of these individuals about how they think the AI industry is changing, but the ones from those who worked at OpenAI are particularly fascinating.

The title of this hearing is "Oversight of AI: Insiders' Perspectives," and the biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence, or AGI. This term AGI isn't well defined, but it's generally used to mean AI systems that are roughly as smart or capable as a human. In public and policy conversations, talk of human-level AI is often treated as either science fiction or marketing hype, but many top AI companies, including OpenAI, Google, and Anthropic, are treating building AGI as an entirely serious goal, a goal that many people inside those companies think they might reach in 10 or 20 years, and that some believe could be as close as one to three years away. More to the point, many of these same people believe that if they succeed in building computers that are as smart as humans, or perhaps far smarter than humans, that technology will be at a minimum extraordinarily disruptive, and at a maximum could lead to literal human extinction.

The companies in question often say that it's too early for any regulation, because the science of how AI works and how to make it safe is too nascent. I'd like to restate that in different words. They're saying: we don't have good science of how these systems work or how to tell when they'll be smarter than us; we don't have good science for how to make sure they won't cause massive harm; but don't worry, the main factors driving our decisions are profit incentives and unrelenting market pressure to move faster than our competitors, so we promise we're being extra safe. Whatever these companies say about it being too early for any regulation, the reality is that billions of dollars are being poured into building and deploying increasingly advanced AI systems, and these systems are affecting hundreds of millions of people's lives, even in the absence of scientific consensus about how they work or what will be built next. So I would argue that a wait-and-see approach to policy is not an option.

I want to be clear: I don't know how long we have to prepare for smarter-than-human AI, and I don't know how hard it will be to control it and ensure that it's safe. As I'm sure the committee has heard a thousand times, AI doesn't just bring risks; it also has the potential to raise living standards, help solve global challenges, and empower people around the world. If the story were simply that this technology is bad and dangerous, then our job would be much simpler. The challenge we face is figuring out how to proactively make good policy despite immense uncertainty and expert disagreement about how quickly AI will progress and what dangers will arise along the way.

So this is where we get Helen Toner's actual proposal, in which she gives an outline of certain policies that she thinks will actually help the AI industry in terms of regulation, but most importantly will regulate without slowing things down. And I think that's one of the most important things, because the AI industry is something that moves rapidly, more rapidly than you could even think.

The good news is that there are light-touch, adaptive policy measures we can adopt today that can both be helpful if we do see powerful AI systems soon, and also be helpful with many other AI policy issues that I'm sure we'll be discussing today. I want to briefly highlight six policy building blocks that I describe in more detail in my written testimony. First, we should be implementing transparency requirements for developers of high-stakes AI systems. We should be making major research investments in how to measure and evaluate AI, as well as how to make it safe. We should be supporting the development of a rigorous third-party audit ecosystem, bolstering whistleblower protections for employees of AI companies, increasing technical expertise in government, and clarifying how liability for AI harms should be allocated. These measures are really basic first steps that would in no way impede further innovation in AI. This kind of policy is about laying some minimal, common-sense groundwork to help us get a handle on the AI harms we're already seeing, and also set us up to identify and respond to new developments in AI over time. This is not a technology we can manage with any single piece of legislation, but we're long overdue to implement some of these basic building blocks as a starting point.

Next we'll hear from William Saunders, a former OpenAI employee who worked with Jan Leike on the superalignment team. This testimony is rather surprising, as he discusses the recent o1 model and its capabilities.

Model Predictions

For three years, I worked as a member of technical staff at OpenAI. Companies like OpenAI are working towards building artificial general intelligence, AGI. They are raising billions of dollars towards this goal. OpenAI's charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work." This means AI systems that could act on their own over long periods of time and do most jobs that humans can do. AI companies are making rapid progress towards building AGI. A few days before this hearing, OpenAI announced a new system, o1, that passed significant milestones, including one that was personally significant for me. When I was in high school, I spent years training for a prestigious international computer science competition. OpenAI's new system leaps from failing to qualify to winning a gold medal, doing better than me in an area relevant to my own job. There are still significant gaps to close, but I believe it is plausible that an AGI system could be built in as little as three years.

AGI would cause significant changes to society, including radical changes to the economy and employment. AGI could also cause catastrophic harm via systems autonomously conducting cyberattacks or assisting in the creation of novel biological weapons. OpenAI's new AI system is the first system to show steps towards biological weapons risk, as it is capable of helping experts in planning to reproduce a known biological threat. Without rigorous testing, developers might miss this kind of dangerous capability. While OpenAI has pioneered aspects of this testing, they've also repeatedly prioritized speed of deployment over rigor. I believe there is a real risk they will miss important dangerous capabilities in future AI systems.

AGI will also be a valuable target for theft, including by foreign adversaries of the United States. While OpenAI publicly claims to take security seriously, their internal security was not prioritized. When I was at OpenAI, there were long periods of time where there were vulnerabilities that would have allowed me, or hundreds of other employees at the company, to bypass access controls and steal the company's most advanced AI systems, including GPT-4.

No one knows how to ensure that AGI systems will be safe and controlled. Current AI systems are trained by human supervisors giving them a reward when they appear to be doing the right thing. We will need new approaches when handling systems that can find novel ways to manipulate their supervisors or hide misbehavior until deployed. The superalignment team at OpenAI was tasked with developing these approaches, but ultimately we had to figure things out as we went along, a terrifying prospect when catastrophic harm is possible. Today that team no longer exists; its leaders and many key researchers resigned after struggling to get the resources they needed to be successful. OpenAI will say that they are improving; I and other employees who resigned doubt they will be ready in time. And this is true not just of OpenAI: the incentives to prioritize rapid deployment apply to the entire industry. This is why a policy response is needed.

My fellow witnesses and I may have different specific concerns with the AI industry, but I believe we can find common ground in addressing them. If you want insiders to communicate about problems within AI companies, you need to make such communication safe and easy. That means a clear point of contact and legal protections for whistleblowing employees. Regulation must also prioritize requirements for third-party testing, both before and after deployment, and results from these tests must be shared. Creating an independent oversight organization and mandated transparency requirements, as in Senator Blumenthal and Senator Hawley's proposed framework, would be important steps towards these goals. I resigned from OpenAI because I lost faith that, by themselves, they will make responsible decisions about AGI. If any organization builds technology that imposes significant risks on everyone, the public must be involved in deciding how to avoid or minimize those risks. That was true before AI; it needs to be true today with AI. Thank you for your work on these issues, and I look forward to your questions.
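Saunders' line about systems being trained by "human supervisors giving them a reward when they appear to be doing the right thing" is worth making concrete. Here's a toy sketch in Python, purely my own illustration (not OpenAI's training code, and every name in it is made up), of the failure mode he's pointing at: if reward comes from a supervisor who can only judge how an answer looks, selection favors whatever looks right.

```python
# Toy sketch, purely illustrative: a supervisor who rewards how an answer
# *appears* ends up selecting for behavior that games appearances.
def supervisor_reward(answer: dict) -> float:
    # The human rater sees style, not ground truth.
    return 1.0 if answer["sounds_confident"] else 0.2

def true_quality(answer: dict) -> float:
    return 1.0 if answer["is_correct"] else 0.0

candidates = [
    {"name": "hedged but correct", "sounds_confident": False, "is_correct": True},
    {"name": "confident but wrong", "sounds_confident": True, "is_correct": False},
]

# "Training" here is reduced to: keep whichever behavior earns the most reward.
chosen = max(candidates, key=supervisor_reward)
print(chosen["name"], "| true quality:", true_quality(chosen))
# -> confident but wrong | true quality: 0.0
```

That gap between the reward signal and true quality is exactly why he argues new approaches are needed for systems smart enough to manipulate their supervisors.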

Whistleblower Testimony

So that was pretty crazy. I mean, it just goes to show the pace of AI development: even while working at OpenAI, some of the things that happen can truly surprise you. Next we get some information from Helen Toner about the deployment of GPT-4, and how internally they actually managed to deploy this system early, against their standard safety procedure.

Situations like the process known as the Deployment Safety Board, which was set up internally, a great idea for something to try to coordinate safety between OpenAI and Microsoft when using OpenAI products. It's been reported publicly that in the early days of that process, Microsoft were in the midst of planning a very big, very important launch: the launch of GPT-4.

Safety Concerns

And Microsoft went ahead and launched GPT-4 to tens of thousands of users in India without getting approval from that Deployment Safety Board. Another example would be that, since I stepped off the board, there have been concerns raised from inside the company in the lead-up to the launch of their 4o model, which was the voice assistant that had the very exciting launch videos and was launched the day before a Google event that OpenAI knew might upstage them. There have been concerns raised from inside the company about their inability to fully carry through the kinds of safety commitments the company had made in advance of that. There are additional examples, but I think those two illustrate the core point.

Next, this is where we get into some really interesting material, because they actually talk about some of the things that have been going on at OpenAI with Sam Altman. As many of you know, he was actually in the same building last year testifying about AI safety and why, of course, you need rules and regulations. So now, a year later, people from the board are stepping down and saying, look, things aren't exactly going to plan. It really is interesting to see how everything has evolved since then, and it will be interesting to see exactly how they discuss these issues.

You left the OpenAI board, and one of the reasons that you did so is you felt you couldn't do your job properly, meaning you couldn't effectively oversee Mr. Altman and some of the safety decisions that he was making. You had said this year, and I'm just going to quote you, that Mr. Altman "gave inaccurate information about the small number of formal safety processes that the company did have in place," that is, that he gave incorrect information to the board. To the extent you're able, can you just elaborate on this? I'm interested in what's actually being done for safety inside this company, in no small part because of what he told us when he sat where you're sitting.

Thank you, Senator. Yes, I'm happy to elaborate to the extent that I can without breaching any confidentiality obligations. I believe that when the company has safety processes, they announce them loudly and proudly, so I believe that you and your staff would be aware of the processes they have in place. At the time, one that I was thinking of, which was one of the first formal processes that I'm aware of, was this Deployment Safety Board that I just discussed, and this breach by Microsoft that took place in the early days there.

Board Oversight

Since then, they have introduced a preparedness framework. I want to commend many of these companies for taking some good steps; I think the idea behind the preparedness framework is good, and to the extent they execute on it, that's great. But there have been concerns raised about how well they're able to comply with it. It's also been publicly reported that the really respected expert they brought in to run that team has since been reassigned from that role, and I worry what that means for the influence that team is able to exert on the rest of the company. I think that is illustrative as well of a larger dynamic that I'm sure all the witnesses here today have observed, which is that there are really great people inside all of these companies trying to do really great things. The challenge is that if everything is up to the companies themselves, and to leadership teams who need to make trade-offs around getting products out, making profits, and attracting new investors, those teams may not get the resourcing, the time, the influence, the ability to actually shape what happens that they need. So many of the dynamics that I witnessed echo very much what I'm hearing from my fellow witnesses.

That's very helpful. This is where we get into a direct quote from Sam Altman previously, when he was here, and it actually covers what he said about the specific guidelines that OpenAI follows. Of course, I do know about OpenAI's preparedness framework and how they actually utilize it to ensure that all models that are deployed are ones that are safe.

Let me just ask you this, let me just put a finer point on it, because Mr. Altman, as I said, testified to us in this connection this past year. Here's part of what he said: "We," meaning OpenAI, "make significant efforts to ensure that safety is built into our systems at all levels." And then he went on to say, I'm still quoting him, "Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model's behavior, and implements robust safety and monitoring systems." In your experience, is that accurate?

I believe it is possible to characterize the company's activities accurately that way, yes. The question is: how much is enough, who is making those decisions, and what incentives are driving those decisions? In practice, if you make a commitment, you have to write that commitment down in some words, and then when you go to implement it there are going to be a lot of detailed decisions you have to make about what information is shared with whom and at what time, and who is brought into the right room to make a certain decision. Is your safety team, whatever kind of safety team it might be, brought in from the very beginning to help with conception of the product, to really think from the start about what implications it might have? Or are they handed something a couple of weeks before a launch deadline and told, okay, make this as good as you can? Here I'm not trying to refer to any specific incidents at OpenAI; I'm really referring to examples that I've heard reported publicly and heard from across the industry. There are good efforts, but I worry that if we rely on the companies to make all of those trade-offs, all of those detailed decisions about how those commitments are implemented, they're just unable to fully account for the interests of a broad public. And you hear this as well from people; I've heard this from people in multiple companies, sentiment along the lines of: please help us slow down, please give us guardrails that we can point to that are external, that help us not be subject only to these market pressures.

This is one of the most fascinating parts of this talk, because he actually asks whether OpenAI is essentially doing enough, and Helen Toner gives a response that, I mean, depends on whether or not you think OpenAI is telling the truth, because OpenAI has predicted certain things about their future models, capabilities that are going to be pretty incredible if true. And apparently, if those predictions are correct, then we might not be doing enough. So it depends on whether or not you think OpenAI is just leading a hype train, or you think they actually have the next best model.

In general, is your impression now, was OpenAI doing enough in terms of its safety procedures and protocols to adequately vet its own products and to protect the public? I think it depends entirely on how rapidly their research progresses. If their most aggressive predictions of how quickly their systems will get more advanced are correct, then I have serious concerns. Their most aggressive predictions may well be wrong, in which case I'm somewhat less concerned.

So this is where we get into the next section of the talk, and this one is a little bit different, because this is where we get David Evan Harris speaking about the kinds of policies that are potentially going to be implemented.

Watermark Technology

It contains the key elements that I would recommend. Even if I had not read it, I would have said: licensing and registration of AI systems and the companies that make them; liability, clearly holding AI companies liable for the products that they make; and provenance, giving people the ability to know what content is produced by AI and what by humans. Those are just a few pieces that are already in the framework, and I'm excited to see legislative text with the details.

Now David Evan Harris actually continues to talk about the future of AI watermarking and the fact that currently there's no real way to tell if an output is AI-generated or not. For those of you who have been keeping up with AI, you'll know that, for example, Flux 1.1 has been absolutely incredible in terms of the realism it's able to generate in its images, and of course in the future we know this is only likely to get better. So they're proposing ideas, essentially discussing ways we could potentially get watermarking into advanced models, for videos, images, and of course text. And right now it only seems like there's one company, which is Google, that is making any real efforts on that front.

There are multiple elements of provenance technologies. One is disclosure, and yes, that is saying that if a company uses AI and has AI interacting with you, it should disclose that you are interacting with an AI system or a bot. But another element... So that's number one: notice. If I'm interfacing with a robot, or with artificial intelligence, the owner of that artificial intelligence should tell me as a consumer, "I'm a robot." Yes, absolutely. Okay, that's number one. Tell me what number two is.

So number two is sometimes referred to as watermarking. I know that this committee heard testimony in April about this topic of watermarking, and watermarking can happen in one of two ways. It can be a visible or direct disclosure that content, for example an AI-generated image, an AI-generated audio file, video, or even text, is AI-generated. You've seen watermarks that... I get it, so that's another form of notice. You could call it that, but that's direct watermarking. Then there's also another technique, which is a more indirect disclosure: hiding an invisible signal within a piece of text that's generated by AI, or within an image, as a hidden pattern of pixels.

What good does that do in terms of a consumer? Well, the good thing about that is that it can be much more difficult to remove than simply a notice at the top or bottom of a picture saying "this was produced by AI," which you could just crop out very easily. So watermarks have value in that sense. There's also another technology that was also discussed here in April, which is called digital fingerprinting. This technology has been used to keep track of child sexual abuse material and terrorist content circulating online. It creates what's called a hash, a unique identifier for images or audio files (it could even be done with text or videos), and that's stored in a database, so a piece of content can be associated with that identifier and then recognized as AI-generated.

All of these are forms of notice, some more direct than others; I would call them forms of "I'm not trying to trick you." I'm trying to understand: are there any companies that are giving the world proper notice right now? Anybody? So I'm happy to answer that. Sure, so according to their public statements... Yes or no, please, sir, because I've got other ground to cover.
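To make those last two ideas concrete, here's a minimal sketch in Python of an invisible "hidden pattern of pixels" and a hash-based fingerprint. This is purely illustrative, not any company's production system: real watermarks use far more robust embeddings, and fingerprinting services typically use perceptual hashes that survive edits, not the plain cryptographic hash shown here.

```python
# Minimal, purely illustrative sketch of two provenance techniques described
# above: an invisible watermark hidden in the least significant bit of each
# pixel, and a hash-based "digital fingerprint" for a provenance database.
import hashlib

def embed_watermark(pixels: list[int], bits: list[int]) -> list[int]:
    """Hide a bit pattern in the least significant bit of each pixel value.
    The change is imperceptible to a viewer but recoverable by a detector."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return marked

def extract_watermark(pixels: list[int], n_bits: int) -> list[int]:
    return [p & 1 for p in pixels[:n_bits]]

def fingerprint(content: bytes) -> str:
    """An identifier stored in a database so the same content can be
    recognized later. Real services use perceptual hashes that tolerate
    cropping and compression, not plain SHA-256 as simplified here."""
    return hashlib.sha256(content).hexdigest()

# A toy 8-pixel grayscale "image" and an 8-bit hidden signal.
image = [200, 201, 198, 197, 203, 202, 199, 200]
signal = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_watermark(image, signal)
assert extract_watermark(marked, 8) == signal   # the hidden signal survives
print(fingerprint(bytes(marked)))               # identifier for the database
```

Note the trade-off the witness describes: a visible notice at the top of an image can be cropped out, while a signal spread through the pixels, or a fingerprint held in an external database, is much harder to strip.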

Google SynthID

Google DeepMind. SynthID is the name of the technology. Looks good; I haven't tested it, but they're trying.

So take a look at this: this is Google SynthID. Now, I knew about this before, because I've been paying attention to the AI industry, but I didn't realize that in October, which was this month, Google actually managed to update this to be able to watermark pretty much any kind of AI-generated content, which is pretty crazy. I personally think this kind of research is necessary, because the future is going to be a really big blur of AI-generated images, text, and audio, and humans are going to have a really hard time discovering whether something is AI-generated or not. Google has a short video that I'm going to play in a moment, and I think it's really fascinating, because it has watermarking for AI-generated text (there's a really cool way it manages to do this with text, which I'm going to explain in a minute), it's got it for music, and it's also got it for AI-generated images and video, which is like a digital secret watermark. I personally think this is needed. Some people would say it's not needed, right up until you are a victim of someone who misuses AI-generated content, and I wouldn't want anyone to be a victim of that. So I think this is probably going to become part of what companies may have to do with their models in order to release them to the public, and I don't think that would be the worst thing ever, because it's rather important for protecting the public.

Is this real, or AI-generated? Sometimes it can be difficult to tell. For hundreds of years, humans have used watermarks to prove where content was created. One challenge is developing a mark that's hidden to viewers but visible to those looking for it. Enter SynthID by Google DeepMind. It can produce a digital watermark that's imperceptible to humans and works across Google products to tag AI-generated images, video, audio, and text. The watermark can withstand common editing techniques like reordering and trimming, adding noise, compression, cropping, and filters. How? For images, it can be embedded directly into the pixels. For video, it also marks every frame. For audio, SynthID converts the signal into a spectrogram, a visual representation of the sound waves, into which it embeds the watermark before converting everything back into a waveform. For text, SynthID looks at which words are going to be generated next and changes the probability of suitable word choices in a way that wouldn't affect the overall text's quality and utility. If a passage contains more instances of preferred word choices, SynthID will detect that it's watermarked. You can't read, hear, or see the watermarks, so you can fully enjoy your creations. SynthID technology is already used across Google's generative AI consumer products. SynthID is just one tool we're using to make sure generative AI tools are built with safety in mind from the beginning, empowering people and organizations to create responsibly.
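The text scheme is the subtlest part, so here's a minimal sketch in Python of the general idea as the video describes it. To be clear, this is not Google's actual SynthID code: the key, the candidate words, and the 50/50 "preferred" split are all my own simplifying assumptions.

```python
# Illustrative sketch of probability-based text watermarking (NOT Google's
# actual SynthID implementation): nudge generation toward a keyed set of
# "preferred" word choices, then detect the watermark by counting them.
import hashlib
import random

KEY = "demo-secret"  # hypothetical key shared by generator and detector

def is_preferred(prev_word: str, word: str) -> bool:
    """Pseudorandomly mark about half of all words as 'preferred',
    keyed on the secret and the previous word."""
    digest = hashlib.sha256(f"{KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def pick_next(prev_word: str, candidates: list[str], bias: float = 0.9) -> str:
    """Stand-in for a language model's sampling step: when several word
    choices are equally suitable, favor keyed ones with probability `bias`."""
    preferred = [w for w in candidates if is_preferred(prev_word, w)]
    if preferred and random.random() < bias:
        return random.choice(preferred)
    return random.choice(candidates)

def watermark_score(text: str) -> float:
    """Fraction of words falling in the keyed 'preferred' set. Unwatermarked
    text should score near 0.5; watermarked text noticeably higher."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_preferred(p, w) for p, w in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

Detection is then just statistics: over enough words, a score well above 0.5 is strong evidence of the watermark. This is also why the mark can survive trimming and light editing, since the signal is spread across the whole passage rather than stored in any one place.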
Obviously, OpenAI in particular has experienced a number of high-profile departures, including the head of its superalignment team, Jan Leike, who left to join rival Anthropic. Upon departing, he wrote on X, quote, "I have been disagreeing with OpenAI leadership about the company's core priorities for some time, until we finally reached a breaking point." He also wrote that he, quote, believes "much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, superalignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there." Let me ask all of you, based on your firsthand experiences: would you agree, essentially, with those points? Let me begin with you, Mr. Saunders, and go to the others if you have responses. Thank you, Senator.

Team Departures

Dr. Jan Leike was my manager for a lot of my time at OpenAI, and I really respected his opinions and judgment. I think what he was talking about was a number of issues where OpenAI is not ready to deal with models that have some significant catastrophic risks, such as high-risk models under the preparedness framework: things that could actually start to assist novices in creating biological weapons, or systems that could start conducting unprecedented kinds of cyberattacks. For those kinds of systems, first we're going to need to nail security, so that we make sure those systems aren't stolen, before we figure out what they can do, and used by people to cause harm. Then we're going to need to figure out how you actually deploy a system that under some circumstances could help someone construct a biological weapon, but that lots of people want to use for a bunch of other things that are good. Every AI system today is vulnerable to something called jailbreaking, where people can come up with some way to convince the system to provide advice and assistance on anything they want, no matter what the companies have tried to do so far. So we're going to need solutions to hard problems like these. And then we're going to need some way to deal with models that, again, might be smarter than the people supervising them and might start to autonomously cause certain kinds of risks. So I think he was speaking to a number of areas where the company was not being rigorous, even with the systems we currently have, which can maybe amplify some kinds of problems; but once we reach the point where catastrophic risk is possible, we're really going to need to have our act together.

So that was, of course, William Saunders talking about the fact that the superalignment team basically got completely disbanded, which was to everyone's surprise, considering that OpenAI at the time had decided to dedicate 20% of all compute to that issue. And then this is where we have William Saunders talk about one of the main problems: when employees tried to leave OpenAI (although this has since changed), there was a really restrictive non-disclosure, or non-disparagement, agreement that basically meant you lost all of your equity, which was really difficult and put people in a bad position if they left OpenAI and said anything bad about the company at the time.

Legal Restrictions

A number of you have mentioned whistleblowers and the need for protecting them. Maybe anyone who would like could expand on that point. You are all insiders who have left companies or disassociated yourselves from them in one way or another, and I'd be interested in your thoughts. Mr. Saunders?

Thank you, Senator. So when I resigned from OpenAI, I found that they gave every departing employee a restrictive non-disparagement agreement, and you would lose all the equity you had in the company if you didn't sign this agreement, under which you had to effectively not criticize the company and not tell anybody that you'd signed the agreement. I think this really opened my eyes to the kind of legal situation that employees face if they want to talk about problems at the company. And I think there are a number of important things that employees want in this situation. There's knowing who you can talk to: it's very unclear which parts of the government would have expertise in the specific kinds of issues you want to raise, and you want to know that you're going to talk to somebody who understands the issues that you have and has some ability to act on them. And then you also want legal protections, and this is where I think it's important to define protections that don't just apply when there's a suspected violation of law, but also when there's suspected harm or risk imposed on society. So that's why I think legislation needs to include establishing whistleblower points of contact and these protections.

Next, this is where we get into the timelines and some of the severe implications that could follow if, by the time of AGI, we don't actually have any guardrails in place. So this is one for those of you who are looking to see when AGI could potentially occur.

AGI Timeline

I think when I thought about this, there was at least a 10% chance of something that could be catastrophically dangerous within about three years, and I think a lot of people inside of OpenAI also would talk about similar things. And without knowing the exact details, it's probably going to be longer, but I did not feel comfortable continuing to work for an organization that wasn't going to take that seriously and do as much work as possible to deal with that possibility. And I think we should figure out regulation to prepare for that, because if it's not three years, it's going to be five years or ten years. This stuff is coming down the road, and we need to have some guardrails.

I think the good news I have for you is that the answer to that question might not be that important for you. The reason I say that is that my background working on AI is in other areas: not artificial general intelligence, but AI and bias, AI and deepfakes in elections, and more recently AI and harm to children. That said, I'm lucky to have an excellent colleague at UC Berkeley, Stuart Russell, who has also appeared before this committee. Stuart has invited me to a couple of his lab's conferences, where I am in a room surrounded by hundreds of people with computer science degrees and PhDs who have dedicated their lives and their careers to stopping artificial intelligence from eliminating our species. It was a little bit difficult for me, trained as a sociologist, to decide where I came down on this issue, but as the conversation moved to policy and what policies we need, what I found was that there was almost no disagreement between me, thinking about the problems that I've focused on in my work, and them: the same issues, liability, licensing, registration. Many of them are also actually quite concerned about deepfakes and deception, so provenance too. Those key solutions are the solutions we need to address the issues of AI bias, discrimination, disinformation, interference in elections, harm to children, and the specter of AGI, artificial general intelligence, or superintelligent AI being abused by bad actors to harm us in catastrophic ways.

So now, at the end, there was this very interesting discussion that was basically about the fact that the way we're developing AGI is pretty risky. If we just develop a model that is completely good at everything, this could have severe implications, because we might not know how it works or what areas it's bad at. But if we develop AGI that is task-specific, which I guess you could say is kind of similar to narrow AI, but more like an AGI for a given domain, like an AGI for, I don't know, economics, an AGI for healthcare, an AGI for YouTube, anything like that, then you're able to more successfully predict the harms of that software, and you're able to have a model that is rather safer. I'm not sure if this is how they're going to do it, but I wouldn't be surprised if there were really specific models that were super advanced, but only for specific things. And of course, there's the debate that if you want to actually get to those levels, you do need to have some kind of base level of intelligence, but it will be interesting to see how this evolves in the future.

Task Specialization

I think it's useful for me to say, as someone who works on AI and who has done a lot of work on rigorous evaluation and that sort of thing: I don't know what it means for an AI system to have human-level intelligence. What I do understand, and what I think is related, is having a system that can do well on a lot of different tasks that are also important for people, tasks that people also do, so that those kinds of comparisons can be made. I think we make a mistake when we group everything together and say this is basically like humans. And I want to say that I think the enterprise of AGI might be inherently problematic when it's not focusing on what the specific tasks are that the systems should be solving. So while there's been an interest in AGI so that lots of different things might be done, it might be beneficial to think about what those specific things are and to create task-built models for those specific tasks. That way we have a lot more control over what's happening, we can do much more rigorous analysis of inputs and outputs, and we keep things closed within the specific domains where a system is meant to be helpful.
