Open AI Team Surprises Everyone With Statement On Artificial Superintelligence
Duration: 8:48

TheAIGRID · 28.05.2023 · 69,706 views · 955 likes


Video description
https://openai.com/blog/governance-of-superintelligence Exclusive VidIQ Deal (Only $1) - https://vidiq.com/theaigrid/ Welcome to our channel where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos. Was there anything we missed? (For Business Enquiries) contact@theaigrid.com #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience #IntelligentSystems #Automation #TechInnovation

Table of contents (2 segments)

Segment 1 (00:00 - 05:00)

Sam Altman, the CEO of OpenAI, Greg Brockman, the chairman of OpenAI, and Ilya Sutskever, the chief scientist of OpenAI, all published a blog post today on a very important threat regarding artificial intelligence that many people haven't really covered, something that goes beyond our current understanding and current levels of artificial intelligence: artificial superintelligence. In this video we'll discuss exactly why this is such an important post and the key points they made. But before we dive in, I need to explain one of the key terms mentioned in this blog post, and that term is superintelligence.

Superintelligence is very different from artificial general intelligence, which you have likely heard about over recent weeks. AGI is what we are somewhat close to: a form of artificial intelligence that can perform at human level across every single task. What this blog post talks about is superintelligence, which refers to a hypothetical form of artificial intelligence that surpasses human intelligence in virtually every aspect. It represents an AI system exhibiting intellectual capabilities far superior to those of human beings across a wide range of tasks and domains. The concept is often associated with the potential future development of AI systems that possess a level of general intelligence comparable to, or beyond, human intelligence. The development of this kind of intelligence would have profound implications for society, as such a system might be able to rapidly improve or replicate itself, leading to ever greater capabilities. Of course, this is still hypothetical, but it is a real possibility in the future as we train these large language models, they get increasingly smarter, and emergent capabilities appear.

Right here you can see the post is titled "Governance of superintelligence," and it says that now is a good time to start thinking about the governance of superintelligence: future AI systems dramatically more capable than even AGI. I think it's interesting that they are actually thinking about this, because AGI isn't even here yet; but it definitely suggests we are really close to AGI if they are thinking about governance of ASI now. The post goes on to say that, given the picture as we see it now, it's conceivable that within the next ten years AI systems will exceed expert level in most domains and carry out as much productive activity as one of today's largest corporations. That essentially means these AI systems will be able to do what an entire company can do now, and that is insane, because with exponential growth the upside is effectively unlimited. It continues: in terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future, but we have to manage the risk to get there; given the possibility of existential risk, we can't just be reactive. By "existential risk" they mean this could pose a threat to the entirety of humanity, which means you can't simply wait for AI to do something wrong before you introduce regulations. A common example is air travel: the TSA didn't exist until severe events forced the aviation industry to get much stricter with its rules, but now, as you know, flying is heavily regulated. The thing with AI, though, is that if you wait for something to go wrong before implementing rules or policy, it might already be too late, because the risk is existential, meaning it could result in an extinction-level event.

The post then notes that nuclear energy is a commonly used historical example of a technology with this property, and synthetic biology is another. We must mitigate the risks of today's AI technology too, but superintelligence will require special treatment and coordination. For those of you who think this might be an exaggeration, take a look at this clip of Sam Altman testifying in front of Congress about AI safety and regulation, informing them about what the future of AI is going to be:

"Look, we have tried to be very clear about the magnitude of the risks here. I think jobs and employment and what we're all going to do with our time really matters. I agree that when we get to very powerful systems the landscape will change; I think I'm just more optimistic that we are incredibly creative and we find new things to do with better tools, and that will keep happening. My worst fears are that we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways. It's why we started the company; it's a big part of why I'm here today and why we've been here in the past and been able to spend some time with you. I think if this technology goes wrong, it can go quite

Segment 2 (05:00 - 08:00)

wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening. But we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that."

So it's clear from that statement from Sam Altman that the time to regulate AI is now. We simply can't waste time debating whether certain countries should start to regulate AI; we need to do it as soon as possible, before repercussions we may not even have thought of can happen. One small example of bad actors taking advantage of this very efficient technology: AI image generation software has recently been getting remarkably good, and someone, though we aren't sure of the source, was able to briefly rattle the stock market with an AI-generated image. The AI-generated image you're seeing on screen managed to spread so quickly that the markets plunged a significant number of points. Once it was revealed that the image was entirely AI-generated and there was no explosion near the Pentagon, the S&P and Bitcoin rallied back to their usual levels. But this goes to show what scenarios become possible when many different bad actors have access to these tools and can distribute such images, use AI software to attack a platform, or do almost anything else the increasing capabilities of AI allow.

Something recently brought up as one of the capabilities bad actors could use AI for is, of course, biological agents: biological warfare, creating harmful pathogens, or engineering pandemics. This isn't just some hypothetical scenario; it has been demonstrated before. You can see the headline: "Weaponizing AI: machine learning model creates 40,000 new chemical weapons in six hours." The sheer capability of machine learning allows it to explore millions of different combinations, letting it find workable compounds that can harm humans. And of course, humans aren't the best at such time-intensive work; this is where machine learning and AI come in, performing many of these computations while we do other things. The article goes on to explain how this was done: Collaborations Pharmaceuticals retooled an AI drug discovery system to successfully identify 40,000 new potential chemical weapons in just six hours. This is pretty crazy, because others are raising the same alarm: a bioweapon designed by AI is "a very near-term concern," Schmidt says. Artificial intelligence could bring about biological conflict, said former Google chief executive Eric Schmidt, who co-chaired the National Security Commission on Artificial Intelligence.

Then, of course, we have OpenAI's statement, which continues: "Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence." What they're saying is that superintelligence is coming and we can't simply stop it, because the genie is out of the bottle: transformers are public, people understand how to build them, and people understand how to train these large language models. Machine learning is going to advance year over year to the point where, eventually, after AGI, the next step is artificial superintelligence. They're saying we need to govern it, because stopping it is pretty much impossible. As we saw from the open letter, even if OpenAI stopped training GPT-5, which would of course be the most powerful model, other companies and smaller organizations aren't going to stop training theirs. So right now we do need some regulatory oversight; if we don't get it, this is where things get out of hand.
