# Ilya Sutskever Finally Reveals What's Next In AI... (Superintelligence)

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=9zPOMURyqWE
- **Date:** 12.11.2024
- **Duration:** 9:48
- **Views:** 60,213

## Description

Prepare for AGI with me - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/


Links From Today's Video:
https://x.com/slow_developer/status/1851635684081127691
https://x.com/ClintonBWill/status/1666225432876703750

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

Music Used

LEMMiNO - Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO - Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=9zPOMURyqWE) Segment 1 (00:00 - 05:00)

"Well, if you think about the evolutionary history of humanity: four billion years ago there was a single cell, some kind of replicator. Then, over billions of years, you had various single-celled organisms. About a billion years ago you had multicellular life; several hundred million years ago, reptiles; sixty million years ago, mammals; ten million years ago, primates; one million years ago, Homo sapiens. Then, ten thousand years ago, we had writing, then the farming revolution, the Industrial Revolution, the technological revolution, and now, finally, AGI, the superintelligence. It is the final, the ultimate challenge. It can create a life of unimaginable prosperity, which Sam alluded to, but it is also a great challenge."

So this is the article where Ilya Sutskever finally speaks after a long hiatus. The article is from Reuters, and it essentially talks about how OpenAI and others are seeking a new path to smarter AI as current methods hit their limitations. Now, if you're wondering who Reuters are and whether they're a reliable source: if you remember what happened quite some time ago, Reuters were the company that gave everyone the piece of information which, I don't want to say broke the internet, but was really important to understanding where the pace of AI development was at the time. That was the article that warned of Q* — it reported that, ahead of OpenAI's days of turbulence, there was a letter to the board warning the directors of a powerful AI discovery that could threaten humanity, and I remember that's why Q* started going completely viral.

In this article, they focus on OpenAI and the entire AI industry. They're basically saying that these companies are facing unexpected delays and challenges in the pursuit of ever bigger LLMs, and are responding by developing training techniques that use more human-like ways for algorithms to think. If you've been paying attention to the AI space, you know exactly the kind of methods being used right now, which essentially have the models think at inference time in order to generate a more coherent and accurate response. The crazy thing is that although this new paradigm exists and is going to be scaled up — they're talking about the GPT series here — some of the most prominent AI scientists are now speaking about the limitations of the "bigger is better" philosophy.

This is the part where they talk about how, when ChatGPT was released, the main assumption was that scaling up current models by adding more data and compute would consistently lead to improved AI models — basically the law that the more data you add, the better these models get in terms of coherence, response quality, and how smart they are. But it now seems there are several limitations to this "bigger is better" philosophy, which does make sense. And it isn't only Reuters that have commented on this phenomenon. If you watched my video from two or three days ago, you'll remember I spoke about an article from The Information, which said that OpenAI's next model, Orion, isn't reliably better than its predecessor at handling certain tasks, according to employees. Apparently Orion is performing better at language tasks but isn't performing better at coding tasks, and this Orion situation could test a core assumption of the AI field known as the scaling laws — which is why companies are now moving towards improving reasoning after initial training.

Now, most of you might be wondering what exactly Ilya Sutskever said. I think his statement is really important, because Sutskever is one of the innovators in the field of AI and was integral to OpenAI's success. He basically said that the results from scaling up pre-training — the phase where the AI model uses a vast amount of unlabeled data to understand language patterns and structures — have plateaued. So apparently growth in this area is slowing down, and Sutskever is now saying that pre-training isn't as effective as it was in the early days. The crazy thing is that, if you've been paying attention to what's going on at other companies too, a couple of days ago there was an article from The Verge about how even Google are facing these kinds of issues. The Verge commented on the next version of Gemini — apparently Gemini is going to be released widely soon — and it
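The "bigger is better" assumption the article questions is usually formalized as a power-law scaling law, where predicted test loss falls smoothly as parameter count grows. A minimal sketch of that shape — the constants `n_c` and `alpha` below are illustrative placeholders, not the published values from any particular scaling-law paper:

```python
# Illustrative power-law scaling law: loss(N) = (N_c / N) ** alpha.
# n_c and alpha here are hypothetical constants chosen for demonstration.

def scaling_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted test loss for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

# Each doubling of parameters shrinks loss by the same constant factor
# (2 ** -alpha), so absolute gains get smaller as models grow -- the
# diminishing-returns shape consistent with the "plateau" the article describes.
for n in [1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} params -> predicted loss {scaling_loss(n):.3f}")
```

Because the improvement per doubling is constant in *ratio* but shrinking in *absolute* terms, ever-larger models buy ever-smaller visible gains — which is one way to read the reported Orion results.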

### [5:00](https://www.youtube.com/watch?v=9zPOMURyqWE&t=300s) Segment 2 (05:00 - 09:00)

says here that "I've heard that the model isn't showing the performance gains the Demis Hassabis-led team had hoped for, though I would still expect some interesting new capabilities," and that "the chatter I'm hearing in AI circles is that this trend is happening across companies developing leading large models." So this shows us that this isn't just an OpenAI situation or an Ilya Sutskever situation — this is a phenomenon now spreading across AI.

If you're wondering why some of these companies are facing so many issues with scaling up these models, some people believe scaling won't lead to greater intelligence at all, just better memorization machines: "If you scale up the size of your database and you cram into it more knowledge, more patterns and so on, you are going to be increasing its performance as measured by a memorization benchmark — that's kind of obvious — but as you're doing it, you are not increasing the intelligence of the system one bit. You are increasing the skill of the system, you are increasing its usefulness, its scope of applicability, but not its intelligence, because skill is not intelligence, and that's the fundamental confusion people run into: they're confusing skill and intelligence."

Now, I don't think this is as bad as it might seem, because there's additional information I need to share with you. The article continues to talk about how Sutskever was widely credited as an early advocate for achieving massive leaps in generative AI through the use of more data and compute in pre-training when they created ChatGPT, and of course he has since left to start his own company, Safe Superintelligence. However, this is what Sutskever is saying now: "The 2010s were the age of scaling; now we're back in the age of wonder and discovery once again. Everyone is looking for the next thing." And he said "scaling the right thing matters more now than ever" — which means we've entered the next paradigm of AI, and it's quite likely we're at the bottom of an S-curve of growth once again. I'll explain that with a small diagram.

So, when we look at the kind of growth we have: basically there's an S-curve. You have this first area where you start, you go up, and then there's a decline or plateau — and I want you to focus on the bubbles of innovation that appear as these S-curves stack on top of each other, because that's what we're seeing with these LLMs right now. If you remember, we had GPT-2, then GPT-3, and the reason we had that major jump from GPT-3 to GPT-4 is that with S-curve growth you get those major jumps. It seems that now, leading up to Orion, there is a slow petering off as we get to the top of the benchmarks and things start to slow down. But that doesn't mean things are slowing down overall, because we're now at a new paradigm where, once again, we have a bubble of innovation: the o1 series, then an o2 series, and things are going to start to get crazy once we get to an o3 and potentially an o4 series, before once again potentially petering out ahead of another S-curve of growth that could lead to ASI. This is how a lot of graphs depict this kind of growth — I just created this one as a visual demonstration to show you exactly what's going on. When we look at 2025, 2026 and 2027, these could be the short paradigms where we have these S-curves of growth, and these bubbles here and here where we switch innovation.

So right now we're switching innovation to the test-time compute paradigm, and I do wonder what comes after that, because it might lead to ASI. Now, Ilya also talks about what superintelligence is: "Why did we choose to use the term superintelligence? The reason is that superintelligence is meant to convey something that's not just like an AGI. With AGI we said, well, you have something kind of like a person, a coworker. Superintelligence is meant to convey something far more capable than that. When you have such a capability, can we even imagine how it will be? Without question it's going to be unbelievably powerful. It could be used to solve incomprehensibly hard problems, if it is used well. If we navigate the challenges that superintelligence poses, we could radically improve the quality of life. But the power of superintelligence is so vast
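The stacked S-curve picture sketched on the diagram above can be expressed numerically: treat each paradigm (pre-training scaling, then test-time compute) as a logistic curve, and total capability as their sum, so progress looks like alternating jumps and plateaus. All midpoints, rates, and heights below are invented purely for illustration — they are not measurements of any real model's capability:

```python
import math

def logistic(t: float, midpoint: float, rate: float = 2.0, height: float = 1.0) -> float:
    """One S-curve: slow start, rapid rise around `midpoint`, then a plateau."""
    return height / (1.0 + math.exp(-rate * (t - midpoint)))

def capability(t: float) -> float:
    """Hypothetical total capability as two stacked paradigms.
    Curve 1 stands in for pre-training scaling (the GPT-2 .. GPT-4 era),
    curve 2 for the test-time compute paradigm (the o-series era).
    Parameters are made up for demonstration only."""
    pretraining = logistic(t, midpoint=2021.0)  # rises, then plateaus mid-decade
    test_time = logistic(t, midpoint=2026.0)    # the next S-curve kicks in later
    return pretraining + test_time

# Crude ASCII plot: a plateau appears between the two S-curves.
for year in range(2019, 2029):
    bar = "#" * round(20 * capability(year) / 2.0)
    print(f"{year}: {bar}")
```

The point of the toy model is that a plateau in the summed curve doesn't mean progress has stopped — it can simply mean one paradigm has saturated before the next one's steep region begins.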

---
*Source: https://ekstraktznaniy.ru/video/13758*