# New SELF IMPROVING AI, Major AI Breakthrough, New AI Robot + More (AI NEWS #17)

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=SZ7ycinkoxM
- **Date:** 13.10.2023
- **Duration:** 15:05
- **Views:** 9,552
- **Source:** https://ekstraktznaniy.ru/video/14725

## Description

https://twitter.com/emollick/status/1707076651320770870/photo/1 
https://twitter.com/hellokillian/status/1706771425909170600 
https://photoshop.adobe.com/discover 
https://twitter.com/rowancheung/status/1709018669693718546
https://twitter.com/SnackPrompt/status/1710102419005120758 
https://twitter.com/SnackPrompt/status/1710127350770201035 
https://twitter.com/SnackPrompt/status/1710060992032284996 
https://www.youtube.com/watch?v=boq2If6BT1s&t=884s  
https://twitter.com/saana_ai/status/1711977966660628569 


Welcome to our channel where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos.

Was there anything we missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning

## Transcript

### Segment 1 (00:00 - 05:00) [0:00]

So the CEO and co-founder of Anthropic said something in a recent interview that gave me chills. He talked about the percentage chance of AI doom, and it's quite concerning when leaders in the industry give such a high percentage for the absolute destruction we could see from rampant, uncontrollable AI. The figure he gives in this clip is pretty shocking, but I want you all to watch it.

"Do you think about a percentage chance of doom?" "I've often said that my chance that something goes really quite catastrophically wrong, on the scale of human civilization, might be somewhere between 10 and 25%, when you put together the risk of something going wrong with the model itself, with people, organizations, or nation states misusing the model, or it inducing conflict among them, or just some way in which society can't handle it. That said, what that means is that there's a 75 to 90% chance that this technology is developed and everything goes fine."

That was the first part of the clip, and of course you might be thinking this is just hearsay, but I think it's vital to listen when leaders in these AI industries say things like this. If you don't know what Anthropic is, it's the company that built Claude 2, the very large LLM that you can feed huge pieces of text into.

"In fact, I think if everything goes fine, it'll go not just fine, it'll go really great: curing cancer, extending the human lifespan, solving problems like mental illness. This all sounds utopian, but I don't think it's outside the scope of what the technology can do. So I often try to focus on the 75 to 90% chance where things go right, and I think one of the big motivators for reducing that 10 to 25% chance is how great the good part of the pie is. The only reason I spend so much time thinking about that 10 to 25% chance is that it's not going to solve itself. For the good stuff, companies like ours and the other companies have to build things, but there's a robust economic process leading to the good things happening. It's great to be part of it, it's great to be one of the ones building it and causing it to happen, but there's a certain robustness to it. When this is all over, I think I personally will feel I've done more to contribute to whatever utopia results if I'm able to focus on reducing the risk that it goes badly or doesn't happen, because that's not the thing that will happen on its own."

That shows us that this is really an issue we are facing. So what are your thoughts on this? I genuinely feel this is something that isn't getting enough attention, and even when it does, people dismiss it as sci-fi talk because they haven't read the research papers. But if the leaders of these companies are stating this, then these threats are very likely very real.

Of course, as you all know, DALL-E 3 was released just a couple of days ago, and it definitely took the world by storm, because Midjourney was previously hailed as the very best in AI image creation. Now, on our screen, you can see three different companies all competing in the same space for the same customers, and they actually seem pretty similar, in the sense that DALL-E 3 and Midjourney are on the same level while Adobe isn't as good. If we take a look at some of these examples, you'll see how these different pieces of software interpret the various text inputs and what kind of images they output. For the first example, you can already see "a portrait shot of a woman, yellow shirt, photograph". This was a thread I found on Twitter; I'll leave a link in the description. Even on the second one, "portrait of a supermodel, big glasses, Y2K aesthetics, chromatic blue, bright", it really goes to show how differently these image models interpret the same prompt. I think what we're seeing with DALL-E 3 is very promising in terms of it being up to scratch with Midjourney, and remember that where DALL-E 3 does excel, even if you think Midjourney is better, is that it can render text. So I wouldn't be surprised if the next Midjourney update includes text, though of course they're currently working on 3D. Then we have "a little girl holding a teddy bear in the middle of nowhere, photograph"; I think all of these are relatively good. Then "an arctic fox in the tundra, light teal and amber, minimalist, photograph", and then "a confused woman, sci-fi, future, blue

### Segment 2 (05:00 - 10:00) [5:00]

glow, color orange, hologram, photograph". I think all of these look absolutely stunning. Then we have this one, where I think the only entry that really gets the low-poly style right is the one on the right, which is Adobe, and I'm pretty sure that's because they sourced images that are more art-based, whereas Midjourney focuses more on realism. If you didn't know, Midjourney actually has several versions that are pretty good starting from V4, so I think Midjourney is still what most people are going to use. Then we have "a beautiful woman, future funk, psychedelic", "a shocked beautiful alien, sci-fi, future, light teal and amber, photograph", and "a mosaic of a colorful mushroom with intricate patterns, vibrant and detailed, sharp mosaic background, vector art", which I think shows how similar they are, and then this last one right here.

Of course this is still early. It's still shocking that this technology is even possible, and I'm surprised at how quickly it got this good. Being able to generate a completely new image from text is absolutely insane, but it makes you wonder: will this be the case in a couple of years for video? Will there be models that can instantly generate AI videos, or is this just part of the S-curve of explosive growth we're experiencing in AI?

Now, this next one wasn't really covered anywhere, and I will be doing a full deep dive on this absolutely insane research paper, titled "Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation". For those of you who don't understand, I'll explain briefly: in this paper, researchers from Microsoft Research and Stanford University explore the ability to recursively self-improve, essentially an AI that can improve on its own without the need for human input. I don't need to get into all the sci-fi scenarios here, but if this loop is possible, it means an AI that can self-improve could in theory keep getting better, which means GPT-4 could essentially create GPT-5, and GPT-5 could create GPT-6. The paper gives different examples of how this could potentially work; there's a lot of material and it's definitely not easy to understand, so I'll do a full video on it. One thing you need to understand is that recursive self-improvement has long been a hot topic in theory, and there are many sources of unpredictability: a self-improving AI system can evolve in ways its creators didn't anticipate, and as it continues to self-optimize, it might reach unforeseen states, leading to unexpected behaviors that could be harmful. We also have to think about loss of control: if it keeps improving without human intervention, there's a risk we might not be able to stop the self-improvement loop. And the paper's abstract actually includes an evaluation of how frequently the generated code tries to bypass a sandbox; if a self-improving AI system learns to circumvent security measures, that could inadvertently create vulnerabilities. There are tons of issues that could arise here, but it shows the direction we're moving in, and if AI continues to move at this speed, with recursive self-improvement, with image
capabilities, with all of this stuff, it's definitely very crazy. Then we have Wayve's generative AI model GAIA-1, which was built to generate realistic driving scenes to improve the safety of self-driving cars in the real world. I've got to be honest: I've seen research papers from Nvidia on this before and it looks pretty realistic, but I'm pretty sure this is an optimized system. I could be wrong; maybe they didn't use data from Nvidia and are just using their own technology, but it definitely looks really realistic. In terms of AI-generated video, I'd say this is by far some of the most realistic-looking I've seen. One thing I know about AI-generated video is that it often looks blurry or has this weird, trippy quality, but this looks really real, and the realism is, I guess you could say, in the mistakes. Any time you shoot a video, you know it's real because it isn't absolutely perfect: the lighting isn't perfect; if you shoot a video on your phone or take a picture on an iPhone, everything isn't perfect. That sense of imperfection gives it complete realism. It's like when AI generates skin and the skin is far too smooth: you instantly know it's AI-generated. So with that being said, I think AI-generated videos like this, which will let us train these models, are going to be absolutely incredible, so we're definitely going to

### Segment 3 (10:00 - 15:00) [10:00]

see a ramp-up in that sector too. Then of course there's the earth-shattering news, and we're going to do a deep dive on this as well, because I want to get into all the nooks and crannies of exactly what this paper is. You can see right here that this guy quote-tweeted a tweet from Anthropic. Twitter isn't working right now as I'm making this video, but I can still show you the tweet, in which Anthropic talked about a very big breakthrough. You can see the quote tweet here, which says: "This is earth-shattering news. The hard problem of mechanistic interpretability has been solved. The formal, cautious technical language of most people commenting on this obscures the gravity of it." Essentially, what this person is claiming is that AGI is now going to be safe and superintelligence is coming. The Twitter thread dives into it more, but as he says here, mechanistic interpretability is figuring out exactly how AI models work at an atomic level, at the level of individual neurons. The reason this is a breakthrough is that we've always said AI is like a black box: although we roughly know what these models do, we don't know exactly what's going on inside at the base level. We previously covered a video where someone managed to figure out what one of the neurons was doing, and, just try to wrap your head around this, it turned out the model was implementing addition as rotation around a circle. It's a very interesting insight into how these models think, which is vastly different from us: where we might think of 2 plus 2 as two apples plus two apples making four apples, the model is thinking about rotation around a circle. So it's definitely very interesting to now have a breakthrough where we're likely going to have a more complete understanding of these large language models, which essentially means that the black-box problem of not knowing what's going on inside can be addressed, so we can ensure these models are safe before they're widely distributed. That needs to be explored further, and it's definitely something we'll make a full video on, but like we said, it's genuinely surprising how many breakthroughs come every single week. I can't believe this is moving as fast as it is, and I do wonder how quickly it will keep moving, or whether there will be some slowdown period, because it feels like we're moving at 100 mph.

Then, if you weren't aware, Meta has been ramping up its efforts to compete with AI chatbots such as ChatGPT and other software such as Midjourney by launching its own celebrity AI personas. There's not much to say about this other than that it's interesting, because I'm not sure whether it's going to be a fad or something that gets widely adopted. I wish we could see figures on how many people are actually using the software, because I personally wouldn't use it; I'm not sure of the real application, since it's just an AI and not the actual person, though it would be interesting to see whether it actually talks like the real person. You can see a video of a user interacting with it, and I think for younger audiences it might be fun, and for older audiences who don't understand this kind of stuff it will be interesting too. I also think that at some point in the future, maybe 10 years from now, if they're able to get all the data from a given person, all the messages they've ever sent, and they trained it on that
data, how different would the conversations really be if the AI was specifically trained on all of that person's data? I assume that's what they tried to do, but it's definitely difficult, because even if I were a celebrity, I wouldn't hand over all of my personal conversations to a company. I'm pretty sure they have strict guidelines on what the AI can actually say, because you don't want it going off the rails, and if you're a celebrity, your public image definitely needs to stay protected.

Then Disney Research just revealed a new research robot, and apparently it's giving people WALL-E vibes. It uses reinforcement learning to walk, and it interacts with people. I think this robot is crazy, because I was waiting for the day Disney finally decided to do something like this. Disney already has a large array of robots like this that they use for their shows, for entertainment, for kids, and their robots are really good in terms of how they move and their human-like interaction. So if Disney is able to merge AI with this, we are definitely going to be living in the future, because they are a real step ahead in the realism of these robots. I think people really underestimate what Disney has on its hands, because they just haven't seen it in the mainstream yet; the mainstream focus has of course been on Boston Dynamics, but Disney is definitely there, and it's going to be absolutely insane, because companies like OpenAI, as you know, are pouring millions and millions of dollars into humanoid robots. So it wouldn't be a surprise if Disney comes out with their
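The "addition as rotation around a circle" finding mentioned above comes from interpretability work on small networks trained to do modular addition. The underlying mathematical trick can be sketched in a few lines; this is my own illustration of the idea, not the network's actual learned weights. Embed each integer as an angle on the unit circle, and adding numbers becomes composing rotations, because angles simply add up.

```python
import cmath
import math

N = 10  # work modulo N

def embed(k):
    """Place the integer k on the unit circle at angle 2*pi*k/N."""
    return cmath.exp(2j * math.pi * k / N)

def decode(z):
    """Read the integer (mod N) back off from the angle of z."""
    angle = cmath.phase(z) % (2 * math.pi)
    return round(angle * N / (2 * math.pi)) % N

# Adding integers mod N is the same as composing the two rotations,
# i.e. multiplying the complex embeddings: the angles add up.
assert decode(embed(7) * embed(5)) == (7 + 5) % N  # 12 mod 10 == 2
assert decode(embed(9) * embed(4)) == (9 + 4) % N  # 13 mod 10 == 3
```

Notice that the "wrap-around" of modular arithmetic is free in this representation: going past angle 2*pi just brings you back around the circle, which is why a network that learns this embedding handles 9 + 4 = 3 (mod 10) as naturally as 2 + 2 = 4.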

### Segment 4 (15:00 - 15:00) [15:00]

own AI robot roaming around the streets very soon.
