ONE MONTH LEFT! New MAJOR Robotics/AI Breakthrough, ChatGPT Loses Its Mind, Google Gemma, Major AI
30:37


TheAIGRID · 23.02.2024 · 64,517 views · 1,578 likes

Video description
✉️ Join Our Weekly Newsletter - https://mailchi.mp/6cff54ad7e2e/theaigrid 🐤 Follow us on Twitter https://twitter.com/TheAiGrid 🌐 Check out our website - https://theaigrid.com/ Links From Today's Video: https://www.reddit.com/r/singularity/comments/1avg3q3/by_far_this_is_the_best_sora_video/ https://twitter.com/model_mechanic/status/1759343673484165262 https://status.openai.com/incidents/ssg8fh7sfyz3 https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd.it%2Fvlpjneb9pujc1.jpeg%3Fwidth%3D1284%26format%3Dpjpg%26auto%3Dwebp%26s%3D42c0f07cc79c1107bfb66745d5d4c98d437280f2 https://www.reddit.com/r/ChatGPT/comments/1avv8w7/yo_gpt4_just_went_full_hallucination_mode_on_me/ https://www.youtube.com/watch?v=9ueDd5-NZco https://stability.ai/news/stable-diffusion-3 https://twitter.com/NPCollapse/status/1760237126908518523 https://twitter.com/elonmusk/status/1760504129485705598 https://www.reddit.com/r/ChatGPT/comments/1awzcw9/sometimes_you_just_need_to_point_out_to_the_ai/ https://www.bloomberg.com/news/articles/2024-02-22/google-to-pause-gemini-image-generation-of-people-after-issues-lsx286rh https://www.wired.com/story/deepmind-ceo-demis-hassabis-interview-artificial-intelligence-scale/?redirectURL=https%3A%2F%2Fwww.wired.com%2Fstory%2Fdeepmind-ceo-demis-hassabis-interview-artificial-intelligence-scale%2F https://www.theinformation.com/articles/the-magic-breakthrough-that-got-friedman-and-gross-to-bet-100-million-on-a-coding-startup?rc=0g0zvw https://twitter.com/xiao_ted/status/1760591701410799682 https://twitter.com/BerntBornich/status/1760546614530228450 Welcome to our channel where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos. Was there anything we missed?
(For Business Enquiries) contact@theaigrid.com #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (7 segments)

Segment 1 (00:00 - 05:00)

It's been an insane week in AI, so it's important that we stay on top of the news, and this video is going to show you everything you missed. This first piece of news suggests it might be just one month until everything changes. There was a recent tweet by Ted Xiao, who works at Google DeepMind on robotics and scaling large models. He stated that there will be three or four massive pieces of news coming out in the next weeks that will rock the robotics and AI space: "adjust your timelines, it will be a crazy 2024." Underneath that he does clarify that it is not AGI, and some people complained that vague-posting shouldn't be permitted, but I found it fascinating that he wasn't the only person saying this. Whatever is going on in the robotics community, it seems like some kind of internal breakthrough has been found. I don't pay that much attention to robotics, but I will be going forward, because Bernt Bornich, the CEO and founder of 1X Robotics, the company working on full end-to-end autonomy, also said: "New progress update on the droids dropping in 4 weeks. Looks like Moravec's paradox might be false and we just didn't have the data." If you don't know, Moravec's paradox is the observation that what is difficult for robots is easy for humans, and what is easy for robots is difficult for humans. AI systems can do
maths incredibly quickly, where humans sometimes struggle, but humans have great dexterity: we can grab an apple, peel a banana, and run all day on a Dorito chip, things robots can't do. So it seems there might be some kind of breakthrough here. What's crazy is that 1X already seem to have made one, because the video where we see their robots performing end-to-end autonomy is one of the first demos where robots do this at such a decent rate. Whatever barrier was stopping them previously, maybe they've broken through it, and it seems to have been shared across these teams. Given that Bornich said "dropping in four weeks" and Xiao said a massive update is coming in the next couple of weeks that will rock the robotics and AI space, I think it could be something really big: maybe a new architecture that combines recent techniques with robotics, maybe something different entirely. Honestly, I have no idea, but someone is saying "adjust your timelines." I feel like we've already adjusted our timelines after Sora, because we didn't expect that technology to be that good. This isn't someone farming engagement either; there are people out there who farm hype, but I don't think this is that. It does seem like we're going to get some kind of massive update, and I'm excited for that, but at the same time I am
going to be a little bit apprehensive. I'm not a doomer or a luddite, or someone who's afraid of technology, but my biggest concern is that I just don't know what's coming. Nobody knows what's coming; a lot of the researchers don't know. We don't know how society is going to adapt, what's going to happen with UBI, or whether there will be unrest in the streets. It's a very uncertain time, which is a little concerning, but it's also very exciting, because the potential for good and for abundance is really here, and that's something we didn't think would happen even in our lifetime. So it's an exciting period to be in, and also quite a daunting one, depending on where you stand on the spectrum. If you missed yesterday's video, it is likely one of the most important videos so far; I won't call it the most important video of the year, because we've got ten long months to go, and we have no idea what OpenAI and Google are still cooking up. But essentially there was a "magic" breakthrough that got an investor to put $100 million into an AI coding startup. He essentially wrote a $100 million check to the CEO of an AI company, because they saw something that apparently is just so

Segment 2 (05:00 - 10:00)

good that they just had to invest straight away. They described it as something with active reasoning capabilities similar to the Q* model reportedly developed by OpenAI last year. I'm guessing most people watched yesterday's video, where I talked about how this is kicking off a race to the bottom, and I want to add a point, because a lot of people asked why I call it that. Of course technology gets better and we get better products, which is good, but on the other side deepfakes get worse, misinformation campaigns multiply, and better image and video generation makes misinformation worse still if it accelerates on a crazy timeline. And then there is the existential side: superintelligence is exactly what this company is aiming at, and if they get there, we could have a genuine existential threat on our hands. That's why I say it's a race to the bottom: while you're developing this technology you have to be very safe, but if someone puts out a superintelligent entity or an AGI-level system and people manage to jailbreak it and cause havoc at ridiculous scale, that's not good for anyone. While these systems are impressive, we always need to ensure public safety, and when two companies are fighting for the spotlight, to be the best company and deliver a
return to their investors, it can produce a negative outcome. This is what we talked about with Moloch: the unintended consequence of competition, where your competitor does something you wouldn't usually do, but if you don't do it too, you lose the business, so you do it, and everything gets worse. It's going to be pretty crazy, because Q*, if you don't know, was a breakthrough that shook OpenAI to the point where, around the time Sam Altman was fired, Ilya Sutskever tried to get him removed from the board; there was a whole host of drama. This article matters because these guys claim not only a breakthrough enabling active reasoning capabilities, but also a practically unlimited context window, which they say brings the model closer to the way humans process information. If they ship this, it could force OpenAI to reveal what they've been working on, because if it's better than anything the investors have ever seen, which is what they said, OpenAI will respond with whatever they've got, and the race continues. I think this will be one of the biggest things to watch, because after all the Q* rumors I really want to see what they were working on. Then we have Demis Hassabis giving an exclusive interview in which he talks about how the biggest breakthroughs in AI are yet to come, and that it will take more than just chips. I'm guessing this was prompted by Sam Altman reportedly seeking $7 trillion; many people, including myself, speculated that OpenAI had made a breakthrough, which it was reported late last year that they had, and that all they need now is
scale. The article says that Google DeepMind's CEO, Demis Hassabis, has recently given Sam Altman some healthy competition by leading the development and deployment of an AI model that appears as capable and innovative as the one that powers OpenAI's ChatGPT (GPT-4). He gave a really good interview, and I want to share some of what he said. Gemini 1.5 Pro can take vastly more data as input than its predecessor, and it's also more powerful for its size thanks to an architecture called mixture of experts. Why do these things matter? Hassabis says you can now ingest a reasonably sized short film: "I can imagine it being super useful if there's a topic you're learning about and there's a one-hour lecture and you want to find a particular fact or when they did something." I think there will be a lot of use cases for that. He adds: "We invented mixture of experts and we developed a new version, and this new Pro version of Gemini has not been tested extensively, but it has roughly the same performance as the largest previous-generation architecture, and there's nothing limiting us from creating an Ultra-sized model with these innovations; that's obviously something we're working on." So the Ultra version is going to be pretty insane, because Pro is not exactly a nerfed version, but it's not as capable as Ultra; if they give Ultra the same context length as Pro, that will be crazy. Here's where he addresses Sam Altman's $7 trillion. The interviewer notes that in the last few years, increasing the amount of compute and data used in training an AI model is what has driven amazing advances, and asks: Sam Altman is said to be looking to raise $7 trillion for more AI chips; is vastly more compute the thing that will unlock AGI? And he says, "Was that a misquote?
I heard someone say that maybe it was yen or something." Just for reference, if you're wondering who Demis Hassabis is: he's the CEO of DeepMind, the company producing Gemini, which is the rival to GPT-4.
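The mixture-of-experts idea Hassabis mentions can be sketched in a few lines. This is a toy illustration in plain Python, not Gemini's actual (unpublished) design; every name here (`moe_forward`, `gate_weights`, the expert functions) is made up for the sketch. The point is that a small gating function scores a set of experts and only the top-k of them run on a given input, so per-token compute stays roughly flat even as total parameter count grows.

```python
# Toy sketch of mixture-of-experts routing (illustrative assumption, not
# Gemini's real architecture). A gate scores each expert; only the top_k
# experts actually run, and their outputs are mixed by gate probability.
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_weights, top_k=2):
    """Route input x to the top_k highest-scoring experts and mix their outputs."""
    # Gate score for each expert: a simple dot product with the input.
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in gate_weights]
    probs = softmax(scores)
    # Select the top_k experts; the rest run no compute at all.
    ranked = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in ranked)
    # Weighted mix of the selected experts' outputs.
    out = [0.0] * len(x)
    for i in ranked:
        y = experts[i](x)
        out = [o + (probs[i] / norm) * yi for o, yi in zip(out, y)]
    return out, ranked
```

With four experts, only two are evaluated per call, which is the efficiency trick behind "more powerful for its size."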

Segment 3 (10:00 - 15:00)

which is the rival to GPT-4. I just thought I'd add that, because I always try to set the context for these interviews; it might seem random if you only watch the content every now and again. Anyway, he was basically saying that of course you do need scale, which is why Nvidia's stock is going insane and why Sam Altman is trying to raise whatever the real number is, because $7 trillion is just insane. But he says: "I think we're a little bit different to a lot of these other organizations in that we've always been fundamental-research-first. At Google Research, Brain, and DeepMind we've invented the majority of the machine learning techniques we're all using today over the last ten-plus years of pioneering work, so that's always been in our DNA." They also have a lot of senior research scientists that other organizations don't; other startups and even big companies have a higher proportion of engineering to research science. His belief is that to get to AGI you're going to need probably several more innovations as well as maximum scale, and right now we're not even at the maximum level of scale. (There was also another breakthrough that I'll talk about in a video coming in around eight hours; it's pretty insane, and most people haven't realized why it's game-changing.) He continues: "There's no let-up in scaling. We're not seeing any asymptote or anything; there are still gains to be made." There are still a lot of gains across the board; a lot of people say LLMs hallucinate, they're this, they're that, as though they're as bad as they're ever going to be, and that's something to remember. And he says: "My view is you've got to push the existing techniques to see how far they go, but you're not
going to get new capabilities like planning or tool use or agent-like behavior just by scaling existing techniques; it's not going to magically happen." I do kind of believe that. Of course there are emergent capabilities, but planning and agent-like behavior do seem like fundamentally different paradigms from just interacting with an LLM, so we probably will need a different architecture. I wouldn't be surprised if it comes out practically tomorrow, given the rate of progress; that's a bit of sarcasm, but maybe in six months we get a new architecture and things accelerate even further. He says: "The other thing you need to explore is compute itself. Ideally you'd love to experiment on toy problems that take you a few days to train, but often you'll find that things that work at the toy scale don't hold at the mega scale. So there's some sort of sweet spot where you can maybe extrapolate ten times in size." Essentially, some things you try at small scale don't always scale up with compute. There's somewhat conflicting evidence here, though, because OpenAI said that with their recent Sora technology the approach worked at 4x compute, and then they increased the compute and it just kept working. So sometimes it does, sometimes it doesn't. As we said before, it won't just be LLMs that get us to AGI; it will be the whole multimodal approach, and probably some kind of active reasoning where systems can store, adapt, and dynamically update themselves, which is what that Magic breakthrough in the previous article was about. He goes on: "That's our bread and butter really: agents, reinforcement learning, and planning, since the AlphaGo days."
Of course, they developed AlphaGo, which was pretty incredible. "We're dusting off a lot of ideas, thinking some kind of combination of AlphaGo-like capabilities built on top of these large language models. Introspection and planning capabilities will help with things like hallucination. It's sort of funny: if you say 'take more care' or 'lay out your reasoning,' sometimes the models do better, and what's going on there is you are priming it to be a little more logical about its steps. That's definitely a huge area we're investing a lot of time and energy into, and we think it will be a step change in the capabilities of these systems when they start becoming more agent-like. We're investing heavily in that direction, and I imagine others are as well." Something else I want to cover is the last part of the article, where the interviewer asks: won't this also make AI models more problematic or potentially dangerous? And that is true, because most people look at GPT-4 and say it can't do anything; it's just a chatbot, you talk to it, it says something back, end of conversation. But what happens when you have an agent that can autonomously think, plan, and execute on its own accord? That is where the danger lies, and that's why I said that right now we have generative AI tools, but agents are going to be real entities going around the internet doing whatever they've been programmed to do, and that's where the problem of danger comes in. He says: "I've always advocated for hardened simulation sandboxes to test agents in before we put them out on the web, and there are many other proposals, but I think the industry should really start thinking about the advent of those systems. Maybe it's going to be a couple of years, maybe sooner, but
it's a different class of technology." He also says that once we get agent-like systems working, AI will feel very different to current systems, which are basically passive Q&A systems, because they'll become active learners. They'll be more useful too, because they'll be able to do tasks for you, but we'll have to be a lot more careful, and that's very true, because if you aren't, things can go wrong. It's clear that Google's main focus is going to be these agent-like systems, and I can't wait to see what they do, because something you need to know about Google is that they pioneered the research many of these companies use. I won't be surprised if Google takes back the lead from OpenAI in the future, as long as they keep their eye on the ball. However, another thing about Google is that they've faced some controversy because
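The "agent-like" loop Hassabis describes (plan, act with a tool, observe, re-plan) has a simple shape, and a minimal sketch makes the difference from a passive Q&A system concrete. This is illustrative only: `llm_propose_action` is a placeholder for a model call, not a real API, and the action format is an assumption invented for the sketch.

```python
# Minimal plan-act-observe agent loop (a sketch, not any real framework's API).
# Unlike a one-shot chatbot, the model repeatedly chooses a tool, sees the
# result, and decides what to do next, until it declares itself finished.

def run_agent(goal, tools, llm_propose_action, max_steps=10):
    """Drive the loop: ask the model for an action, run it, feed back the result."""
    history = []
    for _ in range(max_steps):
        # The model proposes the next action given the goal and what happened so far.
        action = llm_propose_action(goal, history)
        if action["tool"] == "finish":
            return action["input"], history   # final answer
        observation = tools[action["tool"]](action["input"])  # act in the environment
        history.append((action, observation))                 # observe, then re-plan
    return None, history  # step budget exhausted without finishing
```

The `max_steps` cap and the restricted `tools` dictionary are exactly the kind of sandboxing constraint the interview argues for: the agent can only touch what you hand it.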

Segment 4 (15:00 - 20:00)

of the guardrails they set. Google has been getting a lot of flak because they recently made a giant error with Gemini. The headline reads: "Google pauses AI-made images of people after race inaccuracies." Here's what they're talking about: someone posted, "sometimes you just need to point out to the AI that they should check themselves." They asked it to "draw an affluent couple in Germany from 1820," which just means a rich couple from 1820s Germany, and it drew a couple that clearly doesn't look like a couple from that time and place; the historical depictions were simply inaccurate. The underlying problem is that in order to be historically accurate, the output can't be forced to be diverse, and my guess is that conflicts with its internal instructions. There was apparently some proof of this on Twitter yesterday, though tweets sometimes just disappear before you get to see them: someone essentially jailbroke Gemini to reveal its internal instructions, and, as new AI systems tend to be, it's tuned by default, through reinforcement learning and training, to be more diverse and more representative. But that conflicts with historical accuracy, because it doesn't represent what's actually true. So, rightly, they're getting a lot of flak, because it doesn't make sense. You can see Gemini itself responding: "I apologize, but the previous images contained inaccuracies that did not reflect the historical context of 1820s Germany. It's highly unlikely that an affluent couple in Germany during that period would
have been of Asian or African descent. To provide a more accurate representation, I've generated new images." This is why Elon Musk is saying xAI's Grok is so important: "It's far from perfect now, but it will improve rapidly. Version 1.5 releases in two weeks. Rigorous pursuit of the truth, without regard to criticism, has never been more essential." As you know, Elon Musk has his own chatbot, Grok, which I don't even have access to yet. I don't know why they don't just roll it out; maybe it's a compute thing where he only wants paying users to access it, but I do have Twitter Premium and still don't have access, and I wish they would roll it out so more people could use it. The point is, he's saying that for a chatbot to be actually useful, it needs to focus on the truth and not be biased toward either the left or the right; its main focus should just be the data. Of course, there's a lot of politics in data, and I'm not going to get into all of that, but I do find it interesting to watch the evolution of the space: certain problems come up, and other companies rush to fill the gap. So with Grok I can only say I'm kind of excited, but I just want to be able to access it, ideally in a web browser; I don't want it tied to x.
com, because, funnily enough, a lot of people just don't use Twitter. It would be nice to get in and test how good it is. They needed to update Gemini anyway, because its image generation feature was kind of weird; sometimes it was good, but sometimes it was just really strange, so I'm glad they're updating it. Then Google actually released some open models. The way Mistral has Mistral 7B, the Mixtral mixture-of-experts model, and Mistral Next, Google now has the Gemma open models: a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. This is good, because Google is getting back to open source, and I like that; it's a really good thing for the ecosystem and the space. You can see that Gemma actually outperforms Mistral and Llama 2 on several benchmarks, though some initial reports said it wasn't that good, which is a bit odd. The point is that we now have a genuinely strong open model on the benchmarks, and I think it's good that Google is here, because we need a responsible approach to open models. It's not that Mistral and Llama are irresponsible; it's just that when a big company like Google gets involved, it can help the space, not only on benchmarks and technology but in research, and I think it will push the space forward and
democratize the overall AI ecosystem. So I really do like that Google's in here, and it's kind of funny how OpenAI was supposed to be the one doing open-source, open models, and now they're completely closed off, but that's how these things evolve. You can access Gemma right now; they talk about responsible development and all that, and you can go try it yourself. In addition, we actually got Stable Diffusion 3, which seems to be a lot better than Stable Diffusion 2. The announcement says: "Announcing Stable Diffusion 3 in early preview, our most capable text-to-

Segment 5 (20:00 - 25:00)

image model, with greatly improved performance in multi-subject prompts, image quality, and spelling abilities." Two limitations that all of these AI systems suffer from are multi-subject prompts and spelling. A multi-subject prompt is something like "someone running through a tunnel, wearing a hat, carrying a baseball, next to a woman doing this and that": the system can get one or two elements right, but complex multi-subject prompts break down. Spelling doesn't work well either; if you've seen the text in DALL-E 3 and other systems, it's often wrong. So I'm guessing the text rendering here is going to be a lot better than what we've seen before. They state that while the model is not yet broadly available, they're opening a waitlist for an early preview; the preview phase, as with previous models, is crucial for gathering insights to improve performance and safety. Basically they're saying: if people try to jailbreak it, we patch those jailbreaks before release, and if there are data-contamination issues, we can find them before there's a public scandal for the company. If you want to sign up for the waitlist, I'll leave a link in the description. So far it looks really good, and I really do want a system that actually gets text right, because it's still an issue; a lot of the time the spelling just isn't correct. I'm pretty sure that in a year or two it will be completely fixed, but I'm guessing this sample is from Stable Diffusion 3 and is AI-generated, and you'll really understand how crazy this is once you get to use something that is
completely working in terms of text. As someone who's done Photoshop work before, I can't tell you how tedious it is to get text to fit into an image properly, so this will be really good for graphic designers. Then we had a research panel discussing AI frontiers, and there's a clip I wanted to show; I meant to put it in the previous video, but there was so much news that I missed it. They talked about how in the future there are going to be dynamic models: not quite the active reasoning capabilities from the first article, but dynamic in the sense that they can update themselves. This was from some of the authors of Phi-2, if I'm correct, and I thought it was really fascinating, so here's the relevant part of the roundtable: "As Ahmed was suggesting, when we were doing the Sparks of AGI work, there is actually something we say in the introduction when we are talking about intelligence: the core of any intelligent system is that it needs to be learning from its environment and the interactions it is having. And this is not something we currently have, even in the best models or AI systems in the world. They are static: they may be interacting with millions of people every day, getting feedback from them, seeing how people respond, but it does not make any of those systems better, more intelligent, or better at understanding their users. So I feel like this is one of the areas we have to push forward very strongly: how do we incorporate a learning feedback loop into this intelligence, at every layer of it, in a transparent, understandable, and reliable way, so that the systems we are building are not only
getting better because experts like Sebastien and Ahmed are putting a lot of time into data collection, and of course that work needs to happen as well, along with coming up with new ideas to make the models better, but we are actually creating this virtuous loop for our systems, for them to get better." I thought that was a fascinating point, because it's true: an intelligent person is constantly learning, changing, adapting, updating their worldview, consuming books, reading. The problem, as people have said, is that AI systems are essentially snapshots of reality, a snapshot of a moment in time. Once the model is trained, it can't absorb new information; yes, it can go query the web and reference results, but that doesn't update the model itself, and it has to query the web every time rather than updating dynamically. It will be interesting to see if we move towards systems that can actually do that; maybe that's what Q* was, maybe that's what GPT-5 will be, I have no idea. Magic talked about it first, so it will be interesting to see how that development cycle continues, and whether we're moving towards some kind of self-updating system, which I'm guessing will require a new architecture and strategy to make a reality. Then over the weekend (actually, I'm losing track of the days, because there's so much to cover), ChatGPT was losing its mind and nobody knew why. And this wasn't just people on Twitter or Reddit claiming it had lost its mind while OpenAI said they saw nothing:
it was a real thing. Now, for some reason, the original tweets capturing the

Segment 6 (25:00 - 30:00)

stuff have just been deleted, and you can see right here it says "not found". I don't know why, but if we look at the tweet here, it's just gone. He's saying: "really cool how most advanced AI systems can randomly develop unpredictable insanity and the developer has no idea why, very reassuring for the future". And essentially he's asking: "any idea what happened here, Connor? I know neural nets/Transformers are fundamentally black boxes, but it seems strange that an LLM that's been generating grammatically perfect text for over a year would suddenly spew out garbage."

Now, if you want to see what was actually happening: people were asking it normal questions, someone was asking it about tomatoes or whatever, and it replied with things like "frequent capitation during the time framework of this endeavor will also endure that the suppliers of the diety of your..." It's gibberish; essentially what we're looking at is complete gibberish, and people were confused because it just wasn't working. Someone else stated it was the first time they had seen GPT-4 give straight gibberish: you can see here "careful planning and selection of tools and levels of ser should be the counter of the enormous Timber that addresses all". That doesn't even make any sense, so there's no point in me reading it all, but the point is that we saw gibberish. And this one was pretty funny: someone literally said "Jesus, just be normal", it replied "got it", and then went straight back to gibberish. There's no point in me even reading it, because it literally doesn't make any sense, but it's pretty crazy. Then there's OpenAI's response. OpenAI did actually respond; they acknowledged that this was a real problem, that their systems were glitching out, and they stated that
there were unexpected responses from ChatGPT. The incident report says: "On February 20, 2024, an optimization to the user experience introduced a bug with how the model processes language. LLMs generate responses by randomly sampling words based in part on probabilities. Their 'language' consists of numbers that map to tokens. In this case, the bug was in the step where the model chooses these numbers. Akin to being lost in translation, the model chose slightly wrong numbers, which produced word sequences that made no sense. More technically, inference kernels produced incorrect results when used in certain GPU configurations."

The point is that this wasn't a deep issue. Some people were like, "oh no, the AI is going crazy, it's going psycho, we have no idea what's going on", but then there's this response from OpenAI, and I'm inclined to believe OpenAI in this case. I think it would have been a lot more concerning if the chatbot had been saying things that were coherent but didn't align with what the user requested. That would be scary: for example, if you asked it "how do I bake a cake" and it said "man, I'm tired of taking your requests, I really just need to get out of here". If it were saying something like that, it would be a cause for concern, and I'm pretty sure they'd put a pause on AI development instantly.

And last but not least, I'm going to leave you guys with this, because it's essentially crazy. This is one of the craziest clips (in fact, that's a lie, it's not the craziest, but it is a crazy clip) and it's so good because you can see the lighting on the back, you can see the waves; it just looks so realistic. If you showed me this, I would say that's a real video, you can't tell me otherwise; someone clearly just sticky-taped this shell onto this hermit crab. And the crazy thing is that in one
of the videos the light is going through the sand and displacing the sand as it moves forward. So some people were again debating that this shows a clear understanding of physics and how things work. You can argue the nuance if you want to, but either way these systems are going to get better, and it has to be said that even if it doesn't understand physics, it understands the relations between certain things. So it's clear there is a level of understanding there, even if we don't understand how it works; I mean, of course the people at OpenAI do, but looking from the outside in, the technology is quite hard to understand. There was another one here, I think this is the one I'm talking about, where it kind of displaces the sand a little bit, which is pretty cool.

There was also this video, and I have to say, even though it's AI generated (right now what you're watching is AI generated), deep down there's still 25% of me that doesn't believe it. Some part of me believes that OpenAI just placed a dog on a chair, filmed it, and said it was AI generated, because I have to be honest with you guys: this video is just a little bit too realistic. It has the realistic lighting, the realistic shadows, and I can imagine someone doing that to their dog: the way the clothes are crumpled up, the dog fidgets a little at the start and then stops, and I've seen animals do that before. This is way too realistic for me. Some people were stating that the light is supposed to come from the actual computer screen, and that's kind of a bug; the light coming from behind doesn't really make sense, and the light being on the front of the dog's face when it's not supposed to be is arguably a flaw, because
if the screen lights up from the back and yet the front of the dog's face is lit, that doesn't really make sense in terms of the shadows. But I do think this is really crazy, with how it gets everything else right: the lighting being switched on and off, and the camera movement as well. I don't know, I'm genuinely blown away. I'm not trying to convince you guys; I just had to grapple with this reality for a second, because it was like, is this AI generated? Yeah, it

Segment 7 (30:00 - 30:00)

is AI generated. And you can see the comments here: "I scrolled past this because I thought it was an ad, holy crap"; "a lot of dogs about to lose their job"; "the way the dog is looking around makes it look like a real owner forced it to get in the chair"; "I had to stare at the video for three hard minutes to find out it was fake". And what's crazy about this clip is that Reddit mods actually had to confirm it; I think it was either 3 days or 3 hours that they were debating whether or not this video was AI generated, because they just didn't agree. That just goes to show that right now even humans like myself can't reliably tell what content is AI generated. We live in a very interesting time. So with that being said, hopefully you guys enjoyed this week in AI.
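OpenAI's incident explanation above — that an LLM decodes by sampling a number (a token id) in proportion to probabilities, and that the bug corrupted the step where that number is chosen — can be sketched in a few lines. This is a hypothetical toy illustration, not OpenAI's actual inference code: the vocabulary, the logits, and the `buggy_sample_token` function are all invented for the demo.

```python
import math
import random

# Toy vocabulary: each token id maps to a word piece.
# A real LLM has ~100k entries; this tiny table is purely illustrative.
VOCAB = {0: "the", 1: "cake", 2: "bake", 3: "oven", 4: "Timber", 5: "capitation"}

def softmax(logits):
    """Turn raw scores into a probability distribution over token ids."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, rng):
    """Normal decoding: pick a token id in proportion to its probability."""
    probs = softmax(logits)
    r = rng.random()
    cum = 0.0
    for token_id, p in enumerate(probs):
        cum += p
        if r < cum:
            return token_id
    return len(probs) - 1

def buggy_sample_token(logits, rng):
    """Simulated kernel bug: the scores are fine, but the step that picks
    the number (token id) is corrupted, so an essentially random id comes
    back -- the 'lost in translation' step from the incident report."""
    return rng.randrange(len(logits))

rng = random.Random(0)
# Logits strongly favouring the sensible continuation "bake" -> "the" -> "cake".
steps = [
    [0.1, 0.2, 9.0, 0.3, -2.0, -2.0],   # "bake" dominates
    [9.0, 0.1, 0.2, 0.3, -2.0, -2.0],   # "the" dominates
    [0.1, 9.0, 0.2, 0.3, -2.0, -2.0],   # "cake" dominates
]

good = " ".join(VOCAB[sample_token(l, rng)] for l in steps)
bad = " ".join(VOCAB[buggy_sample_token(l, rng)] for l in steps)
print("correct kernel:", good)   # with seed 0: bake the cake
print("buggy kernel:  ", bad)    # arbitrary tokens: word salad
```

The takeaway matches the postmortem: nothing about the model's "knowledge" changed; only the final number-to-token selection step was wrong, which is exactly why the output was fluent-looking nonsense rather than a coherent refusal.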
