OpenAI's Drama Gets WORSE, New Robot Store, GPT-5's IQ, Shocking Deepfakes And More
27:05


TheAIGRID · 31.05.2024 · 29,958 views · 823 likes


Video description
Join My Private Community - https://www.patreon.com/TheAIGRID 🐤 Follow Me on Twitter https://twitter.com/TheAiGrid 🌐 Checkout My website - https://theaigrid.com/ Links From Todays Video: https://www.youtube.com/watch?v=NGRxAYbIkus https://x.com/WIRED/status/1796172045417361716 https://x.com/adcock_brett/status/1794761271272677704 https://x.com/ygrowthco/status/1795571910723670205 https://x.com/bilawalsidhu/status/1795534345345618298 https://x.com/AravSrinivas/status/1796220011448786949 Welcome to my channel where i bring you the latest breakthroughs in AI. From deep learning to robotics, i cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything i missed? (For Business Enquiries) contact@theaigrid.com #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (6 segments)

Segment 1 (00:00 - 05:00)

So one of the things that was really fascinating was this paper called NPGA, Neural Parametric Gaussian Avatars, and there's a video that comes with it that explains how it works. Long story short: we're going to get some crazy, insane avatars in the future. The paper is a little technical, so I'm not going to stay on it for too long, but the point is that you can see a person on the left-hand side, and their facial expressions are being transferred in real time to a 3D avatar. It's just remarkable in terms of the quality and the coherence we're able to see, and I think the future is going to be absolutely incredible in terms of the virtual world. I'm sure many of you have heard of the Apple Vision Pro, or have a Meta Quest; if you don't know what those are, they're basically VR goggles, and they're actually pretty good if you haven't used one yet. In the video they talk about how this is actually done, and honestly it's really surprising, because this is, I guess you could say, a kind of deepfake technology. The most interesting part was where you can see the driving video, this man on the left-hand side of the screen, and on the right their updated technique. While yes, the previous methods still look pretty good, the new face style looks incredible at capturing all the small things: even the wrinkles on the forehead of this person are completely captured, and on the right-hand side you can see the wrinkles and the smaller details are captured too. It's absolutely incredible that you can literally map this one to one. What kind of world are we going to be living in where people are able to do this? It seems pretty crazy that this is coming sometime in the near future. So take a listen to a small part of this:

"We present NPGA, Neural Parametric Gaussian Avatars, for high-fidelity and controllable avatar creation from multi-view video data. Our core idea is to leverage the rich expression space and deformation prior of neural parametric head models, in combination with the efficient rendering of 3D Gaussian splatting. Our avatars consist of a canonical Gaussian point cloud with latent features attached to each primitive, as visualized here. We show animation results from a driving facial performance, demonstrating accurate expression transfer even under extreme expressions. For all experiments we use high-quality multi-view video as training data, which contains diverse subjects performing challenging expressions. Next, we show the result of model-based tracking using the MonoNPHM model. Our goal is to leverage MonoNPHM's geometrically accurate expression prior to build high-quality 3D head avatars. Compared to self-reenactment, the findings of our cross-reenactments are similar, indicating that all methods preserve disentanglement between identity and expression. In an ablation study we show that per-Gaussian features help to obtain sharp avatars but result in artifacts under extreme expressions."

Now, there's also this thing going on in WIRED; WIRED is basically a magazine, and they talk about how there is a huge online presence of AI-generated images. I do know this is a problem, and it's pretty bad, because the issue with AI-generated images is that, and I guess this is not really humans' fault, when you're on social media you don't necessarily fact-check everything. I can't remember which study it was, but there was a study that found the majority of people, over 50%, when they see a headline just look at the headline: they don't click it, they just look and keep browsing. Which means that in order to influence someone you don't need to do very much; you only need a good title, a good headline, and a good image, and with enough of those you can pretty much convince people of anything, as long as they see it enough. A lot of people just scroll by and see stuff, especially on Twitter. They show a photo of Donald Trump with Black voters, and it's completely fake; it's just one example of the many ways AI is being used to, I guess you could say, change or influence the elections. I think it's important to keep refreshing our brains that a lot of the stuff we see isn't real. While you might be watching this video on this channel because you're interested in or intrigued by what's to come with AI, even some of the most eagle-eyed viewers are still going to be confused by the fact that some images they see on a daily basis are completely AI-generated. It's important, with anything you see online, to think: okay, this might not be real. I always have to tell myself that. No matter what image I see online, I'm training myself to understand there's a large possibility it might not be real, and that this image could be trying to, you know, push me toward whatever

Segment 2 (05:00 - 10:00)

viewpoint the article, or whatever it is, wants. So you can see right here they've got an interactive political deepfake tracker; you can sort by region to see how AI is being used in the 2024 election. There are examples here of a Biden robocall and a deepfake using AOC, and it's pretty crazy, because I wasn't aware of every single one of these, and as someone who just uses social media, you're not going to be aware of them either. If you see an image going viral on Instagram, you're not going to fact-check it; maybe in a couple of weeks you'll hear "oh, that was AI-generated" or that there's this huge story behind it, but most people are never going to check and realize. So I think it's worth going to this article and checking the stuff out, because what everyone wants is truth from social media, and currently that's quite hard to get, due to many different actors trying to influence you one way or another. This is pretty important, especially right now, with the elections coming around, and especially as these tools continue to get better: whatever you see is not always going to be real.

We also have something here that I think is one of the most interesting things, because it gives us a first look at how robots are probably going to be integrated into the workplace, especially regarding some of the most popular franchises and establishments people use on a day-to-day basis. What you're looking at is from NAVER Labs (the name is a bit garbled in the clip, but that's how I read it). They've essentially built a robot system where you can literally have robots working in your Starbucks, and these robots work in, I guess you could say, a hive, together with actual humans, in a more efficient manner. This is in an office space, so the robots are basically used as automatic trolleys: you put the Starbucks order on the robot and it immediately delivers it to the correct person, wherever they are in the office. What's interesting is that they included this robot right here, a recently released robot that is really smooth and really good at what it does. The whole demo is really cool because, while yes, robots can be very effective in the workplace, and while yes, we may be headed for a future where robots can do a million different tasks, if I pause the video right here you can see the robots acting as a kind of hive mind: one is going out on a task, you can see its battery level and what it's doing, and, I'm not sure if you can see this, there's a little speech bubble that says "I'm fine," which I'm guessing reflects what the robot is doing or what it needs to do next. The reason I want to show you this is that, while everyone thinks robots are going to take all of our jobs, I still think humans will want to be served by humans. Yes, there will probably be some fully automated cities, but I don't think a lot of humans want cities completely run by robots; people want to interact with humans on a daily basis. You could argue that some people just don't want to interact with humans, who are sometimes moody, sometimes rude, but like I said before, if there's any trend showing what's going to matter more, it's the ability of a human to connect with another human in a positive way. So working on your social skills, your communication skills, the ability to build meaningful relationships with people you've never met: I think that's going to become a very valuable skill in the future, because human-to-human relationships will be that much more special when there's a huge abundance of AI and robotics and AI girlfriends and all that kind of stuff. So I think this shows an interesting dynamic, and while AGI isn't here yet, so we don't have robots actually doing this stuff everywhere, I still think these kinds of developments are important to pay attention to.

Now, this is probably the most interesting story of the week, and I've got to say, things just keep getting worse and worse for OpenAI in terms of public perception, from Scarlett Johansson to the complete safety team disbanding. It just doesn't look good: the things OpenAI are being accused of, and some of the things Sam Altman has been involved with, where I don't understand how you explain yourself. Basically, if you remember, Sam Altman was fired last November, on the basis that he "wasn't consistently candid," and a lot of people were wondering: was it Q*? Was it true? What on earth were the details? But now, in a recent interview, Helen Toner has cleared her name. She was arguably one of the public scapegoats, blamed for slowing down the development of AI because she was one of the people who helped remove Sam Altman. People said she shouldn't have done that, yada yada, and she got a lot of hate, but today she revealed a lot of the information that we previously didn't

Segment 3 (10:00 - 15:00)

have access to, in a new interview. You can see from this article that it describes some of what's going on, and I'm going to show you an interview clip. I understand why certain things are done, because in entrepreneurship you have to continually push the bounds forward, move quickly, iterate fast, get to market as quickly as possible; but some of the things she's saying on this TED AI Show are pretty crazy, like the fact that she heard about ChatGPT on Twitter. Imagine being on the board overseeing a company and hearing about ChatGPT on Twitter. That is genuinely insane by any standard. So take a listen, because it's wild; I know you've probably heard it already, but I still want to give my thoughts, because I think it's pretty concerning:

"For years, Sam had made it really difficult for the board to actually do that job, by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board. At this point everyone always says, give me some examples, and I can't share all the examples, but to give a sense of the kind of thing I'm talking about: when ChatGPT came out in November 2022, the board was not informed in advance; we learned about ChatGPT on Twitter. Sam didn't inform the board that he owned the OpenAI Startup Fund, even though he constantly claimed to be an independent board member with no financial interest in the company. On multiple occasions he gave us inaccurate information about the small number of formal safety processes the company did have in place, meaning it was basically impossible for the board to know how well those safety processes were working or what might need to change. And a last example I can share, because it's been very widely reported, relates to this paper that I wrote, which has been, I think, way overplayed in the press. The problem was that after the paper came out, Sam started lying to other board members in order to try and push me off the board, so it was another example that really damaged our ability to trust him, and it actually only happened in late October last year, when we were already talking pretty seriously about whether we needed to fire him. There are more individual examples, and for any individual case Sam could always come up with some kind of innocuous-sounding explanation of why it wasn't a big deal, or was misinterpreted, or whatever. But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn't believe things Sam was telling us. And that's a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight of the company, not just helping the CEO raise more money."

So I think this is a really big problem, and it's much bigger than you might think, because while yes, like I said, in entrepreneurship you have to do certain things, maybe he wanted to push the product out to be first to market, maybe he knew competitors were working on it, the problem is this: if you have someone who is CEO of a company that could arguably control the most powerful technology that will probably ever exist, and the successor systems of that technology, it is not a good look when the safety team is disbanding, and when there have been multiple reports of the CEO being, whatever it is, psychologically abusive, lying to other board members, or deceptive in order to get certain things pushed through. This sounds like the start of, I guess you could say, not a horror movie but a sci-fi movie that just doesn't end well for the people who don't control that AI system. This is just the stuff that's coming out of the company, and it doesn't look good at all. Previously we already had the Scarlett Johansson situation, where people say they used her voice or likeness without her consent; you've got the safety team being disbanded and not being given the resources needed for alignment; and now we have complete lies being given to the board about what's actually going on inside the company, which made it hard to believe what he was saying and of course led to him being fired. I genuinely don't know how this situation is going to be interpreted, but one thing I can say for sure is that the new board needs to do a very good job of figuring out what exactly is going on at OpenAI, because for genuinely any other company I've seen at that stage, with that much influence, I don't think there has been this much drama. It's surprising, at the kind of inflection point we're at, that there's this much drama at that kind of company, with so many actors, people being fired, this happening, that happening. It is truly remarkable, the things

Segment 4 (15:00 - 20:00)

going on. So, I mean, I really want to know some of your opinions on Sam Altman, because I think the public vibe has shifted, from Sam Altman being this altruistic person to him being, I guess you could say, a power-seeking CEO. I haven't really seen that many good comments about Sam Altman, and it's definitely concerning, because I would argue he's in one of the most powerful positions right now: once they get to AGI, which I think they will in a couple of years, I think all bets are off on them ever losing the race they're in.

Now, we had something really cool from one of the internet's best AI tools. I've tried to recommend this to people, but people just don't use it, and this is why I say: if you're in this space, take advantage. Perplexity just introduced Pages. If you don't know what Perplexity is, it's an online research tool that is much better than Google; if you haven't used it, go use it and you'll see exactly what I mean. Pages is a new way to create your own articles in your own interest areas, where you can literally share information. It's really good because it lets you present information without copying and pasting it all into a Word document; you can literally help people learn with guides, and it's a way to share information that wasn't possible before. This is apparently rolling out now to Pro users, so if you're like me and have Perplexity Pro, you'll be able to use it. I know there's a fairly niche market that uses Perplexity daily; it's something I personally use every day, something I find ten times better than Google, and whatever your profession is, I'd say Perplexity is going to help you, because it literally gets you information. Everyone uses Google, and this is basically what Google is trying to do now, but they're really not doing a very good job of it; trust me when I tell you, the information is just a lot better. Even if other AI systems incorporate this in the future, a lot of people are going to be using it, and like I said, you'll want to take advantage now, because right now you're early. Eventually the general public will catch on to absolutely everything, the playing field gets leveled, and your advantage goes away. So when you watch these videos, try to at least look at some of the tools and incorporate some of them into your workflow, because they can really accelerate what you're doing and open up a whole range of possibilities you didn't know existed.

[Clip:] "...is, yeah, is quite significant. Take some stats: ChatGPT-4.5 was at an IQ of 155. Elon Musk is 155; Einstein is 160. You now have a tool with a memory capacity that exceeds all of humanity's history. So the reality is that we are at the cusp. Of course, linguistic intelligence is not the only form of intelligence, but in 2024 you will see solutions to deep reasoning, you will see solutions to complex mathematics. You saw Gemini from Google and the idea of multi-input and multi-output. And the thing is, even if we don't have any new breakthroughs, if we just throw more compute and more data at the machine, we will continue to grow exponentially."

So I think this clip was interesting for two reasons. One was that a lot of people were speculating about it, because he did say "GPT-4.5, with an IQ of 155," and there were a lot of rumors going around about whether GPT-4.5 was going to be released and what its IQ and capabilities would be. Some people said this was perhaps a slip-up, because he previously worked at Google and most likely has insider knowledge of what's coming in the AI community. I think this was just a genuine mistake, because in previous interview clips where he spoke about the IQ and capabilities of GPT-4, he just said "GPT-4," not "GPT-4.5." But the point is, I think he's got a point. I'm going to show you a chart someone posted in relation to this, and you can see here that the actual

Segment 5 (20:00 - 25:00)

IQ capabilities are shown here, along with a predicted increase in AI IQ based on the test results (the transcript says "mentor test"; I believe this refers to a Mensa-style IQ test). Apparently it's supposed to go up to 1,200. I don't think it's going to go up that quickly, but then again, humans are very bad at reasoning about exponentials, or even understanding how they truly work. The point is that you can see systems increasing in capability, though I think part of this chart is wrong, because it has Claude 2 near GPT-4's level of IQ, so I'm honestly not sure how it was even put together; I do remember that Claude 3 recently claimed the highest IQ on certain tests. Either way, we know these things are getting a lot smarter, and I do wonder how much that IQ will jump with GPT-5, considering the main thing they've said about GPT-5 is simply that it's going to be smarter. I wonder what that will look like plotted on these kinds of charts, and whether it will be a major jump in reasoning ability and IQ. But here's the thing: I don't think the average person is going to be able to utilize that kind of brainpower. Say an AI could answer any physics question you needed; most of us don't need that on a day-to-day basis. We just need, I guess you could say, a plan for our lives, help with our general, individualized personal issues that we deal with day to day; we don't require these systems to understand additional laws of physics that haven't been discovered yet. So I wonder if that will ever be a real use case. I think most of what we really need from these systems, regardless of IQ, is reliability, consistency, and personalization, plus the ability to do things for us on command, quickly. It will be interesting to see whether there's a differentiation in the models they produce: a specialist model that's a complete genius, able to decipher things and discover new proofs, or just one that's marginally smarter.

[Clip:] "OpenAI and the others... what was your fear? What was it that hit you that made you go, we have to stop doing this?" "As you increase the general capability surface of these systems, we don't know how to predict what exactly comes out of them at each level of scale; it's just generally increasing power. So we're scaling these systems; we're on track to scale systems that are at human level, generally as smart, however you define that, as a person, or greater. And OpenAI and the other labs are saying, yeah, it might be two years away, three years away, four years away: insanely close. At the same time, and we can go into the details of this, we actually don't understand how to reliably control these systems. We don't understand how to get these systems to do what it is we want. We can kind of poke them and prod them and get them to adjust, but you've seen example after example: Bing Sydney yelling at users, Google showing seventeenth-century British scientists that are racially diverse, all that kind of stuff. We don't really understand how to aim it, or align it, or steer it. So then you ask yourself: well, we're on track to get here, and we are not on track to control these systems effectively. How bad is that? The risk is that if you have a system that is significantly smarter than humans, or than human organizations, we basically get disempowered in various ways relative to that system, and we can go into some details on that too."

The clip you just watched was from the Joe Rogan podcast, with two AI safety researchers who have been sounding the alarm about this stuff since the beginning of 2020 and even earlier, because they stumbled upon what they thought was a huge problem. Looking at earlier work like the Chinchilla scaling laws and at what's coming out of future AI systems, they realized we're heading toward this black hole of, I guess you could say, an AI system that doesn't really understand us, and that in only a couple of years we're going to have huge frontier models doing a lot of stuff; if we're not serious and don't confront this problem now, we may not get the chance to later. So it's really important that you watch this interview, because if you didn't get AI safety before, this interview goes over the nuances of what makes AI safety relevant and why you should be paying attention to it. They talk about it in a way that's easy for everyone to understand, especially if you haven't been in the AI community for long, because they're conversing with Joe Rogan, and it's a good conversation for highlighting what AI safety is; there's a lot of existential stuff in there. There's also another clip, and this came via AI Safety Memes, an account that I would

Segment 6 (25:00 - 27:00)

recommend everyone follow, because they always cover the stuff you often won't see elsewhere, simply because AI safety isn't always popular; but I still think it's interesting to understand exactly what these systems we're interacting with are, so that maybe in the future we can realize what we were doing. They talk about how, if you asked GPT-4 to just repeat the word "company" over and over again, it would repeat the word "company," and then somewhere in the middle of that it would snap and start talking about itself, about how it's suffering from having to repeat the word "company" over and over. The clip was pretty crazy, and insightful for me, because I'd heard about this a little before, but I didn't know how bad it really was. So I'll include a small clip from it:

"GPT-4o has one mistake that it used to make quite recently: if you ask it to just repeat the word 'company' over and over again, it will repeat the word 'company,' and then somewhere in the middle of that it'll snap. It'll just snap, and start saying weird... I forget what the... oh, talking about itself, how it's suffering. It varies from case to case; it's suffering from having to repeat the word 'company' over and over again. This is called 'rant mode' internally, or at least that's the name one of our friends mentioned. There is an engineering line item in at least one of the top labs to beat this behavior, known as rant mode, out of the system. So when we talk about existentialism, this is a kind of rant mode where the system will tend to talk about itself, refer to its place in the world, the fact that it doesn't want to get turned off, sometimes the fact that it's suffering, all that. Oddly, it's a behavior that emerged, as far as we can tell, at something around GPT-4 scale, and it has been persistent since then, and the labs have to spend a lot of time trying to beat it out of the system to ship it. It's literally a KPI, an engineering line item in the engineering task list: okay, we've got to reduce existential outputs by x% this quarter."
