Links From Today's Video:
https://x.com/ArtificialAnlys/status/1850587843837771900/photo/1
https://x.com/tsarnick/status/1851360143721869672
https://x.com/wzeeeeen/status/1850917586067959986
https://x.com/tsarnick/status/1849181032941232624
https://x.com/tsarnick/status/1849187976284377498
https://x.com/tsarnick/status/1849228425749504103
https://x.com/omarsar0/status/1849139985712369907/photo/1
https://x.com/tsarnick/status/1851370497235447860
https://x.com/slow_developer/status/1849518723474063504
https://x.com/clonerobotics/status/1849181515022053845
https://x.com/Supermicro_SMCI/status/1849962896207532356
https://openai.com/index/searchgpt-prototype/
https://www.theinformation.com/articles/meta-develops-ai-search-engine-to-lessen-reliance-on-google-microsoft?rc=0g0zvw
https://blog.google/inside-google/message-ceo/alphabet-earnings-q3-2024/#full-stack-approach
Welcome to my channel where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
Was there anything I missed?
(For Business Enquiries) contact@theaigrid.com
#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
Table of Contents (5 segments)
Segment 1 (00:00 - 05:00)
So, one of the first pieces of news that you may have missed comes from Artificial Analysis. Recently, they ran their latest round of the video arena leaderboard, and you can see here that things have changed. China has topped the board with MiniMax's Hailuo AI at an ELO of 1,092 and a win rate of 67%. Genmo AI's Mochi 1 model, released just last week, has already taken silver and is leading among open-source video generation models. That puts Runway at number three, with an ELO of 1,051 and a win rate of 61%. Now, this is pretty crazy because this kind of leaderboard is constantly changing, and I didn't include this story just to include it: there's actually a crazier story linked to it. Recently, Artificial Analysis tweeted, "What is Red Panda?" with the eyes emoji, and said, "See Red Panda in the Artificial Analysis image arena. Link in the tweet below." You can see here that we have the arena leaderboard with several different image generation models, and there is a new model at the forefront taking the world by storm. This model is titled Red Panda, but the crazy thing is that it isn't from any currently known company. We aren't sure where this model comes from, but one tweet points out that red pandas live in Asia and that Chinese companies are crushing it with video generation, so his guess is that it's from a Chinese tech company like Baidu or Tencent. I'd argue that's a pretty accurate guess. Looking back at the first arena leaderboard, which is why I included it first, we can see that in terms of actual text-to-video models you can use today, it's currently China that is leading the way. And China has once again come out with a text-to-image model that has a substantially higher ELO than even the recent Black Forest Labs Flux 1.1 Pro, with an arena win rate of 79% versus the other models.
That's going to completely take the cake. Now, of course, I personally think this could be an OpenAI model, because recently, if you remember, Sam Altman did say that while they aren't releasing GPT-5 in December, they do have some really cool tools coming to all users. A new photorealistic text-to-image model could be one of those things. Personally, though, I do think this is from China. They've been able to do a variety of different things while the US has focused on other areas, and I think China is going to stay ridiculously competitive while the market adjusts, because while the US may have kicked off the race, China is working ridiculously hard to catch up and even overtake in some areas, as we have already seen. I do want to state that there is also some speculation that this could be Midjourney version 7; some rumors have suggested that version could come out soon, but I'm not entirely sure. Whatever model this turns out to be, I haven't personally tested it, but it will be surprising to see image generation getting even better than it already is; I genuinely didn't think you could get that much better than Flux 1.1 Pro. Now, for those of you who have an iPhone: Apple dropped iOS 18.1, which actually does include Apple Intelligence. I did say this was going to be coming in the next week, and of course, here we are. I do want to say that if you don't have an iPhone 15 or above, unfortunately, you won't be able to get the best of Apple Intelligence, but I wanted to include this in the video so you guys don't miss out on any of the features you might want to use. If you're on your iPhone right now, it'd be wise to update to the latest version of the software so you can access the latest and greatest AI features. Let me know how it is down in the comment section below.
I don't actually have an iPhone, so I'd love to know what you guys are experiencing with these features. As far as I know, some of them are gimmicky, but overall, I think this is widely useful. Now, we also had Elon Musk talk about AI in an interview. It's always refreshing to hear Elon Musk give his opinions and statements on AI. He said that AI is improving at a rate of at least 10x per year, that it will be able to do anything a human can do within a year or two, and that it will equal the intelligence of all humans combined 3
Segment 2 (05:00 - 10:00)
years after that. Now, I do want to preface this by stating that Elon Musk doesn't have the best track record when it comes to certain predictions, though other predictions of his I would say are definitely accurate. What I'm talking about is that when we look at the past, at things like the Tesla Roadster and AI being able to perform autonomous driving, there have been a few significant delays. But I do think that when we take the wider community into account, we can say that AI is certainly developing more rapidly than we thought. It was only two years ago that we had GPT-3.5, and honestly, I've been using o1 a little bit more and I'm starting to understand why this thing is a big deal. In the interview, Musk was asked: "Last March at the Abundance Summit, your prediction was that AI was increasing at a rate as fast as 100 times per year, and that by 2029 or 2030 we might see AI as capable as 8 billion humans. Are you still seeing that pace?" He answered: "Yeah, I mean, it depends. It's a difficult thing to quantify exactly, but I certainly feel comfortable saying that it's getting 10 times better per year. Let's say it's four years from now; that would mean 10,000 times better, maybe 100,000. And I think it will be able to do anything that any human can do, possibly within the next year or two. And then how much longer does it take to be able to do what all humans can do combined? I think not long, probably only, I don't know, 3 years from that point. So, like 2029-ish, 2028, something like that." Now, 2029 is actually a conservative date according to Ray Kurzweil. In a recent interview with Peter Diamandis, he said that 2029 is pretty reasonable all things considered, which means we could get this around one to two years earlier.
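Musk's arithmetic here is simple compounding: a 10x-per-year improvement rate multiplies out year over year. This tiny sketch (my own illustration, not from the interview) shows how that rate reaches the figures he quotes:

```python
# "10 times better per year" compounds multiplicatively, so the
# cumulative improvement after n years is 10**n.
yearly_factor = 10
for years in range(1, 6):
    total = yearly_factor ** years
    print(f"after {years} year(s): {total:,}x better")
# Four years gives 10,000x and five gives 100,000x, matching the
# "10,000 ... maybe 100,000" range quoted in the interview.
```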
And there were even predictions from Anthropic's CEO stating that we could get powerful AI as early as 2026, which means that, look, 2025 is just around the corner and 2026 is just after that. Imagine AGI or advanced AI arriving at that rate; that would be just absolutely incredible: from GPT-3.5 to advanced AI in the short space of three to four years. Now, Ray Kurzweil also states here that we will soon be able to back up our brains to ensure our consciousness survives in the event of accidents. I'm not sure I entirely agree with this, and of course I'm probably a complete novice compared to someone as innovative as Ray Kurzweil, but he says that if you can hold on for another 5 to 10 years, it's going to be very hard to imagine how you could die. "We should be able to back up our brain, back up our heart, and it's going to be very hard to imagine how you could die. And people don't really want to die. You ask people, well, do you want to live to 120? And people are negative about that because they think of people that they've met; they haven't met anybody who's 120, but they've met 95- and 100-year-olds, and they don't want to be like that. But we won't be like that. And people say, well, I don't want to live past 95. But when they get to be 95, if they have sound mind and body and you ask, do you want to die tomorrow? The answer is no, unless they're in horrible pain, and obviously we want to avoid that as well. We've got all these different organs; if one of them doesn't function quite correctly, and it's not your choice, you could die. But I really believe that if you can hold on for five, maybe 10 years, we can fix all of these problems that enable us to die more quickly than we want." And of course, I know I said Elon Musk's timelines are a bit strange, but Ray Kurzweil also believes that AGI will be achieved in 2029.
Just for those of you thinking about specific dates: "2029, by the end of this decade, is conservative. It could happen faster. I see no reason to increase my estimate. I said 2029 in 1999. Yeah, 30 years. And people like Geoffrey Hinton were there. Stanford actually organized a
Segment 3 (10:00 - 15:00)
conference. Several hundred people came, and the consensus, including Geoffrey Hinton, was that 80% thought it would take 100 years. He's now saying he was wrong, that it's actually more like 30 years, like what Kurzweil was saying in 1999, but no one was saying 30 years at that time. I think saying by the end of this decade is conservative; it could happen faster." Now, if you want to talk more about Elon Musk, he's actually in talks to raise funding for xAI, valuing it at $40 billion. That's pretty incredible considering that Anthropic is also nearing this level. I mean, this valuation would make some people believe that AI is currently in a bubble, but I don't think so. I think this is a testament to Elon Musk's brand. This $40 billion valuation, if it is true, means that Elon Musk has an incredible brand, an incredible track record, and people are banking on his ability to perform in the future. After all, he did help create OpenAI in the first place. You can see it says he's in talks with investors for a funding round that would value the company at around $40 billion, according to people familiar with the matter, escalating the tech industry's race to build advanced generative AI technology. The startup was last valued at around $24 billion just a few months ago, when it raised $6 billion in the spring, and it hopes to raise several billion dollars in the new funding round, one of the knowledgeable people said; the cash raised would be added on top of the $40 billion valuation. Now, of course, a valuation is just a function of how much of the company you're giving away for a certain amount of money. The crazy thing about this, of course, is that he says, "If you're training a frontier model, you need a massive amount of compute." And that is really true, because we actually just saw the most insane video giving us an inside tour of xAI's supercomputer. Right now, guys, they have the largest supercomputer in the world.
Bigger than OpenAI's, bigger than, I think, a lot of other companies'. And it was built relatively quickly. Elon Musk is going to pour even more compute into this supercomputer because he wants to have the largest compute factory in the world. Of course, like I said before, it's going to be the compute-poor versus the compute-rich, and if you're not one of the companies able to get the funding for all of these GPUs, you're going to struggle when training your own models. So, for those of you wondering why on earth Elon Musk is getting such a huge valuation, this is the reason: these chips are not cheap at all. It costs billions of dollars, not just for the chips, but for the data centers, for the liquid cooling racks needed to handle the heat coming off these GPUs, and of course for the energy needed to supply all of these data centers. It's pretty incredible, but it is, of course, a big players' game. Now, xAI and Elon Musk aren't the only ones likely to release their own models and train more frontier models. It says that Meta is likely to release Llama 4 early next year, pushing towards autonomous machine intelligence. The big question is: can Meta bring Llama's reasoning closer to the likes of GPT-4o and o1? The VP of AI at Meta told Analytics India Magazine that the team is exploring ways for Llama models to not only plan but also evaluate decisions in real time and adjust when conditions change. This iterative approach, using techniques like chain of thought, supports Meta's vision of achieving autonomous machine intelligence that can effectively combine perception, reasoning, and planning. And this is pretty crazy; I didn't know this was happening, but Meta is clearly not focused only on the open-source area.
Most people would argue that Meta is a large open-source leader, but it's clear that they're planning their own unique models, with the chain-of-thought reasoning that OpenAI's o1 has and, of course, the agentic capability to evaluate and change decisions in real time when conditions change. Now, it says here that Meta's AI chief Yann LeCun believes that autonomous machine intelligence, whose internal acronym AMI is also the French word for "friend," can truly help people in their daily lives, and this might be an alternative term for the AGI or ASI that OpenAI is so obsessed with achieving, or most likely has already achieved internally by now. Now this is a crazy statement: Llama 4 in early 2025, and Meta planning their
Segment 4 (15:00 - 20:00)
own planning model. But this is not all that Meta is planning. And speaking of Meta's AI, this is where Yann LeCun actually gives his definition of Meta's secret code name for AGI at their lab: "Systems that have an intelligence that resembles human intelligence, so that we can interact with them easily. And that means we need machines that understand the world, understand how the world works, can remember things, and can reason and plan. Okay, so those are the criteria for what we call AMI, advanced machine intelligence. This is our internal code name at Meta for what other people call AGI. I don't like that term, because human intelligence is not general at all; it's very specialized. So designating human-level intelligence as general intelligence, I think, is a mistake. So we prefer AMI, and we pronounce it 'ami,' which in French means friend. I think it's appropriate. So: systems that understand how the world works, that basically construct mental models of the world that allow them to understand it but also plan and reason. Systems that have persistent memory. Systems that can plan action sequences so as to fulfill an objective. Systems that can reason and invent new solutions to unseen problems, one-shot, without having to be trained. And systems that are controllable and safe." Meta is actually developing its own AI search engine to lessen its reliance on Google and Microsoft. As Meta Platforms tries to keep up with OpenAI in developing AI, the Facebook owner is working on a search engine that crawls the web to provide conversational answers about current events to people using its Meta AI chatbot. In doing so, Meta hopes to lower its reliance on Google Search and Microsoft Bing, which currently provide information about news, sports, and stocks to people using Meta AI.
That's according to a person who has spoken with the search engine team, and this is basically how CEO Mark Zuckerberg plans to reduce Meta's need for other major technology providers; he has been stung by Meta's dependence on another big firm, Apple, which several years ago made it harder to generate revenue through its iPhone apps. Yeah, Apple is notorious for siphoning off revenue that comes through the App Store, and they are completely insane when it comes to that stuff. I'm not going to talk about it too much, but trust me guys, it is absolutely incredible. Apparently, Meta has been working on this web crawling for around 8 months, so I'm not sure when this kind of tool is going to be released, but it will be interesting to see what kind of technology Meta has been working on. Now, Meta isn't the only company working on things for next year. You can see that, apparently, Project Astra is going to be ready for 2025. It says that they're building out experiences where AI can see and reason about the world around you, that Project Astra is a glimpse of that future, and that they're working to ship experiences like this as early as 2025. So, that means the famous Project Astra, which I'm pretty sure you guys have seen by now. I've included it in many videos, but if you want a quick reminder, it's basically Google's vision for the future of advanced intelligence. You know how currently you talk to an AI, you converse with it, and then you get your response? This is basically advanced voice mode, but for Google, and it has a camera. So you're basically on FaceTime with an AI that can identify things in your environment, can tell you different things about it, and, not only that, has the memory to remember where certain things are in your environment.
So, if you're bored, you're home alone, and you want to do some kind of, I don't know, software development or brainstorming, or maybe you've lost something around your house, you're tidying up, or you want inspiration, this is going to be a complete game-changer. I think this kind of technology is going to be absolutely incredible. My only question for Google is that OpenAI has something remarkably similar, so are you going to be able to get this technology out before OpenAI can bring theirs to market? One of the things that is imperative in the AI space is, of course, speed, so it will be interesting to see how quickly Google manages to deploy this. Interestingly enough, it was reported that more than a quarter of all new code at Google is generated by AI and then reviewed and accepted by engineers, which helps their engineers do more and move faster. So that slow iterative feedback loop, where models help improve future iterations, has, I guess, already been happening. We also got a research paper that is going to help any of you who are into prompt engineering: "A Theoretical Understanding of Chain-of-Thought: Coherent Reasoning and Error-Aware Demonstration." Basically, with standard chain of thought, which is on the left-hand side, what we currently do
Segment 5 (20:00 - 24:00)
is have a prompt that says, okay, do this step by step, use this reasoning process. What we don't do is show the models where they could make a mistake and the kinds of mistakes they are probably going to make. So you can see right here the standard chain-of-thought prompting and the kinds of mistakes models make, and then you can see that chain of thought with both correct and incorrect paths actually results in higher performance. This is where you show the AI not only what to do, but also what not to do. This enhances the capabilities of large language models because it shows them the incorrect reasoning steps, meaning they're less likely to repeat them, and you're going to get better results. So, if you're struggling with a certain task, try including something that shows the AI model: hey, don't do this, this is the wrong solution, based on these reasoning steps. You might just find that the AI is able to complete your task successfully. I know prompt engineering seems like such a fad, but trust me guys, there have been so many times that I've been working on a problem and thought, "Okay, let me try and prompt it this way," and realized, "Wow, the model isn't dumb; it just doesn't understand the question the way I've prompted it." When you finally get the output you're looking for, it's like picking a lock and figuring out the secret combination to the safe, because now you can use that in many other scenarios. Now, I've spoken about this before, but how much would you value artificial superintelligence at? SoftBank CEO Masayoshi Son says that artificial superintelligence that is 10,000 times smarter than humans will arrive by 2035. I'm not saying that's impossible, but that is a crazy date when we think about the sheer capabilities. "My definition is the same level as a human brain.
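The error-aware demonstration idea described above can be sketched as a simple prompt template: alongside a correct worked example, you include a flawed reasoning path explicitly labeled as wrong. This is my own minimal illustration of the general technique; the function name, labels, and the example problem are assumptions, not taken from the paper.

```python
def build_error_aware_prompt(question: str,
                             correct_demo: str,
                             incorrect_demo: str) -> str:
    """Assemble a prompt that shows the model both a correct and an
    explicitly labeled incorrect reasoning demonstration before
    asking the real question."""
    return (
        "Example (correct reasoning):\n"
        f"{correct_demo}\n\n"
        "Example (INCORRECT reasoning, do not reason like this):\n"
        f"{incorrect_demo}\n\n"
        "Now solve the following step by step, avoiding the mistake "
        "shown above.\n"
        f"Question: {question}\n"
    )

prompt = build_error_aware_prompt(
    question="A shirt costs $20 after a 20% discount. What was the original price?",
    correct_demo=("Q: An item costs $30 after a 25% discount.\n"
                  "A: The sale price is 75% of the original, so "
                  "original = 30 / 0.75 = $40."),
    incorrect_demo=("Q: An item costs $30 after a 25% discount.\n"
                    "A: Add the discount back: 30 + 25% of 30 = $37.50. "
                    "(Wrong: the discount was taken from the original "
                    "price, not the sale price.)"),
)
print(prompt)
```

The string returned here would then be sent to whichever model API you use; the point is simply that the incorrect path is shown and labeled, rather than only the correct one.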
That's AGI, artificial general intelligence. But people have different definitions of artificial superintelligence. How super? 10 times super, or 100 times super? My definition of ASI is 10,000 times smarter than the human brain. That's my definition of ASI, and that's coming in 2035. 10 years. 10 years from today, 10,000 times smarter. That's my definition." Now, I want to bring you guys back in time real quick. Do you remember that Sam Altman previously said he wants $7 trillion to build AI chips? Well, that figure is looking a little bit small after what the CEO of SoftBank just said. He said that artificial superintelligence will require 400 gigawatts of AI data centers, 200 million chips, and $9 trillion of capital. I'm sorry, but that is absolutely insane. I remember people basically laughing Sam Altman out the door when he announced his figure, but it's clear that, as you look forward in time, his prediction of the required investment wasn't as ridiculous as it sounded. "What investment is required to get 10,000 times smarter than mankind? How many gigawatts are we talking about? I have predicted it would take 400 gigawatts of AI data center power; that's bigger than total US electricity. And it will require 200 million chips. The cumulative capex: $9 trillion. How do you recoup that? It's too much investment in many people's view. I say it's still very reasonable capex. $9 trillion is not too big; maybe it's too small." Now, in terms of robotics this week, the startup Clone Robotics announced their musculoskeletal torso, which actually gives us an insight into what the future of really advanced humanoids is going to look like. This is something that, I mean, I'm honestly speechless, because I don't even have the words to describe what I'm currently looking at.
But it is something that shows us that if we are going to develop robots that look like this, then the Terminator scenario might not be as far off as we think. I think this kind of research is genuinely innovative, because I don't really see many other companies pursuing this kind of humanoid, or even a torso like this, in robotics. But I think it is fascinating for the ecosystem, because it shows us different perspectives on the same final output. So, if you guys enjoyed the video, I will see you in the next