Elon Musk Reveals the Future of AI - XAI Full Reveal (Supercut)

TheAIGRID · 11.02.2026 · 63,078 views · 1,117 likes


Video description
🎓 Learn AI In 10 Minutes A Day - https://www.skool.com/theaigridacademy
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Want to learn even more AI: https://www.youtube.com/@TheAIGRIDAcademy

Links From Today's Video: https://x.com/xai/status/2021667200885829667

Chapters:
- 0:00 Elon Masterplan
- 02:17 GPU Breakthrough
- 04:29 New Structure
- 06:20 AI Productivity
- 08:21 Coding Takeoff
- 12:14 Imagine Dominance
- 16:42 Digital Companies
- 23:31 Training At Scale
- 29:06 Supercomputer Build
- 33:13 X App Surge
- 36:05 XChat Upgrade
- 37:23 X Money Launch
- 39:00 Beyond Earth
- 41:16 Orbital Data
- 41:35 Moon Factories

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed?

(For Business Enquiries) contact@theaigrid.com

Music Used:
- LEMMiNO - Cipher https://www.youtube.com/watch?v=b0q5PR1xpA0 (CC BY-SA 4.0)
- LEMMiNO - Encounters https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (15 segments)

Elon Masterplan

So Elon Musk just revealed xAI's master plan, and it goes way beyond chatbots. We're talking orbital data centers, moon factories, and compute infrastructure that could literally harness a fraction of the sun's energy. xAI is only two and a half years old, but they're already topping leaderboards and moving faster than anyone in AI. In this video, you'll see the entire strategy: what they're building now, what's coming in the months ahead, and the truly sci-fi stuff they're planning for space. Let's dive in. — We're going to start off by recapping the incredible progress that the xAI team has made in just two and a half years. It's really remarkable, in pursuit of our goal of understanding the universe. Going over our accomplishments since inception, it's important to bear in mind that xAI is only two and a half years old, basically a toddler, and we've nonetheless achieved an incredible amount in a very short period of time. Our competitors are five, ten, in some cases twenty years old. They have much larger teams. They started off with far more resources, and yet we have achieved number one in many arenas in just a few years. We've achieved number one in voice, and in image and video generation; I think at this point, based on the last numbers I saw, we are actually generating more images and video than all of our competitors combined. We are winning in terms of forecasting, which is one of the key metrics of intelligence: the Grok 4.20 forecasting model beat all the other AIs in forecasting. We've topped many leaderboards. We've now got a great app, with Imagine and with the core Grok. We've made radical improvements to the X app.
And we've launched Grokipedia, which is on its way to far exceeding Wikipedia, and will ultimately be orders of magnitude more comprehensive and more accurate, with more information, as well as video and image data that simply isn't on Wikipedia. It's

GPU Breakthrough

intended ultimately to be the Encyclopedia Galactica, a distillation of all knowledge. And we were the first to achieve a 100,000 H100 GPU training cluster, and we're now about to achieve the first 1 million H100 GPU equivalents in training. So really an incredible amount of work in a very short period of time. It's important to consider, for the competitiveness of any technology company, that what matters is not your position at any point in time, but your velocity and acceleration. If you're moving faster than anyone else in a given technology arena, you will be the leader, and xAI is moving faster than any other company; no one's even close. So let's go to our team. As we grow as a company, a natural thing that happens is you reorganize the company as it scales up. When you first have a startup, you might have just a few dozen people and they all just chat amongst themselves. As you grow to several hundred people, you have to add more structure, just like an organism. We all grew from a single cell, then to a blob of cells, then you get organ differentiation, limbs; you grow a tail, the tail disappears, and then you become a baby. You go through these stages, and because we've reached a certain scale, we're reorganizing the company to be more effective at this scale. Now, naturally, when this happens, there are some people who are better suited for the early stages of a company and less suited for the later stages. For the people that have departed, I'd just like to say thank you for your contribution, thank you for getting us this far, and we wish you very well in your future endeavors. So now, going on

New Structure

to the new structure of the company. The company is organized in four main application areas. There's Grok main and voice, which is really the main Grok model; that's why it's called Grok main. Then there's a coding-specific model. There's an image and video model, which is Imagine. And then there's Macrohard, which is intended to do full digital emulation of entire companies. And then we've got the infrastructure layers. So I'd like to invite members of the teams to come up and talk about each of their areas. — Hey, thanks Elon. So Grok main and voice are going to be merged into one team. On voice, one anecdote: in September 2024, OpenAI had this product, advanced voice mode, that you could talk to, and we had nothing: no model and, of course, no product. We started much after that, and in the span of six months we developed the model in-house from scratch, with a bunch of people who knew audio, and had a product that was surpassing OpenAI's within six months. Fast forward six more months, and now we have Grok in more than 2 million Teslas, we have a Grok voice agent API, and you can do all kinds of amazing things. In the span of one year we went from nothing to being leaders. That kind of thing is only possible in a place like xAI, where you have small, committed, mission-focused teams and lots of compute. And we really want to keep pushing. Same story on the chat models. We've always been at the forefront of reasoning, starting from Grok 1.5, Grok 2, Grok 3. And we want to move to a world where it's no longer just about question answering. We want to build an everything app. You should be able to come to it and get done whatever you want: ask a legal question, make a slide deck, solve a puzzle, stuff like that.

AI Productivity

— Yeah. So on the product side, I really think we're going to see a huge transformation happening in a very short period of time. We're going to see the magnitude of work that all knowledge workers are able to produce increase tenfold in the next few months. The models we are building are incredibly amazing, we have a lot on the way, and we're really excited to share that with you all. On the product side, the goal is to build that portal that allows you to accomplish all of your work. How do we amplify everyone to achieve much more than they could accomplish alone? We're building that out, and it's going to be an incredibly easy-to-use experience that just works seamlessly. — That being said, we are hiring, and we're looking for intelligent and smart people. This is not an easy place to work. It's a grind, but we have interstellar ambitions, so it's not going to be easy. — I will say, having come to xAI, it has been the opportunity of a lifetime to work among really smart and really passionate people. The vibes here are amazing, and it's truly an environment where, if you're a smart person who wants to get things done, you can get them done. There isn't organizational overhead getting in your way, or having to write docs and all that kind of stuff. You can just do things here, and that's amazing, and I invite more people to come here and do awesome things. — Yeah. So with Grok main, the main foundation model, the intent is that it's genuinely useful in a wide range of areas. Whether you're doing engineering, law, or medicine, anything, it is useful to you in your job. That's essential to understanding the universe: making things as useful as possible, so that when Grok gives

Coding Takeoff

an answer, you can count on it. — Hey everybody, I'm Mro. So the world changed a lot recently in terms of coding. I was always complaining about the coding models; people were trying to convince me to use one, and I wasn't really convinced. But as of recently, the models actually produce decent-quality code. You still need to review and give feedback, but it's easy to see how they can accelerate you quite a lot. And it's not only about coding: they understand your intention much better than before. Now when I describe a problem, I only have to phrase it like I would to another colleague engineer who has already seen the codebase. That's a huge change. Before, you kind of needed to handhold a toddler to make a change. And they don't only write your code; they can also debug it. We now have hours of Grok Code running continuously to make sure that a more complex change to the training system actually works in production. So it's easy for us to see that this is not only about accelerating ourselves writing code and making us 10x more productive: we're really on the path to recursive self-improvement, where the current generation of Grok Code is training the next generation of Grok Code, and we see an exponential takeoff here; this path will continue. So we are doubling down on coding and making it one of the highest-priority efforts in the company. If you're out there and you're excited about coding, and you're either very good at model training or you're a really good low-level software engineer interested in systems design, this is the place to work. We have a million H100 equivalents to train the best coding model in the world right now. So please join us. — Yeah, I'm Gorang. I work paired with Mro on coding.
So it's become more and more obvious to us over time that we are on a path to singularity, at least in coding. So we decided to have our best engineer in the company, Mro, lead coding, and we'll build the best coding model for everyone, to empower everyone to build. For me, the main limiting factor is probably compute and energy: where can you run the best model to support everyone? And now we are one team, and we will win on compute, and we are winning with space compute. And for every engineer out there: if you are writing kernels, if you're writing compilers, just think about what is still worth doing by hand. Maybe you should join us for the coding effort, to automate yourself a little bit, to speed yourself up. It's a really amazing year. What a year to be alive, and I can already feel the AGI, at least for coding. — Yeah. I think things will move, maybe even by the end of this year, to where you don't even bother doing coding; the AI just creates the binary directly, and the AI can create a much more efficient binary than any compiler can. You just say, "create an optimized binary for this particular outcome," and you actually bypass even traditional coding. That intermediate step will probably not be needed by, I'd say, the end of this year. And we expect Grok Code to be state-of-the-art in two to three months. So it's happening very quickly. — Yeah, meanwhile I also do Imagine, so

Imagine Dominance

you know, what do you do right after post-AGI? You probably do digital life, so that's what we are doing here as well. The Imagine team started pretty much from scratch six months ago. We had a few people, and we decided we'd do image gen and video gen, and look at what we achieved: two weeks ago we released Imagine v1, we topped many of the leaderboards, people love our product and love our model, and we have many more releases coming this month and next month. To me, there's a really high chance we may actually build the metaverse before Meta. I'll also pass it over to talk about the metrics we have on the product. — Yeah. Like Gorang said, it's only been six months since we started working on Imagine. We had no code internally for diffusion at all six months ago, and now we've launched Imagine on every product surface that we have, including seamlessly integrating into X. You can open the X app right now, long-press on any image, edit the image, and make a video out of the image. We also ran a contest recently with some really funny submissions that I'm sure many of you have seen. Imagine is growing extremely fast, and it's because of the speed at which we iterate: we ship multiple product updates every day, and model updates every other week. Effectively, what this has led to is that users are now generating close to 50 million videos every day using Imagine. And just to reiterate what Elon said earlier: to the best of our knowledge, that is more than every other provider combined, which is an astonishing place to be compared to where we were six months ago. We also generated 6 billion images in the last 30 days.
On Nano Banana: Google recently posted that 1 billion images were generated using Nano Banana in 30 days, so we're six times that. And really, the goal is not just to win; we want to win over a long period of time and have sustained greatness. The goal with Imagine is to take anything that you can imagine and turn it into reality, and we're going to speedrun that, basically. — Hey, I'm Hen. As we keep scaling our model capabilities, building visual worlds that are indistinguishable from reality, we're also building systems that unlock much more possibility than what we have right now. They will be able to generate videos much longer than today's, with stories, with souls of your imagining. By the end of the year, we will likely have models that let you generate videos of 10 or 20 minutes in one shot, without any intervention. You just need to give your imagination, and our models, our agents, will do it for you. And beyond generating those videos, we're also going to allow rendering them. We're already the fastest at generating videos, and we're going to keep pushing the extreme: we're going to render those videos in real time, and you will be able to imagine, build, and interact with your own world, and the world will respond to you in real time. It's an exciting future that we are going to build. — Absolutely. My prediction is that most AI compute is going to be real-time video understanding and real-time video generation, and we expect to be the leaders in that. It's worth emphasizing these points: six months ago we had basically nothing, or were very weak, in video and image generation and editing, and in six months we went to the number one spot,
and in fact we're generating more videos and images than everyone else combined. We're going to do the same thing with coding and Macrohard. And I think people will be pretty impressed with the Grok 4.20 model that's coming out. It's a significant improvement, and that's really just the small version of our new model; we'll have a medium and a large version that are even more intelligent. — All right. Hi everyone. I'm Toby, and I

Digital Companies

work on Macrohard, the most serious of all product names. So, arguably, giving computers to humans was a good idea. We're doing the same thing for AI. It's kind of like Inception: we're giving computers to computers. Macrohard is building a fully capable, digital, real-time (very important) human emulator. It's able to do anything on a computer that a human can do, including using advanced tools in engineering and medicine. So there should be rocket engines fully designed by AI. In a sense, it's one of the last few remaining areas where AI is significantly worse than humans, which is why I think it's one of the most exciting areas to innovate in and actually change the field. — Hi everyone, my name is John. We're building these strong reasoning models, which now control our CLI. We're actively using them every day; they are a tremendous productivity boost to the whole team. I know the voice team is killing it on that. And this is the reason we need the compute: we need large-scale compute to run these models to boost our own productivity. But 80 to 95% of the world's software has a GUI, so to truly make people's lives easier, we need to develop models that are capable of solving day-to-day tasks on a GUI. Macrohard will emulate a company where the output is digital, and this is the obvious next step for agents. Macrohard will enable true end-to-end orchestration across the desktop, and it will lead to immense economic prosperity. So we're entering an era where we need to tackle the hardest of tech problems, but to solve this, we need to hire the best people. Think of the smartest people you've worked with and put them forward for a position here.
And if you can't think of anybody, go through your phone book, go through your LinkedIn. You'll be surprised how big your actual network is. They just need the three properties we want to optimize for. First: are they clever? Can they solve hard problems? Second: are they driven? Do they have the ambition? Do they want to win? And third: are they a nice person? — Do you actually want to work with them? Yeah, so thank you. — Yeah, the Macrohard project, over time, will probably actually be our most important project, because what we're talking about is emulation of entire human companies. When you look at the most valuable companies in the world, their output is digital; they don't actually make hardware. So it should be possible to completely emulate any company whose output is digital. This will usher in an age of prosperity the likes of which we can barely imagine at this point. You need Imagine to imagine it. So this is a big deal, and this is why the words "Macrohard" are painted on the roof of the training cluster: because that's what it's going to build. — It's also pretty funny. — Yeah, it's meant to be a joke. — It's me again. You might remember me from Macrohard computer use a long time ago, but I also actually work on core product infrastructure and API. In fact, this is what I've done for most of my time at xAI. Any time you use any of our products, like grok.com, API authentication, or status.x.ai, that's done by the core product infra team, and a large portion of them actually sit in London, where we work with Jaime. We keep the lights on at the peak hour of 4 p.m. every day. We get paged at night when stuff goes down. Also, thank you to anyone in Palo Alto getting paged. There's really important work in reliability, security, and core product infrastructure.
So if you're really interested in solving difficult distributed problems with messy data, this is the team to join. — Hey everyone, my name is Diego. I think one of the main bottlenecks for these models in the next year is going to be very high-quality evals and training data. One of the ways we solve that is by taking the world's foremost experts in the respective domains, bringing them here, and having them evaluate the model. We do this for domains like medicine, finance, and law; we have voice actors and video editors who contribute daily to making Grok better. And we're going to continue working on very high-quality evals over the next few months. We have some exciting stuff on the frontier of useful tasks in finance and law. We're trying to build evals that are useful, and training data that represents useful work, not just the proxies of intelligence that I think a lot of the open-source evals are today. — Yeah, I'd like to say that we're shifting from using these common internet evals, which I think are actually not a real indicator of usefulness, to having expert tutors in each domain: every domain of engineering, medicine, law, whatever the case may be. The actual eval is: does the human expert in that arena, or our group of experts in that arena, agree that Grok is extremely useful and that the results are correct? That's actually the only eval that really matters. — Yeah, exactly. You'll see this in Grok 4.20. Because of that type of data, we've made some improvements in truth-seeking and in minimizing political bias, and the responses are much more cogent. So that's exciting. We are also working on Grokipedia. The goal of Grokipedia is to create a distillation of all human knowledge.
I kind of like to think of this as a modern-day version of the Library of Alexandria, in the quest to build the Encyclopedia Galactica, which it will one day be called. We've gone from essentially having nothing to around 6 million articles. For context, Wikipedia has around 7 million English articles. And we're improving on hallucination; our goal is essentially for Grok 5 to not have to search outside the data center. — So in the ML

Training At Scale

infra team, we are building the training, inference, and tooling software for the company. To give you an example: when we were training Grok 3, we built the pre-training framework for it, and these are some of the coolest systems, in my opinion, that you can build as a software engineer. We had 100k H100s at the time, just delivered, and we didn't quite have the software. We thought we'd have the software, but at 30k scale we realized it wasn't quite working, and it took a major, almost halfway rewrite, because there's so much going on in a data center that you can't fully account for. Switches are flapping, links are flapping, switches are going down, GPUs are burning out, you have numeric issues. And it's a system where you want all 100k H100s to behave in lockstep: a training step is about 5 seconds, and you're going 5 seconds in lockstep, but during those 5 seconds anything can happen. So you need to write a system that makes progress despite everything that can happen in the environment. We did this successfully, and it was one of the coolest times in my life; the system was running at the same time my son was born, so that was extra excitement. These problems you don't find anywhere else; nobody else has this kind of compute, and also this talent density. At the time, to give you perspective, the overall pre-training team was probably 15 people, and of those, maybe seven were working on the actual training system, and we still maintain that talent density. So if you're interested in working on these problems and you don't want to be part of a bigger organization where you're one of a thousand people working on this, this is the place; we are still a very small team. With me is Lemi from the RL and inference team. — Hi, I'm Lemi.
So, on our team, we run reinforcement learning training jobs and production inference at large scale on Earth, and probably soon in space. We are already designing a lot of things to make it more resilient and scalable. We're building a system to scale from 100k chips to millions of chips, and we optimize every aspect of the stack, like parallelism and prefill/decode, and make it resilient to every known and unknown hardware failure. So if you are a systems hacker obsessed with extreme performance and reliability, here you'll find the most interesting problems to work on. And, as with everything, it's very important to first see the problem; then you will develop the solution that no one else could develop before. Okay, I'll hand over to the tooling team. — Hello, I'm Ashib from the tooling team. Every piece of software needs a great interface to be useful. As the tooling team, we are responsible for building the platforms, frameworks, and infrastructure required for humans as well as agents to use our products. We started by building out the human data platform, the place where we collect all of our human data, and eventually expanded to build our internal engineering platform, through which we run deployments, run evaluations, and look at training results. If you really care about building a good interface, or providing a really useful framework for researchers, for agents, and for our tutors, then you should definitely join our team. — Hi everyone, I'm Yulom from the JAX team. JAX at xAI is a really small team, a couple of engineers working on JAX on GPU to optimize our ultra-large-scale GPU training. You can imagine that training at scale can be very complicated; even running hello world at scale can be complicated, right?
So we are responsible for supporting the entire company, from pre-training foundation models to RL and multimodal, scaling things from 10k to 100k and then probably 1 million H100-equivalent GPUs. To implement a lot of practical optimizations, we have to customize the entire JAX stack, from the compiler to the runtimes, and there are a lot of interesting problems. If you are obsessed with optimizing the entire stack at scale, we are probably the best place to go, because we really do have very large-scale GPU clusters and a lot of interesting problems to work on. — Hey, I'm Pangjul from the kernel team. The kernel team sits at the very bottom of the training and serving stack. Our code runs inside the million GPU equivalents that we have. If you look inside a GPU, there are hundreds of thousands of threads, and these threads talk to each other to multiply matrices and compute attention scores; some of them even talk to the million other GPUs that we have. This is the low-level system we work on, and we like optimizing every single microsecond of it. We care deeply about squeezing every last drop of performance from these GPUs. So if you like these low-level systems problems and algorithms, please join us.
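The lockstep fault-tolerance problem the pre-training team described earlier can be sketched in a few lines: every worker must finish each synchronous step, and any failure rolls the job back to the last checkpoint. This is a minimal toy, not xAI's actual framework; the names (`run_step`, `WorkerFailure`), the failure model, and the checkpoint cadence are all invented for illustration.

```python
import random

class WorkerFailure(Exception):
    """Stand-in for a GPU, link, or switch failure mid-step."""

def run_step(step, n_workers, p_fail, rng):
    # Lockstep: every worker must finish the step; one failure stalls all.
    for w in range(n_workers):
        if rng.random() < p_fail:
            raise WorkerFailure(f"worker {w} died at step {step}")
    return step + 1

def train(total_steps, n_workers, checkpoint_every=10, p_fail=0.001, seed=0):
    rng = random.Random(seed)
    step = checkpoint = 0
    while step < total_steps:
        try:
            step = run_step(step, n_workers, p_fail, rng)
            if step % checkpoint_every == 0:
                checkpoint = step        # persist model + optimizer state here
        except WorkerFailure:
            step = checkpoint            # swap in a spare, roll back, resume
    return step

print(train(1000, n_workers=100))  # completes despite injected failures
```

Real recovery is far more involved (replacing hardware, re-sharding state, restarting collectives), but the control flow, synchronous steps with periodic checkpoints and roll-back on any failure, is the core idea.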
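And as a sketch of the math those kernels exist to accelerate: attention is essentially two matrix multiplies with a softmax in between. This plain-NumPy version is purely illustrative; the production kernels tile the same computation across the hundreds of thousands of GPU threads mentioned above.

```python
import numpy as np

def attention(Q, K, V):
    # First matmul: every query scores every key.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    # Numerically stable softmax over the key axis.
    scores -= scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    # Second matmul: weighted sum of the values.
    return w @ V

rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 4, 8))  # three 4x8 matrices
print(attention(Q, K, V).shape)  # (4, 8)
```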

Supercomputer Build

— Let's see. Now we'll try to bring in Hiner and Spencer, who are actually at our supercomputer cluster in Memphis. — Hey, Hiner here. Our job is to keep the computer up and running, to train the next model of Grok and serve AI, so that it works well for users. A lot of ingredients have to come together, mainly software and hardware. There are all these GPUs, CPUs, NICs, and switches, hundreds of thousands of operating systems running as one big supercomputer, and what we need is folks who really understand how computers work at a deep level. If that is you, reach out on X. I'm handing over to Dan. — So we have 300,000 GB300-platform GPUs here today. We're still growing, still building: 847 miles of fiber per data hall, 12 data halls. If you want to be part of the world's largest supercomputer, come join us. — All right. — So, it's quite marvelous what we've been able to do in less than one year's time here. Once we're completely finished, we'll have north of a gigawatt of power online and running. We'll have the largest Tesla Megapack system in the world, larger than Hawaii or South Australia. And Zach is going to talk really quickly about actually constructing the data center. — Yep, so behind me you can see data hall 11. One of the most incredible things about what we're doing here at Macrohard is how fast we do it. Like they were saying before: over 850 miles of fiber at every single data hall, over 27,000 GPUs, and over 200,000 connections. All of this that you can see behind me was put up in less than six weeks. We do that over and over again; we massively parallelize it. It's pretty much the most complex and consistent engineering design and construction project you can possibly imagine. — So come join us. Yes, and the other really awesome thing about this is that everything is completely vertically integrated within this team.
From architecture, mechanical, electrical, and structural, all the disciplines. And we also care a lot about efficiency while we're designing all of this. It's not just about getting the most compute online the fastest, but also achieving the highest PUE in the industry, using as much power-smoothing technology as we can, and being really good partners in the community here in Memphis, with the Tesla Megapacks we have going. You can check them out: xAI Memphis. Back to you. — All right. So that was live from the front lines in Memphis. Fundamental to any AI company's success is the compute advantage, and what we've demonstrated over and over again is that xAI can actually deploy more AI compute faster than anyone else. As Jensen Huang, CEO of Nvidia, has said many times in interviews, there is no one faster at getting AI compute online than xAI. So congratulations, guys. Yeah, this is what it looks like. That's really phase one, which is 330,000 Grace Blackwells, with "Macrohard" written on the building; that's not an image edit, it actually is on the roof of the building. And then Macroharder will be the building that you can see, with "Macroharder" and the rockets on it. That will be another 220,000 GB300s. All of this will be training the models that you experience. It's absolutely fundamental, obviously, to have large-scale training compute in order to get the best models. Yeah, I'm sort of reminded of the José meme, where you see one guy digging and there are like seven people watching. And one of the big

X App Surge

differences between XAI and other companies is we are actually Jose. — All right. I'm uh Nikita. You might know me as a part-time ship poster, full-time customer support for X. Um, so uh we're now reaching over a billion people across our family of apps. Um, every time news breaks, it just becomes evident that this is the most important communication tool of our time. Uh it's where the the most influential people uh com convene. Uh it's where truth is crystallized. Everything is downstream of X. The reason they say this is going to hit Facebook in a week, it's because it happens here. Um and uh I think we're only beginning to realize its full potential. Uh we had a remarkable year for the app. Um we rolled up our sleeves and got a ton done. Uh January was our biggest month ever for the app uh in terms of engagement. Uh and then February is on track to beat that. Uh much of the credit lies with the algorithm team. Uh they've been putting in crazy hours. Uh and it's clearly paying off, but there's still a huge amount of work to be done. Um on the top of funnel side, uh first-time downloads are up over 50% uh every month. Um, and we're exhibiting right now like basically the growth rates of an early stage uh consumer product. Um, we also made a ton of headway in solving one of the like 20-year-old problems of the app, which was ramping up new users. Uh, new users are now spending 55% more time per day in the app uh than they were 6 months ago. Um, and on the core product side, we're hitting our stride, too. Um, not only did we rebuild the algorithm, we rebuilt our onboarding flows, and we're seeing double-digit increases on all our key metrics. We rebuilt notifications, our web browser, uh, exch of the app has been rebuilt to be better than ever. Um, and it's clear that if we're focused, uh, we can move mountains and evolve this platform. Uh just last month we did a little push on uh articles. Um and uh articles published are up 10x. Uh articles read are up 17x. 
And on all other fronts: over the holidays we did a big push on subscriptions, and we just crossed a billion dollars in ARR there. I think with the X app there are very few unknowns in the path for us to win and become the number one app in the world. We know what to do. The ball's in our court. It's ours to win, and it's just a matter of us executing.

XChat Upgrade

— Yep. And we've evolved what used to be the old Twitter DM stack, which was unencrypted, basically just text, into a fully encrypted messaging system that allows you to do audio and video calls. It has all the things you'd want from any messaging app: disappearing messages, screenshot blocking, the whole feature set. And we will be open sourcing the code for this in the next few months, just as we are open sourcing the recommendation algorithm code, so people can actually see what we're doing. Nothing beats transparency for believing in a company. We're going to be the only recommendation algorithm that actually open sources, so you can see what it does and how it's evolving. X Chat will also be open source, so you can actually see if there are any vulnerabilities. There will be no hooks for advertising or anything else like that in X Chat, which is really intended to be a generalized communication system. And in the next few months we'll be releasing a standalone X Chat app. So if you just want to do messaging, you can do that; you don't have to go to the core product. It will have desktop sharing and multi-user support, so you can do video calls with lots of people. It's really intended to be a fully functional communication system.

X Money Launch

For X Money, we've actually had X Money live in closed beta within the company, and we expect in the next month or two to go to a limited external beta, and then to go worldwide to all X users. This is really intended to be the place where all the money is, the central source of all monetary transactions. So it's really going to be a game changer. And the reason we say a billion users, actually over a billion users, is that while our monthly users are on average around 600 million, the number of people who have the X app installed is well over a billion. It's just that most people only occasionally come to the X app when there's some major world event. But as we give people more reasons to use the X app, whether it's for communications, for Grok, or for X Money, whatever the case may be, we want it to be such that if you wanted to, you could live your life on the X app. And as we make it more and more useful, we'll obviously give people compelling reasons to use the app every day. My expectation is well over a billion daily active users.

Beyond Earth

In order to understand the universe, you must explore the universe. There's only so much you can learn from just being on Earth, with telescopes and colliders on Earth. Ultimately, you have to go out there and explore the universe to understand it. And that's the motivation behind the combination of SpaceX and XAI: to accelerate humanity's future in understanding the universe and extending the light of consciousness to the stars. In the grand scheme of things, when you look at how much energy Earth is actually using for civilization, we're only using, call it, roughly 1% of the potential energy of Earth. And if we wanted to use even a millionth of the sun's energy, that would be roughly a million times more energy than civilization currently uses. The only way to access that energy, the energy of the sun, is to extend beyond Earth. Earth is really a tiny dust mote in a vast darkness. You know, the sun is 99.8% of all mass in the solar system. So you have to expand beyond the tiny dust mote that is Earth to make any significant dent in using the sun's energy. Like I said, you'd have to expand roughly a million times just to get to one millionth of our sun's energy, and then go beyond that, extending to the galaxy and maybe someday even to other galaxies.

Orbital Data

So the next step beyond Earth data centers is our Earth-orbital data centers. We'll be launching, with SpaceX, orbital data centers at the 100-to-200-gigawatt-per-year level; not cumulative, I mean per year. And ultimately we see a path to maybe launching as much as a terawatt per year of compute from Earth. But what if you want to go beyond a mere terawatt per year? In order to do that, you have to go to the moon.

Moon Factories

So by having factories on the moon building AI satellites, and having a mass driver, which is the kind of thing you really only read about in science fiction, but we're going to make it real, we're actually going to have a mass driver on the moon. And if you do that, you can go several orders of magnitude greater. You can go to a thousand gigawatts or more per year, and ultimately get to maybe a millionth, then a thousandth, and maybe even a few percent of the sun's energy. It's difficult to imagine what an intelligence of that scale would think about, but it's going to be incredibly exciting to see it happen. I really want to see the mass driver on the moon shooting AI satellites into deep space, just one after the other. I can't imagine anything more epic than a mass driver on the moon, and a self-sustaining city, then going beyond the moon to Mars, going throughout our solar system, and ultimately being out there among the stars, visiting all these star systems. Maybe we'll meet aliens. Maybe we'll see some civilizations that lasted for millions of years, and we'll find the remnants of ancient alien civilizations. But the only way we're going to do that is if we go out there and explore. And this is the path to making it happen.
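The energy scales in this closing section can be sanity-checked with a quick back-of-envelope calculation. The figures below (solar luminosity and global civilization power draw) are standard reference estimates, not numbers from the video, and the exact multiplier depends on which civilization-power figure you assume:

```python
# Back-of-envelope check of the energy scales discussed above.
# Both constants are standard reference values, not from the video.
SOLAR_LUMINOSITY_W = 3.8e26    # total power output of the Sun, watts
CIVILIZATION_POWER_W = 2e13    # rough global primary power use, ~20 TW

# "A millionth of the sun's energy"
one_millionth_of_sun_w = SOLAR_LUMINOSITY_W * 1e-6  # 3.8e20 W

# How many times current civilization power that represents
ratio = one_millionth_of_sun_w / CIVILIZATION_POWER_W

print(f"One millionth of solar output: {one_millionth_of_sun_w:.1e} W")
print(f"That is ~{ratio:.1e}x current civilization power use")
```

With these assumptions the multiplier lands around 2×10⁷, i.e. tens of millions of times current usage; the talk's "roughly a million times" is the same order-of-magnitude territory, and the gap shrinks or grows with the civilization-power figure chosen.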
