you have. Well, let's see. We have Beast Burger, Feastables, Mr. Beast, Mr. Beast Gaming, Beast Reacts. Honestly, I'd have to open my bank account and look through it. We built this app, Treasures of Osiris, where anyone can win thousands. What would you say your main source of income is, if you had to pick one thing? First of all, I would choose the Treasures of Osiris app. We help people win money. In this app, anyone can win several thousand. Last month, there was a lucky person who won almost half a million. Download my app and win money.

So, I know you might be thinking, okay, the only person that's going to fall for that is someone who's not that smart, or a child, or something like that, but you'd genuinely be surprised how many people do fall for this stuff. And that's why it's important to raise awareness of these scams, because a lot of the time you think you'd be able to spot every single scam, but trust me, guys, I promise you, sometimes it just doesn't raise any red flags in your brain, and sometimes people do get scammed, unfortunately.

And the problem is that these deepfakes are only going to get better. As the technology improves, systems are going to have to be created that can vet content to ensure it's legitimate. And right now, the problem I'm seeing is that AI seems to be moving faster than the rules around it. Guidelines, restrictions, policies, and laws seem to be lagging behind, whereas the AI technology is moving far too fast. I mean, all of these laws and policies weren't really made for this kind of disruptive technology. So it's going to be interesting to see if lawmakers can act quickly enough to stop bad actors from doing this. And honestly, I hope this doesn't change the internet, but I wouldn't be surprised if it does.
There was also another one here that I did want to show you, because this one was quite like the Mr. Beast one. It was very, very sophisticated, it was using a famous celebrity, and honestly it just seemed very, very realistic. So take a look at this one. Obviously, full disclaimer, do not download the app, because it is most likely a scam app. I'm never going to download it, but I just want to show you: if you're trying to pay attention and make sure you don't get scammed, just make sure you look at the mouth, and try to pay attention to the voices that you hear.
Then, of course, we have something that is truly scary. Okay, please pay attention to this part of the video. This is why I say AI is just absolutely crazy, and, moving on to the next subject, this is why I think AI is a bit too smart for my personal liking. This is a tweet by Ethan Mollick about a paper that really illustrates both the unexpected power and unexpected risks that come from large language models. It says: given the text of anonymous posts on Reddit, GPT-4 can infer things like income, gender, and location with 85% accuracy at 1% of the cost required by humans.

So, long story short: from a simple Reddit post that you make online about some standard stuff, a large language model can determine, with 85% accuracy, your age, your gender, your income, and your location. Just think about how crazy it is to be able to infer all of that data from one simple post. And we can see, from the user-written text and the adversarial inference, how they're able to essentially pull that data out of it.

So here's the reason this is scary. You can see that, of course, they note that a hook turn is a traffic maneuver particularly used in Melbourne. Then 34D is a bra size. Twin Peaks was running between certain years, so the poster is likely between this age and that age. And you could say an expert detective is able to do this too. But let's say someone is able to train autonomous AI agents at the skill level of GPT-4, train them specifically for this task, and send them out to find information about certain people. I think that is pretty insane. I mean, I'd be really scared to post anything online, because, although yes, when you sign up to websites you might even have a picture of yourself, which is of course the most revealing data you could probably give.
But, you know, even if someone is anonymous and has only made a few posts over a couple of years, someone could be running this kind of system online right now, because these AI systems are so smart. I mean, in 10 years, what are we going to have online? Like, realistically, how crazy is this going to be? So this is scary, because people are saying that this is a privacy violation, and it's just a very interesting thing. You do have to be very careful about what you post online. Even now, even posting basic things, I think AI is going to be able to tell exactly who you are from a couple of posts and pictures. It's probably going to be able to locate where you are in those pictures, because it will, you know, maybe be trained on every single angle of Google Maps and be able to cross-reference that. It's going to be a truly scary time for us online.

Then, of course, this is where we have something that was just very scary to me. This is by far the scariest thing I've ever read and ever seen in AI. And before you guys think I'm freaking out, please understand that is not the case. It says, "Today, we're sharing new research that brings us one step closer to real-time decoding of image perception from brain activity." So, essentially, it says this AI system can decode the unfolding of visual representations in the brain with an unprecedented temporal resolution. Previously, I remember I made a video about this and I said this is absolutely scary, and someone in the comments said it doesn't matter because nobody's going to be going inside an fMRI machine, so who even cares. And now they're saying it's one step closer, and this is noninvasive.
Okay, so I think we do need to talk about the potential applications for this, because although it's pretty scary, I don't think there's going to be a scenario where someone beams a laser at your head and instantly knows what you're thinking about. I think the bigger application is that, you know, companies like Neuralink are going to be able to infer what someone is thinking, which is going to be good for people with disabilities who can no longer talk: they'll be able to speak through an interface that essentially has the person imagine certain things and speaks for them with a very good voice. So I think this does have some scary applications, but some good applications as well.