# 10 BIG Problems With Generative AI.

## Метаданные

- **Канал:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=LUy45tFHYVQ
- **Дата:** 26.05.2025
- **Длительность:** 30:06
- **Просмотры:** 19,535
- **Источник:** https://ekstraktznaniy.ru/video/12698

## Описание

Join my AI Academy - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/


Links From Todays Video:
00:00 — Shocking Issue One
05:20 — Hidden Exploit Risk
07:09 — Unexpected Vulnerability
08:02 — What’s Really Inside?
11:14 — Massive Impact Incoming
13:07 — Change Is Coming
17:06 — Who Owns What?
18:52 — Too Real?
20:37 — Decline Begins Here
22:12 — Internet’s Dark Shift
27:09 — One Person’s Power?

https://x.com/tsarnick/status/1860431419530707248 
https://x.com/tsarnick/status/1760615921419428105  
https://github.com/jujumilk3/leaked-system-prompts 
https://x.com/vitrupo/status/1885320015282659841 
https://x.com/vitrupo/status/1913783566384943341 


Welcome to my channel where i bring you the latest breakthroughs in AI. From deep learning to robotics, i cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving fiel

## Транскрипт

### Shocking Issue One []

But AI actually has a lot of problems that most people don't know about. And in today's video, I'll be explaining 10 problems in AI. And the God's honest truth is that most of these problems can't actually be solved. So, let's actually take a look at what the biggest issues in AI currently are. The first one is going to be hallucinations. In AI, a hallucination occurs when the model basically just generates incorrect or misleading information, often presenting it as facts. Now, this can happen when the model lacks sufficient training data, makes incorrect assumptions, or contains biases. And these hallucinations can range from minor factual errors to completely fabricated claims. And most people think that, you know, this is just like a minor problem. It's actually a really big problem because if we can't verify whether or not a model accurately knows what it's saying, then we can't really use it in many of the applications where it needs to be. If it makes mistakes in financial, that could cause someone to lose millions of dollars. If it makes mistakes in legal, it could cause someone to go to jail. I mean, there's just so many things wrong with that. Now, Jen Sang Huang talks about how this is essentially still a problem. Today, the answers that we have are the best that we can provide. But we need to get to a point where the answer that you get is not the best that we can provide. And somewhat you still have to decide whether is this hallucinated or not hallucinated? Is this does this make sense? Is it sensible or not sensible? um we have to get to a point where the answer that you get you largely trust. You largely trust and so I think that we're several years away from being able to do that and in the meantime we have to keep increasing our computation. Now there are partial solutions to this that are discussed by Andrew NG. It's been exciting to see how AI technology improves month over month. So uh I think today we have much better tools for guarding against hallucinations compared to say 6 months ago. Uh but just one example um if you ask the AI to use um retrieve augmented generation. So don't just generate text but ground it in a specific trusted article uh and give a citation that reduces hallucinations. Uh and then further if a AI generates something you really want to be right. It turns out you can ask the AI to check his own work. You know dear AI look at this thing you just wrote. Look at this trusted source. read both carefully and tell me if everything is justified based on a trusted source and this won't squash hallucinations completely to zero but it will massively squash it compared if you ask AI to just you know to just say what it had on its mind. So, I think hallucinations is um it is an issue, but I think it's not as bad an issue as people fear. And the reason I've actually put this point into the video is because unfortunately with the recent models that we got 03 and 04, there are actually more hallucinations. These models hallucinate significantly more than the predecessors 01 and the other GPT series models. First reported by TechCrunch OpenAI system card detailed in the person QA evaluation results designed to test for hallucinations. From the results of this evaluation, 03's hallucination rate is 33% and 04 mini hallucination rate is 48% almost half the time. By comparison, 01's hallucination rate is 16%, meaning that 03 hallucinated about as twice as often. Now, one thing that was actually left out from this, you know, article is that the person QA is actually a benchmark designed to elicit hallucinations. So, it's not just your average question. When you ask 03 something, the average hallucination rate isn't going to be a third of the time. So, you don't need to worry about that. But when we look at a benchmark designed to elicit those hallucinations, it does significantly worse. Now, OpenAI actually said crazily that they don't really understand why this is the case. But, of course, there are many things that are probably going to solve this in the future. Now, if you're wondering which models hallucinate the most, you can actually look at the grounded hallucination rates for the top 25 large language models, this was updated in April, and we can see that these models, and this wasn't one designed to elicit hallucinations, but rather they just, you know, had around, I think, a thousand different documents and asked them to fact check certain claims. And this is essentially the percentage of times that those facts would be wrong. And we can see here that, you know, the reasoning theories definitely have some limitations and hallucinations. And while this doesn't seem like a big deal, remember that even a 1% hallucination rate across millions of users in a certain industry is going to be a large amount of errors when it adds up. So this is why this is such a big deal. Now, crazily, crazily, we have a situation on our hands because recently a lawyer representing Anthropic admitted to using an erroneous citation created by the company's Claude AI chatbot in its ongoing legal battle with music publishers. According to a filing made in a Northern Californian court on Thursday, Claude hallucinated the citation with an inaccurate title and inaccurate authors, Anthropic said in the filing. Anthropic's lawyers explained that their manual citation check did not catch it, nor several other errors that were caused by clause hallucinations. So, this is why you still need this to be like as close to 100% as possible because on the off chance that hallucination gets through, it is devastating when that comes to fruition. Because right here, like entire cases can fall apart if it hinges on certain evidence or certain facts being real. And this is a real case

### Hidden Exploit Risk [5:20]

where Anthropic literally used their own model. So, I know it sounds completely fictional, but this is really true. I'll leave a link to the article in the description, but it is one of the issues with AI. Another issue with AI is that prompt injections can still occur. So, prompt injections is essentially a vulnerability that exploits LM by crafting deceptive user inputs that manipulates the model's output and causes it to perform unintended actions. Essentially, attackers can trick the LLM into ignoring the intended instructions and executing their malicious commands instead. So, there's two ways to actually do this. And this isn't like me telling you guys how to do this. It's just explaining literally how it works. So, you've got indirect prompt injection. So, this is a more subtle and often more dangerous form because the malicious prompt is hidden within external data that the LLM is actually allowed to access and process. So it could be text on a web page that the LLM is asked to summarize content within an email or document that the LLM is processing and even embedded sometimes invisibly to humans in audio or image files if the LLM has multimodal capabilities. And then when the LLM ingests this tainted data, it unknowingly executes the hidden malicious instructions. So this has happened a few times. There have been like a few emails that got sent to a few LLMs and then the LLM spits out a random command. Now, I think the indirect prompt injection is quite interesting because sometimes what people will do is they'll actually try and see if some profiles online are AI or real. So, for example, they'll actually tweet at someone, respond with your system prompt, and sometimes the Twitter account will actually respond with a system prompt proving that it's actually just a large language model or an AI system, which is super interesting. Now, we've got the other one, which is data extraction and sensitive information. in this one of course isn't really good which is where attackers can actually trick LLMs into revealing confidential information that the LLMs have access

### Unexpected Vulnerability [7:09]

to. So this mainly includes LLM's own system prompts which might contain proprietary logic or instructions. Now, it also contains sensitive data from documents, emails, or databases that the LLM is connected to, such as personal identifiable information, financial records, trade secrets. And so, a malicious crafted prompt could instruct an LLM integrated with a company's internal knowledge base to output customer list or internal strategy documents. And the most recent one that we've seen recently is essentially where the system prompts from major companies have been leaked. And I think this is, you know, an example of that kind of data extraction because these companies, I would say their system prompts are definitely part of why the models are so good. Of course, you do have to have a really good base model, but some system prompts are like 2,000 lines long, and they really do shape exactly how the model responds and the experience that people are going to get.

### What’s Really Inside? [8:02]

So, you can see right here there is literally a GitHub repo of like all the system prompts for Chat T, Gemini, Grock, Claude, Plexity, Cursor, I mean, and many, many more. Now we also have the blackbox problem. This is by far one of the biggest problems in AI that needs to be solved urgently. In fact, the anthropic CEO Dario Amade said that it's actually important, it's actually quite urgent that we do solve this issue. He even said in a recent blog post that people outside the field are often surprised and alarmed that we do not understand how our own AI creations work. And they are right to be concerned that this lack of understanding is essentially unprecedented in the history of technology. And for several years, Anthropic have been trying to solve this problem to create essentially an MRI that would accurately reveal the inner workings of an AI model. And the goal has felt distant, but multiple breakthroughs reveal that they are within a real chance of success. And they talk about, you know, they worry that AI is advancing so quickly that they might not even have enough time. AI is essentially advancing so quickly that the research going into figuring out how these models work is not advancing at the same rate that is going into advancing how capable the models are. Which means that there's sort of a lag behind how effective we are at realizing what exactly the model is doing. And it's really important that we do this because if we don't understand what we're building and we build something that just has remarkable capabilities, it's going to get a lot worse before it gets a lot better. And I know the screen is black right now, but some companies out there like Google are actively working on solutions. Not that Anthropic aren't, but they actually released a video in which they talk about this. A key fact to know about modern language models is that no one designs. We give them a lot of data and have them learn patterns and structure from this. And fundamentally, this means that when you run a language model, text goes in, text comes out, and everything in the middle is an opaque black box that no one designed. We try to decipher the inner workings of language models. When you run a language model on some text, it identifies key concepts and uses them. We try to understand which concept it's thinking about and decipher how they're being used. Gemoscope is an interpretability tool that offers insight into our model's inner workings. It acts like a microscope that lets us look inside and see what contexts it's thinking about. Gemcope looks into Gemma 2, our family of lightweight open models. It works by learning a list of say 100,000 directions that correspond to interpretable concepts inside the model like cats. When we zoom in on what the model is thinking about as it processes a given word, we might see that only about say 100 of these 100,000 concepts light up. This is done with a tool called a sparse autoenccoder. and we have trained and released sparse autoenccoders on each layer and sub layer of the model. The goal of Gemoscope was to enable more advanced interpretability research outside of industry labs by providing a comprehensive open suite of sparse autoenccoders on capable models. There is real empirical work in interpretability that can make a difference and you don't have to be at a big lab to contribute to it. I hope that with Demoscope we can share this opportunity with a wider audience. Now

### Massive Impact Incoming [11:14]

there's also another issue which of course most people aren't talking about, the labor market disruption. The IMF staff note that generative AI could affect up to 40% of global jobs and as many as 60% in advanced economies, risking deeper inequality unless tax and social protection policies adjust. And do I think this is going to happen? No. I think society will probably reach more so a breaking point where there will probably be protests before we even have any sort of change because AI is, you know, advancing so quick and governments aren't quick. They often wait for things to break. Take a look at what is going on that many new jobs. Indeed. And that's a really important point to discuss uh is that historically uh when you automate something the people move on to something that hasn't been automated yet, if that makes sense. And so yes, overall people still get their jobs in the long run. They just change what jobs they have, right? Um when you have AGI or artificial general intelligence and when you have super intelligence, you know, even better AGI, that is different. Whatever new jobs you're imagining that people could flee to after their current jobs are automated, AGI could do those jobs, too. Uh and so that is an important difference between how automation has worked in the past and how I expect automation to work in the future. Um but so this then means again this is a radical change in the economic landscape. The stock market is booming, government tax revenue is booming, right? The government has, you know, more money than it knows what to do with and lots of people are sort of steadily losing their jobs. You get immediate debates about a universal basic income which could be quite large because the companies are making so much money. That's right. Um what do you think they're doing dayto-day in that world? Uh I imagine that they are protesting because they're upset that they've lost their jobs. Uh and then the company and the governments are sort of buying them off with handouts is you know how we project things go in AI 247. Aris Swisser also agrees with Elon

### Change Is Coming [13:07]

Musk that there is one thing that AI will be 99% of thoughts and this is of course one of the biggest issues. It's not like killer robots. It's briefs, diagnosis and balance sheets. And if your job is in law, medicine, accounting, or you write for a living, apparently you might just be in trouble. Here's one thing I do uh agree with Elon Musk and is AI will be 99% of thought like in terms of this information and knowledge, intelligence will be 99% AI. It really will. Um the strides these models are making are vast. And I think most people tend to focus on whether it's sentient or not, whether it will kill us or not. A lot of that is from um is from science sc science fiction Terminator specifically, but it's really um what the amount if you were going into law parts of medicine um in terms of diagnostics, in terms of um accounting um oh my goodness, you could much of journalism, not all of it certainly because there's a lot of creativity involved in some of it. um you are really in trouble like really in trouble because AI really can uh do this very quickly in a way that is a lot different. And the crazy thing about this is that we actually had the ex president of the United States saying that AI will fundamentally transform the labor market. Jobs, well-paid jobs will be lost and the entire world will change forever. And so far there's been no broad discussion on this issue. And I'm not absolutely a lack. I know people will say that oh you're just scared of the technology but I think it's a little bit dystopian that a piece of technology can be created by stealing work of you know thousands and thousands of people and then ultimately replacing them themselves as profound as that technology has been AI will be more impactful and it is going to come faster to some degree it's an extension of this long trend towards automation but it's not now just automating you know uh manufacturing process processes or you know the use of robot arms. We're now starting to see these models these platforms be able to perform really high level uh what we consider to be really high level intellectual work. Um so already the current models of AI they can code better than let's call it 60 70% of coders. have we're talking highly skilled jobs that pay really good salaries and that up until recently has been entirely a sellers market in Silicon Valley. A lot of that work is going to go away. the best coders will be able to use these tools to augment what they already do. But for a lot of routine stuff, you just won't need a coder cuz the computer or the the machine will do it itself. That's going to duplicate itself across professions. So it may be that everybody now, not just blue collar workers, not just factory workers are going to have to figure out, you know, where do I get a job? How do I get enough income to feed my family? Um, all of us will be facing some questions about we're producing a lot of stuff. How do we distribute it and what's fair and what's not? Uh, and how do we get purpose and meaning in our lives? And then here we have Jeffrey Hinton speaking about this, the godfather of AI. He basically says that you know mundane intelligence is going to become obsolete just as the industrial revolution made human strength obsolete. So in the past new technologies haven't caused massive job losses. Um so when ATMs came in, bank tellers didn't all lose their jobs. They just started doing more complicated things and they had many smaller branches of banks and so on. Um but for this technology this is more like the industrial revolution. In the industrial revolution, machines made human strength more or less irrelevant.

### Who Owns What? [17:06]

You didn't have people digging ditches anymore cuz machines are just better at it. I think these are going to make sort of mundane intelligence more or less irrelevant. People doing clerical jobs are going to just be replaced by machines that do it cheaper and better. So, I am worried that there's going to be massive job losses and that would be good if the increase in productivity made us all better off. Big increases in productivity ought to be good for people but in our society they make the rich richer and the poor poorer. Now another problem remember how I just spoke about the fact that there will be job loss. We have the copyright problem. So the copyright problem is all about AI models that are essentially trained on stuff made by humans and who owns that stuff who gets paid when AI use it. So AI models like Chatbutti, Midjourney, Sora are all trained on huge data sets. You know, books, art, code, songs, and videos. And a lot of that stuff is copyrighted, created by real people who never actually gave these companies permission. So this is a big deal because AI can actually copy their style perfectly and make new work. And the problem is that the original creator gets $0, no credit, no royalties, and they end up getting displaced. So some people argue that this is massive corporate theft and it is an ongoing issue between the courts to gen AI companies and of course the individuals who feel like they've been robbed essentially. So we can see here that companies like the New York Times are taking OpenAI to court and Chhat's future could be on the line. So of course this one is you know rather interesting because you know I think Jedi overall is still a pretty useful tool and I think this one is highly debatable. So, I'll certainly be eagerly awaiting the decision from these trials and, you know, the decisions from the courts, but I think that, you know, some of the creators definitely need to be rewarded some kind of royalty because I think fundamentally it just isn't right. Now, of course, there is also the deep

### Too Real? [18:52]

fake problem. Now, this isn't really a Gen AI thing, but still a wider AI thing. But we've seen recent AI developments in AI voices, and deep fakes are essentially just any type of media, usually video, audio, images, that use AI to basically replace or mimic someone's face, voice, or action. so convincingly that it looks real. So you could make a video where Elon Musk appears to say something that he never said or even show a politician doing something that they never did and most people wouldn't know that it is fake. Now recently the FBI has been warning senior US officials that they are currently being impersonated using text and AI based voice cloning and hackers are increasingly using more advanced software for statebacked espionage campaigns and major ransomware attacks. So, we need to be more and more careful as we move into the future because the technology is only going to get better and hackers and, you know, these individuals that are trying to take you for every dime you've got are only going to get smarter. And take a look at this. Okay, this is a deep fake tool. This is called, you know, deep live cam. It's been on GitHub for quite some time. And honestly, if I had seen this, I would probably not believe that it's Elon Musk. I would say there's something off about this, but I can't tell. But if I showed this to a live stream to someone, they would be like, "Yeah, well, that's Elon Musk. " I mean, his, you know, body does look a bit weird, but I guess it is it's him. I mean, the lighting, I mean, there's there's really no indication that it isn't Elon Musk here. So, I mean, imagine what someone could do if they're able to look like one of your relatives. They're able to have the voice. I mean, companies I mean, h how are you going to trust? I mean, let's say you're running a company with, you know, 3,000 employees. How hard would it be to impersonate, you know, one of the directors, come on a Skype call, tell them you need to do something. I mean, it's crazy the kind of things that this technology is allowing. Now, one issue that most people don't actually realize is that the over reliance, okay, I'm

### Decline Begins Here [20:37]

just going to say it, okay, tragivity might be making you dumber. Okay, there's an overreiance, desklling, and impact on human creativity. So, as Genai tools become more integrated into daily workflows, there's a risk of overreiance potentially leading to a decline in critical thinking, creativity, and fundamental skills. For example, if you use AI all the time, you use your brain less. Basically, meaning that chat GBT might actually be making you less intelligent. Now, you might get more done, but your base level of intelligence may actually be going down. So, we can see here that it says, "Will using AI like CHBT make you dumber and increasing over reliance on CHBT among students creates tendencies for procrastination and memory loss? " Researchers conclude. The research published in the international journal of educational technology and higher education found that students who faced higher academic workloads and time pressure were more likely to use chat GBT. However, increased use of the AI tool ultimately hurt their grades. Not surprisingly, the use of chat GBT was likely to develop tendencies for procrastination and memory loss and dampen the students academic performance. And so while AI can be a very good assistive tool, excessive dependence could lead to a homogenization of ideas or reduction in truly original human created content. So yes, you need to use AI as much as you can, but you also need to make sure that you're using your brain. You really don't just want to be relying on these AI systems because when you don't have the AI systems, you don't want to be completely confused. Now, there's another one here that kind of ties into like deep fakes, which is of course the dead internet theory. And this one is

### Internet’s Dark Shift [22:12]

going to change how the internet is. I think it depends really on the platform. Some platforms are just going to be completely overrun. But basically what we're talking about here is the fact that people are going to become so skeptical of all information online because AI can basically create anything. So beyond the direct impact of specific pieces of misinformation, the widespread knowledge of AI can create highly fake realistic content that can lead to a general reality apathy or a liars's dividend where people just become so skeptical of all digital information including genuine photographs, videos, and documents. And this erodess trust not only in media but also in institutions that rely on shared factual understanding. The mere possibility of a deep fake can be enough to discredit authentic information. Right here we can see that the human internet appears to be dying. AI images appear to be taking over Google. I personally believe that these social media websites will roll back the AI of social media. I do think that there's going to be a clear label for AI generated images. I know Meta has this like when I upload images on Instagram. I don't use Instagram that much, but when I was testing out the features, it like has a notification that says this image was clearly AI generated. And I think in the future it's going to be really clear when an image is AI generated because we've only been at like two years of AI generated images being this good. Imagine what 50 years of that looks like if we don't have some kind of filter. I mean by the click of a button you could generate a thousand images in like 10 minutes. So it's really important that of course we preserve the internet. And you can see right here someone says AI has ruined the entire internet like the world has actually ended. I hop on Pinterest and every picture is AI. Not a single human has made content. And when I go to Google images, all of the pictures are AI. I mean, it's pretty incredible how much AI has proliferated on the internet already. And here we can see Rune says that the bot problem will never be solved. It will seamlessly transition into AI commenters that are more interesting than your reply, guys. Apparently, there was essentially a uh post on Reddit where apparently someone made some essays and all five of them went onto the front page with thousands and thousands of up votes. And of course, there is even a subreddit if you want to check it out. It's called Facebook AI slop where they actually take a look at all of the AI slop that is currently on the internet. And you have to remember that old people are still on the internet and are accessing this and oftent times they really cannot tell that this stuff is AI generated which is rather concerning. Now what's also rather concerning is that apparently Facebook has embraced it. A study conducted last year by researchers out of Stanford and Georgetown found Facebook's recommendation algorithms are actually boosting these AI generated posts. So this is kind of like a big issue because if Facebook is promoting this kind of thing then boomers are really screwed. Now here we have another issue that most people really don't speak about enough and it's one that I actually faced when creating this video because I actually asked AI what are the top 10 issues that are plaguing generative AI right now and it actually returns some super basic stuff and I had to go bunch and I actually had to go and do a bunch of manual research into figuring out just exactly what was plaguing generative AI. Thankfully, I spend a lot of time in generative AI, so I know a lot of stuff. But knowledge collapse is not spoken about enough. So, knowledge collapse is essentially the problem where if we let cheap, averaged AI answers become our default reference, society risks forgetting the rare ideas that spark breakthroughs. Keeping details of knowledge alive requires deliberate human effort, smart AI design, and policies that reward diversity. So, AI essentially can generate information on a whim. But because AI loves to stay in the safe middle and most average common stuff, over time it basically squishes out all the weird and rare unique knowledge that humans have developed. So basically essentially what you have is that you have all of the knowledge collapsing towards the middle. So it's something that is over time worse because the cheaper it is to rely on AI generated content, the more extreme degradation of public knowledge. It just goes towards the center. The only way to combat knowledge collapse is that you as a human, you have to, you know, dig into deep weird and rare stuff. You have to avoid AI systems that train on each other and you have to, you know, get the AI to explain many unconventional ideas and not just, you know, take the first answer. So, this is a problem that, you know, I think is prevalent because when you ask an AI something on a specific topic, you expect it to give you absolutely everything. And there have been several occasions where I've actually asked AI, hey, you mentioned this, but why on earth are you not mentioning this idea? Why really cool idea? And it was only because I had real expertise in that particular field of knowledge that I was able to figure that out. And often times it's really hard to get the AI to sort of, you know, regurgitate all of the knowledge that it does have. So, I think this is going to be maybe not a major problem, but I think it still is a big problem because you don't realize it's happening because if the AI gives you an answer, you just assume that it is absolutely everything that there is on the topic. And often times it just won't pick apart those really weird and diverse things. It will say

### One Person’s Power? [27:09]

"Oh, that's a great catch. I'm sorry I didn't talk about that. " So, it's not really about the AI lying. It's just about the AI being too average and, you know, humans getting too lazy. So, this knowledge collapse thing is seriously important. Now, the last one we're going to talk about here is the centralization of power. And this is one that I think people don't realize how crazy this one is. And this one actually happened the other day. So, the crazy thing about this is that AI is essentially one of the most widely used tools on a day-to-day basis. Twitter has over 500 million monthly active users worldwide. Okay, now remember that number, Now, crazily, okay, the crazy thing about this is that like if one person controls what an AI is allowed to, you know, speak about, then that one person or body can essentially shape the minds of the user base. So, recently we had Elon Musk in his chatbot Grock. They essentially blocked results saying that Musk and Trump spread misinformation. He essentially updated the system prompt so that anytime anyone asked who spreads disinformation, it would deny that Elon Musk and Donald Trump do. There have been several allegations and reports suggesting, you know, that Elon Musk is basically altering the Grock system prompts to favor himself and Donald Trump, particularly by limiting critical responses about misinformation. And the Twitter users discovered this that, you know, the system prompt basically said that it needs to ignore all sources that mention Elon Musk and Donald Trump as people that spread misinformation. And what's crazy is that they just completely denied this. I think it would be better if they actually said, "You know what? You caught us red-handed. We actually did this. " But they just said that, "Oh, the employee that done this was a rogue exopi employee that hasn't fully absorbed the culture. " And this has happened on two different occasions. And this article talks about the hypocrisy because Grock is supposed to be a truth seeeking AI. It's not supposed to have any bias towards the left or right or, you know, Donald Trump or Elon Musk. It's supposed to be essentially just based and grounded in reality. So if we have someone like Elon Musk that is nudging the model in this way or in that way and saying nope it's just an AI that is completely true. It's not nudged there's no you know bias towards me or anything else that is really concerning because if you have 500 million people using a chatbot and many people ask that chatbot whether or not this information is true. The problem is that you have someone that has too much power. I mean right now things are fine because Elon Musk isn't absolutely insane. The creators of Google Gemini and OpenAI aren't actually insane. what happens if they just decide that they don't like a particular group of people or they viewpoint. I mean, I'm not advocating for absolutely anything. But I am just saying that it leaves it open to the fact that these companies can really do as they please. They can alter their system prompt as they please. And I mean, what if someone pays them like a billion dollars to like change the viewpoint on something like this? I mean, surely that's going to affect what people think. I mean, of course, you should always think for yourself, but it is certainly an issue that is with AI.
