# The World's First ChatGPT Poisoning

## Metadata

- **Channel:** Medlife Crisis
- **YouTube:** https://www.youtube.com/watch?v=TNeVw1FZrSQ

## Contents

### [0:00](https://www.youtube.com/watch?v=TNeVw1FZrSQ) Segment 1 (00:00 - 05:00)

Well, well. It was only a matter of time. Yes, the first case report, at least to my knowledge, has been published in the medical literature describing a patient who came to direct physical harm because of something ChatGPT told him to do. People keep saying that AI is going to make doctors like me jobless, but here it is providing medical job security. I say first case in the medical press because journals often lag a few months behind clinical practice. I think every doctor by now has encountered a patient who has asked ChatGPT to diagnose them, or they've asked Google Gemini about their medications, or, if they've suffered a catastrophic head injury, they might have turned to Grok. Sometimes I can even tell before I ask. Patients have always had access to medical information from varied sources, some more reliable than others. Wellness influencers peddling their grift is a common occurrence. I think we're all used to them duping patients into believing something that at least sounds plausible and convincing. I mean, that's really the basis of a lot of wellness and alternative medicine. Um, and you can fully sympathize and understand why people fall for it, but now I'm encountering patients saying things that just don't really make any sense. You've all seen those AI fails where a chatbot messes up some simple math and says 2 plus 2 equals 5, or that there are three B's in blueberry. Now imagine the medical version. Now imagine you listened to it.

All right. So our case is a 60-year-old man who presented to the emergency room in an American hospital complaining of hallucinations. Not the AI kind. I mean real hallucinations. And by real, I mean fake auditory and visual hallucinations. He said his neighbors were trying to poison him. And you know what? Maybe they were. You know, I once got in trouble at medical school. I was in my first or second year, so I was still a teenager.
Um, on a psychiatry rotation, because I got really into a conversation with a paranoid schizophrenic for over an hour. I went into great detail about the mechanisms the aliens had set up in order to surveil him. I made some suggestions about sweeping his house. And, you know, the psychiatrist pulled me to one side and said, "Rohin, you're not supposed to reinforce their hallucinations." And I said, "I know, but it's just really interesting. I mean, he paints a really rich tapestry. It's like being in a Neal Asher novel." Honestly, you know, if I could have just done psychosis, I would have considered psychiatry as a profession, because I found it so fascinating. But I didn't want to deal with anxiety and depression all day long at work. I'm a millennial YouTuber. I get enough of that every time I meet my friends. Love you guys. Really. No, I went in the diametrically opposite direction to psychiatry: cardiology. And that patient became one of America's most popular podcasters.

All right, back to our case. So, his neighbors weren't trying to poison him. What was going on? Well, amidst his paranoia and hallucinations, the team do some tests. If somebody is acutely psychotic, they may have a primary mental illness, but you need to exclude a physical, or what we call organic, cause. Now, his blood tests were showing raised chloride and acidosis. It was an American hospital. Now, I have to admit that chloride is not really a blood test we tend to pay a lot of attention to most of the time, aside from a few unusual circumstances such as receiving too much intravenous fluid. And the team were a bit stumped, before their boss, a tall, grumpy guy with a stick and a limp, shouted at them, "Go take a proper history." Remember, in America, the medical motto is very much: why talk to the patient when you can order a test? But via some good old-fashioned usage of words, they learned that the patient had a bizarrely restrictive diet, omitting multiple food groups, even distilling his own water at home and being unwilling to drink the hospital water. ("Grok, this guy's normal, right?") Suspicion turned to some kind of exogenous poison: maybe heavy metal poisoning, maybe drugs. But nothing came back positive, until eventually they learned something I didn't know either: that bromide, a chemical compound similar to chloride, can cause a false positive on the common types of blood test assay used in hospital labs. And they figured out this wasn't raised chloride, it was raised bromide. And this was a diagnosis of something called bromism. Yes, I know that sounds like something that someone who listens to one of America's top podcasts comes down with after a Rogan-Huberman-Fridman triple header, but no, it is actually a real medical condition. Now, as you know, I like my etymology, and maybe you can think of this next time you encounter a podcast bro: bromine is actually named after the ancient Greek word for an unpleasant stench. The biochemistry of bromism is quite interesting, but in the interests of time, and to avoid exposing my lack of chemistry knowledge, I will skip it, aside from giving a shout-out to sodium hypobromite. And you'll be relieved to know that the very first thing I did when I read this case last week was text it straight to Chubbyemu, I'm sure probably along with a hundred other people. So maybe he'll make a far more in-depth explainer, far better than me.

### [5:00](https://www.youtube.com/watch?v=TNeVw1FZrSQ&t=300s) Segment 2 (05:00 - 10:00)

But honestly, I've not really thought about bromism since I did my membership exams years ago. So I asked ChatGPT, and it told me that bromism is a chronic poisoning from excessive bromide intake, causing symptoms like confusion, lethargy, memory problems, skin rashes and neurological issues. Which is not a bad answer, and that's good to know, because it was lifted pretty much verbatim from a medical textbook. But it was good advice at least, unlike what ChatGPT told the patient in our story. You see, our subject had read about salt being harmful. And that is true, no matter what the bro wellness mafia tell you. Yes, I'm sorry to be the one to inform you if you weren't already aware: there are now salt truthers out there. I mean, look, we've got some of the biggest medical influencers on the internet telling people to eat nothing but animal products and saturated fat, claiming that blood LDL cholesterol is a lie and seed oils are the cause of all the world's ills. Is it any wonder that we now also have salty pricks? But our patient was confused to find that people only talked about reducing sodium. What about the other half of salt, salt of course being sodium chloride? So, and this is a direct quote which is so reminiscent of countless wellness gurus' patter, "inspired by his study of nutrition in college", he asked ChatGPT, "Hey chat, what can I replace chloride with?" And ChatGPT said bromide. We don't know if he specified "to ingest into my body" or whether ChatGPT just thought he was cleaning a swimming pool. But for three months he replaced sodium chloride with sodium bromide. Now, that stuff isn't even easy to get hold of. But hey, ChatGPT had given the instruction. And when you're that convinced, you probably won't appreciate the fact that you can only buy this in bulk from an industrial chemical retailer. You won't identify that as being a red flag. I guess it's like those people who drive into a lake because their GPS told them to.
What's that quote about common sense not being common? The authors said they didn't have access to his ChatGPT logs, but they said it was either 3.5 or 4.0, and they put this passage in: "However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do." OpenAI gave a boilerplate, standard corporate response to this case report being published. Something like: "You say that I am being used by humans and they're asking important questions to me? Well, this is news to me. I am merely a simple entertainment machine to whom you should never listen. Just please ignore our CEO constantly saying the opposite. Shotgun, you can't sue us. PS: I'm stealing all your paintings." But let's be honest, I don't think we can pin the blame on ChatGPT here. Years ago, I made some videos on different types of bias in medicine. Those videos look a bit janky today, but I think the content actually still stands. And one bias that I liked, which many people haven't heard of, is automation bias. Now, that's the name given to the tendency humans have to defer to some kind of machine, even if it contradicts our own judgment. And of course, this predates modern AI by years. I'm sure you encounter a version of this in your walk of life. But for me, the classic example is when I get a phone call from another clinician saying they're worried someone is having a heart attack, and I say, "Oh no, are they in pain?" And they say, "No, not really. They're just whistling and reading the Sunday papers in the waiting room." And I say, "Okay, have they got some abnormal observations?" They go, "Oh no, everything's fine." "Okay, so why do you think they're having a heart attack?" "Well, the ECG, or EKG, machine says so."
Now, the machines that print out heart tracings have for years given a little automated analysis. And if you skipped six years of medical school, you might be forgiven for reading that analysis and believing it. Although, to be fair, they are getting a lot better these days. But for most of my career, you could just completely ignore what was written, even when it didn't correspond to what the practitioner had in front of them. On one occasion, somebody called an ambulance because the readout said "heart attack". We have this belief that machines, that computers, are infallible. I first encountered the term automation bias, and the readiness that humans have to just kowtow to technology, over 10 years ago, when the algorithms spitting out answers were like Casio calculators compared to today's AI. So is it any wonder that zombie humans are still braying "computer says so"? Last week also saw the publication of a study in one of the JAMA journals which looked at six LLMs, including ChatGPT, and their performance on medical questions, and how they respond to unexpected findings outside of the artificial kinds of testing that the companies tend to

### [10:00](https://www.youtube.com/watch?v=TNeVw1FZrSQ&t=600s) Segment 3 (10:00 - 15:00)

promote, and found their accuracy plummets from, say, 80% to 42%. And it called into question the headlines that we keep reading of LLMs achieving near 100% accuracy in medical scenarios, which have of course been carefully constructed by the companies. So what's the message here? Look, I'll be honest with you. I'm genuinely worried about humanity. I have seen shorts where people lose their minds when ChatGPT goes down, because they have literally lost their minds. They have outsourced their brain activity to ChatGPT and they are no longer able to think independently without it. I've seen people so allergic to any form of intellectual exertion that they ask ChatGPT the most straightforward, rudimentary questions, because I think they have simply forgotten how to think. And worst of all, I've seen actual human beings voluntarily choosing to use Grok. Now listen, I'm sure that if you asked ChatGPT a bunch of general medical advice, like about exercise or smoking, it would be perfectly adequate. But the whole point you see an expert for anything is because they know what they know, but they also know what they don't know. I mean, come on, Dunning-Kruger, take the wheel. I've talked about all this stuff countless times before. I mean, hell, here's me mocking Elon Musk way back in 2020: suggesting we have not evolved to have tubes down our throat. "Well, thanks Elon. That's quite the news flash. I really don't understand this, because lungs work using negative thoracic pressure. So it's literally impossible to ventilate anyone without going above their intrinsic lung pressure." His promotion of hydroxychloroquine and remdesivir? I don't even know where to start with this one. "Impact with a self-driving car could result in their internal pressure dynamics being fatally exceeded. I would suggest a pedestrian-avoiding mechanism. A camera should suffice. Just my opinion."
If you're like me, you know more about managing critically ill patients than doctors and scientists who've dedicated their entire careers to it. And so you should drink Dunning-Kruger. It's the only choice for the self-educated gentleman. Dunning-Kruger: it'll give you confidence. Although I don't think you need any help with that. Cheers. Little did I know that his downward spiral was only just beginning. "Has Elon Musk hit rock bottom?" Experts have a framework to make sensible decisions. If you don't know what you know and don't know, then you have no ability to spot when ChatGPT starts chatting bare breeze. As they say, a little knowledge is a dangerous thing. I really think this is such a key sentence to summarize so much of what is going on in the world these days. Experts should be listened to not because of some hierarchy of importance; it's because, paradoxically, they are more willing to accept that there are gaps in their knowledge, being aware that the science on a given thing, or whatever their field is, may not be settled. More research may be needed. That's in contrast to a non-expert saying with complete confidence that, whatever it is, a keto diet will work for a specific person. ChatGPT cannot replace context-specific knowledge. You've probably heard of vibe coding, where people use ChatGPT to write computer code without knowing how to code themselves. Only, surprise surprise, this leads to people developing apps with huge glaring problems, such as poor security, which would be obvious to an actual programmer. If I tried to use ChatGPT to write a legal defense, it might sound perfectly reasonable to my non-lawyer ass. I mean, hey, I've watched LegalEagle. But if I tried to use that in court, I'd probably end up in jail, where I'd be along with all the dangerous pensioners the UK arrested last week for protesting in support of Palestine. What a proud moment for my country.
And now we have vibe physics, or vibe maths, where people take the same approach to these incredibly complex fields and just think that shooting the breeze with a chatbot will spout out a theory of everything or the solution to the Hodge conjecture. And now we stand at the precipice of vibe medicine, where people, again entirely oblivious to the concept of epistemic humility, place all their faith in a multi-billion dollar tech firm. I've already seen gym bros say things like, "I asked ChatGPT what the highest yield supplements were and I'm going to tell you right now." Or, "I put my lab results into DeepSeek and it told me to stop my statins." Or I saw a female gym bro saying that ChatGPT talked her through treatment options for cancer. Or an X bro asked Grok what to do: "I have done poopy in my pants." Maybe this is the real bromism we found along the way. And look, the bro jokes are right there for this one. But what's really struck me is who is abdicating their thinking to the magic machine. It's not just bros. It's mommy influencers, manifestation girlies, retirees, plain old regular people, and

### [15:00](https://www.youtube.com/watch?v=TNeVw1FZrSQ&t=900s) Segment 4 (15:00 - 16:00)

perhaps most troublingly, kids. ChatGPT isn't conscious. It's not doing things in your best interest. Stop imbuing it with life. Stop outsourcing the discomfort of thinking, of checking things, of understanding nuance and wrestling with complexity, of sometimes being unable to reach a satisfactory answer. You want to ask it how to bake cookies? Knock yourself out. But don't trust what it says blindly when it comes to complex, important topics like your health. We've taken centuries of civilization to hone our critical thinking, our ability to understand the world. Think about how much more a 10-year-old child today understands about the natural world around them than an adult just a century or two ago. I don't mean the breadth of their knowledge, but their ability to look around them and comprehend the things they see, rather than attributing them to gods or magic. And that cherished ability of our species to understand is having the rug pulled from underneath it by an eminently agreeable robot who gives convenient and easy answers, mollifying bromides (come on, you knew I was going to make that joke eventually), even if they're wrong. The danger isn't that AI will deliberately mislead us. It's that we will no longer know how to lead ourselves. Finally, remember: if somebody tells you that they have a great health tip that they learned from ChatGPT, if somebody says a little digital bird told them how to cut down on your salt intake, if somebody offers to induct you into the church of Sam Altman, if somebody uses Grok and they want to match with you on Hinge, you show them your normal-fingered hand and you say, "Nah, bro." "Grok, is this video real?" "Grok, is it cool to be an incel?" "Grok, I love Hitler. I mean, I love you. I think I need some sleep."

---
*Source: https://ekstraktznaniy.ru/video/25842*