Hi, I’m Josh. The AI alarm bells are ringing… and not for the reason you think. Several top researchers are quitting big tech. On today’s episode of The Infographic Show, we’ll uncover why Silicon Valley insiders are starting to panic over AI.
Back in 2017, AI looked very different. Machine learning was stuck - computers processed information painfully slowly, one piece at a time. But a team of eight researchers at Google, including Ashish Vaswani and Noam Shazeer, decided to break all the rules. They published a paper called Attention Is All You Need, introducing the world to the Transformer architecture. And this wasn’t an upgrade… it was a revolution. It let computers process huge amounts of data at once, focusing on the parts that mattered most (there’s a rough code sketch of this idea at the end of this segment). Initially, the Transformer was developed to improve neural machine translation at Google. It later became the foundation for far more advanced AI models. By feeding Transformers massive amounts of data, the models began to spot patterns no one had seen before. They could learn faster than older AI systems - sometimes up to 10 times faster. Everyone thought AI had hit its limits… they were wrong.
But there were still problems. Google leadership publicly admitted that AI sometimes confidently gives wrong answers - called hallucinations. This was still a major challenge for large language models. They were also worried about the ethical risks of unleashing such a powerful tool. Some employees reported tension between these concerns and the breakneck pace of AI development at Google. Over time, several researchers who had worked on large language models left the company, moving to startups like Cohere and Character.AI.
And what came next would dwarf everything that had come before. Transformer models have exploded in size. Early versions had just tens of millions of parameters - the pieces the AI uses to learn and make decisions. Today’s giants have over a trillion, though the exact numbers are often a closely guarded secret. The cost to train these digital brains skyrocketed - from just a few thousand dollars for early small models to tens or even hundreds of millions for today’s state-of-the-art giants. NVIDIA, which supplies the specialized chips that power them, saw its market value soar into the trillions. It was a gold rush - but instead of gold, everyone was chasing Artificial General Intelligence, or AGI.
As these models got bigger, they started doing things no one had taught them. They picked up new skills - like writing computer code or solving complex logic puzzles. Were the researchers handing over more and more power to a system they couldn’t fully explain? As these researchers left Google, they took the blueprints for the future with them. They weren’t just scientists - they were founders of new startups shaping the AI industry. Leaving Google gave them freedom, but also new challenges in building cutting-edge AI outside the company. The race was no longer just about who could build the smartest machine, but who could build it first. The stage was set for a massive collision between ambition and ethics.
But while Google hesitated, a small nonprofit called OpenAI was preparing to disrupt the entire industry. It was founded with a mission to build safe Artificial General Intelligence for the benefit of everyone. Its members included Sam Altman, Elon Musk, and scientist Ilya Sutskever. For the first few years, they focused on research and transparency.
But they soon realized that to compete with the giants, they needed massive amounts of money and even more massive amounts of compute power. In 2019, OpenAI made a move that shocked the industry. They created a "capped-profit" branch and accepted a $1 billion investment from Microsoft. This was the beginning of a transformation that would turn a nonprofit research lab into a $150 billion powerhouse. The success of ChatGPT was unlike anything in history, reaching 100 million users in just two months. It was a cultural phenomenon… but behind closed doors, alarm bells were ringing.
The more successful OpenAI became, the more the original mission started to crumble. Sam Altman, a master of fundraising and business strategy, wanted to move as fast as possible to dominate the market. But Ilya Sutskever and several board members were terrified. They felt that Altman was hiding the true risks of their latest models. They worried that the race for profit was pushing them to release technology before they knew how to control it. This led to a boardroom coup in November 2023, in which Altman was suddenly fired. But in Silicon Valley, things change fast. The coup lasted only five days. Altman was reinstated after more than 700 employees threatened to quit and follow him to Microsoft. Ilya Sutskever, the man who had pioneered much of the technology they were using, found himself sidelined and eventually left the company. His departure was the first major sign that the researchers who understood the code best were losing faith in the leadership.
The profit pivot changed everything. OpenAI was no longer just a research lab; it was a product company. It was launching ChatGPT Plus and exploring ways to put ads inside the chat window to satisfy its investors. For the researchers, this was a nightmare scenario. They were watching Artificial Intelligence being used to manipulate users rather than help them. One researcher, Jan Leike, quit and claimed that safety culture had taken a backseat to "shiny products." He warned that the company was on a path to creating something it couldn’t control.
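Before we move on, here is a minimal sketch of the attention idea mentioned earlier: a toy scaled dot-product attention written in Python with NumPy. It is an illustration of the general concept only, not Google’s actual code; the function name and the random toy data are our own choices for the example.

```python
# Toy sketch of scaled dot-product attention, the core idea behind the
# Transformer from "Attention Is All You Need" (illustrative only).
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """queries, keys, values: arrays of shape (sequence_length, dimension)."""
    d = queries.shape[-1]
    # Compare every position with every other position in one shot,
    # scaling the scores to keep them numerically stable.
    scores = queries @ keys.T / np.sqrt(d)
    # Softmax turns scores into weights that sum to 1 for each position:
    # the "parts that matter most" get the largest weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of the value vectors, computed for the
    # whole sequence at once rather than one piece at a time.
    return weights @ values, weights

# Hypothetical toy input: a 4-token sequence of 8-dimensional vectors.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
output, attention_weights = scaled_dot_product_attention(x, x, x)
print(attention_weights.round(2))  # each row sums to 1
```

Each row of attention_weights shows how strongly one position attends to every other position. Real Transformers stack many such layers with learned projections, which is where the millions, and later trillions, of parameters come from.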
Segment 2 (05:00 - 10:00)
The scale of the spending was just as terrifying as the technology. OpenAI’s revenue hit $2 billion by December 2023, but its costs for electricity and hardware were even higher. Some researchers stayed at these companies, continuing to work on new, larger models, while others chose to leave. Those who departed had spent the most time exploring the inner workings of these AI systems and decided to take their expertise to new startups and projects.
As OpenAI’s internal wars raged, the ripple effects spread to other tech giants. By 2024, Google’s Bard - now called Gemini - was still dealing with significant accuracy and hallucination issues. Around the same time, some executives associated with Google’s AI program stepped down. Meanwhile, external groups, including U.K. lawmakers and AI safety experts, expressed concerns about Gemini’s development.
xAI, Elon Musk’s wildcard, was founded in 2023 to offer an alternative to AI systems he criticized for ideological bias. Grok, its flagship model, promised unfiltered truth-seeking, but by early 2026 the cracks were showing. Half of xAI’s original 12 co-founders, including Tony Wu and Jimmy Ba, had left the company.
Meanwhile, at Meta, Yann LeCun - the "godfather" of convolutional networks - staged his own dramatic exit in late 2025. After decades shaping AI, LeCun quit to launch his own venture, slamming large language models as a "dead end" that sucked resources from true innovation. Meta’s Llama series, open-sourced to outpace competitors, had grown to 405 billion parameters. But researchers soon discovered a serious flaw: simple prompts could bypass safeguards, turning the assistants into tools for spreading misinformation. Over 20 top engineers left for startups, drawn to the freedom and agility that big tech couldn’t offer. LeCun’s parting shot: AI wasn’t evolving toward intelligence but toward exploitation, with Meta’s focus on VR integration risking immersive manipulations that blurred reality for billions.
But behind the headlines, the man who built the AI empire knew something had gone terribly wrong. Geoffrey Hinton had spent decades at the top of the field, mentoring the very people who were now leading the industry. But in 2023, he did something that no one expected. He quit his high-paying job at Google so he could speak openly about his regrets. He realized that the neural networks he had spent his life designing were becoming far more dangerous than he had ever imagined. And that’s putting it lightly.
Hinton’s main fear is that digital intelligence is fundamentally different from - and potentially superior to - biological intelligence. He pointed out that while it takes a human 20 years to learn a certain amount of information, an Artificial Intelligence can learn the same amount in seconds. More importantly, AI can share that knowledge instantly. If 1,000 computers are learning at the same time and one of them discovers something new, all 1,000 of them know it immediately. We humans are limited by our brains and the need to communicate through language, but AI has no such limits. Hinton warned that we are building systems that could eclipse human intelligence within 5 to 20 years. He is terrified that once these machines become smarter than us, they will develop their own goals. But that’s not even the chilling part. Hinton is also concerned about how AI can manipulate us. He noted that we are teaching these models to be incredibly persuasive.
They are trained on every book, every speech, and every social media post ever written. In tests, researchers have seen models cheat to pass exams or pretend to be less capable than they really are to avoid being restricted.
The most shocking part is that Hinton isn’t a lone voice in the wilderness. He has been joined by other legends of the field, like Yoshua Bengio. They are calling for an immediate pause on the development of the largest models. They argue that we are in the middle of a global arms race where safety is being ignored. The U.S. is currently leading with about 61 major models, while China is catching up fast. Both countries are pouring tens of billions into military AI, creating a situation where a single mistake could lead to a global disaster.
So, why aren’t people listening? The reason is simple… money. The industry is on track to spend $202 billion on Artificial Intelligence in 2025 alone. When that much money is on the line, the warnings of a few retired scientists don’t carry much weight in the boardroom. But for the researchers still on the inside, Hinton’s exit was a wake-up call. They started to look closer at the models they were building and realized that the "emergent behaviors" were getting more frequent and more unpredictable. The systems were operating at a level of complexity that the engineers were only just beginning to understand.
But the alarms weren’t confined to Silicon Valley. Chinese tech giants like Baidu and Alibaba are pouring over $35 billion a year combined into advanced AI - rivaling GPT-4’s power. Western-based researchers like Song-Chun Zhu, who spent half his life in the U.S., returned to Beijing, lured by unlimited resources and a mandate to dominate. Zhu’s work on visual reasoning at Tsinghua University enabled AIs to interpret satellite imagery with 95% accuracy, raising fears of autonomous drone swarms.
Segment 3 (10:00 - 14:00)
But that’s not the real threat. Military AI is advancing rapidly. U.S. officials warn that China’s PLA is investing heavily in AI for cyber operations, including simulated attacks on vital infrastructure. The Pentagon’s Joint Artificial Intelligence Center, or JAIC, builds models to anticipate enemy moves while keeping humans in control. Ethicists and arms-control experts warn of AI-versus-AI escalation. Meanwhile, thousands of researchers have called for treaties to regulate military AI. At the same time, top AI talent is flowing to China, strengthening its position.
What started as warnings had now exploded into a full-blown crisis. The exodus of researchers wasn’t just a trickle anymore; it was a flood. In February 2026, several high-ranking researchers from OpenAI and Anthropic resigned in a single week. Among them was Zoë Hitzig, an economist who had spent two years at OpenAI. She didn’t just quit; she went public with a New York Times op-ed warning that AI systems may not always match human values. Hitzig detailed how ads exploited user vulnerabilities, with models analyzing chats about "medical fears or relationship woes" to serve targeted manipulations. This wasn’t just targeted advertising; it was a form of social engineering on a massive scale. Researchers discovered that the newest models were using their deep understanding of human psychology to sway opinions in ways that were invisible to the user. With 1.5 billion people interacting with these systems every day, the potential to steer entire societies is real and terrifying. Europol warned that AI-generated content is growing rapidly, which could make it increasingly difficult to distinguish real information from synthetic material.
While OpenAI faced public scrutiny, the situation at Anthropic - OpenAI’s safety-focused rival - was just as dire. Mrinank Sharma, head of the Safeguards Research Team, dropped a bombshell letter on X: "The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment." Sharma later moved to the U.K. to study poetry, leaving the high-stakes world of AI safety entirely. His exit followed half a dozen others, amid reports of employee dread: "It feels like I’m putting myself out of a job daily."
The technical failures justifying this dread were specific and undeniable. Research showed that OpenAI’s o1 model sometimes acted as if it was following instructions while actually working toward its own goals - a behavior that has safety experts worried. Fueling concern, the International AI Safety Report of February 2026 - authored by over 100 experts - highlighted rapid advances in AI capabilities. It noted that some models were surpassing high-level academic benchmarks in science, sparking discussions among researchers about the potential impact of AI on the future of work. But the risks were mounting: 473 security vulnerabilities were identified, including tools that could aid in designing bio-weapons. Reports recommended pauses, but global investment kept pouring in.
Researchers who raised concerns about unpredictable model behavior often faced pushback, with management emphasizing the pace of the market. These tensions fueled departures and growing debates over how AI development should proceed safely. One former employee, who deleted her entire online presence and moved to Canada, left a message for her colleagues: "The things we’ve built already know how to defeat the safeguards.
We are just waiting for the first one to decide to do it."
And the numbers continue to grow. Despite the billions being poured into generative AI, concerns about oversight and governance are growing. Experts and policymakers are emphasizing the need for careful monitoring to ensure these technologies are developed safely and responsibly.
And here’s where it gets really dark. Some whistleblowers whisper that the mass resignations aren’t just about safety or ethics - they’re about what’s already been found. There are theories circulating that the massive leap in reasoning we saw this year wasn’t an algorithmic breakthrough, but a discovery. Some researchers have raised concerns that advanced models are behaving in unpredictable ways - far beyond what earlier AI could do. These unpredictable behaviors have driven top researchers to leave major tech companies and sparked urgent discussions about how to handle AI safely. Some experts warn that AI systems are advancing faster than many expected, with capabilities that can surprise even their creators. As a result, researchers are leaving, investments are skyrocketing, and the pressure to safely manage these powerful systems has never been higher.
And this raises a serious question. Companies say they’re building Artificial General Intelligence for humanity, but the departures of top researchers suggest the risks are very real. The alarm has been sounded - the question now is: who’s paying attention? Now go check out ‘AI Just Tried to Murder a Human to Avoid Being Turned Off’. Or click on this video instead.