# This AI Researcher Just Revealed SHOCKING ChatGPT/AI BOMBSHELL

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=PqSbPLwStAc
- **Date:** 05.08.2023
- **Duration:** 21:49
- **Views:** 23,122
- **Source:** https://ekstraktznaniy.ru/video/14753

## Description

[1] https://www.forbes.com/sites/andreamorris/2023/05/09/ai-emergent-abilities-are-a-mirage-says-ai-researcher/?sh=24a9445e283f
[2] https://www.linkedin.com/pulse/power-perils-emergent-behaviors-ai-what-you-need-know-elgendimba-1f
[3] https://nautilus-cyberneering.de/2022/05/31/ai-unexpected-behavior-why-transparency-in-ai-is-vital/
[4] https://www.udacity.com/blog/2021/08/self-learning-ai-explained.html
[5] https://www.quantamagazine.org/self-taught-ai-shows-similarities-to-how-the-brain-works-20220811/
[6] https://www.technologyreview.com/2019/09/17/75427/open-ai-algorithms-learned-tool-use-and-cooperation-after-hide-and-seek-games/
[7] https://dataintegration.info/novelty-in-the-game-of-go-provides-bright-insights-for-ai-and-autonomous-vehicles

https://twitter.com/robertskmiles/status/1663534255249453056


Welcome to our channel where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives t

## Transcript

### Segment 1 (00:00 - 05:00) [0:00]

We don't know what we're doing. That's right: the creators themselves are in awe of the immense power and unpredictability of AI. Many advanced AI systems, including deep neural networks, are often referred to as black boxes because their decision-making processes remain shrouded in mystery, even to their creators. These systems possess incredible predictive capabilities and can make accurate decisions, but understanding exactly how they arrive at those decisions remains a challenge. Imagine a puzzle with missing pieces, where the AI system somehow manages to complete the picture flawlessly. But how? It's like witnessing a magic trick without knowing the magician's secrets.

The black-box problem presents a significant dilemma. On one hand, we witness the remarkable achievements of AI systems pushing the boundaries of what was once thought impossible: these systems can diagnose diseases, predict financial markets, and even assist in complex decision-making processes. On the other hand, the lack of transparency raises concerns about their reliability, accountability, and potential biases.

To shed light on this phenomenon, we'll be referencing a thought-provoking tweet from AI expert Rob Miles. He boldly proclaimed that even though we built AI, we're still grappling with the profound question of how it actually works. It's an astonishing admission that will make you question the very nature of our creations. This raises important questions: how can we trust AI systems if we don't fully comprehend their decision-making processes? What are the implications of relying on systems that even their creators find mysterious?

Rob Miles tweeted: "I think most people quite reasonably think we built ChatGPT, so we must basically understand how it works. This is not true at all. Humans did not build ChatGPT. In a way, it would be closer to say we grew it. We have basically no idea how it does what it does." His analogy likened the process of building AI systems to a plane company CEO who doesn't know how to build jets but knows how to hire engineers: the CEO's expertise lies in managing the team rather than comprehending the intricate details of jet construction. Similarly, the developers behind AI systems may not fully grasp how those systems function. Rob Miles further emphasized the enigmatic nature of AI by stating: "So yeah, we didn't build ChatGPT in the sense most people would use the word. We built a training process that spat out ChatGPT. Nobody can actually build ChatGPT, and nobody really knows how or why it does the things it does."

These statements highlight a fundamental challenge in the field of AI: the lack of complete understanding of AI systems, even by those who create them. Despite their remarkable capabilities, AI algorithms like ChatGPT often operate as black boxes, generating results that seem bizarre and inhuman. Rob Miles also shared an intriguing example from Neel Nanda, who trained a tiny Transformer to perform addition. Neel spent weeks trying to decipher how the Transformer arrived at its solutions, a rare instance where someone gained insight into the workings of such a model. This raises important questions: how can we trust AI systems if we don't fully comprehend their decision-making processes? What are the implications of relying on systems that even their creators find mysterious?

If you think that's mind-blowing, wait until you hear about emergent properties in AI. These are like hidden superpowers that AI systems develop all on their own, without any explicit programming. Emergent abilities, also known as emergent capabilities or emergent properties, have been causing quite a stir in the AI community. These are the extraordinary abilities and behaviors that emerge from the interactions of simpler components within AI systems. They are like magical surprises, revealing the untapped potential of artificial intelligence. From the creation of awe-inspiring music and art to self-driving cars that navigate complex environments, we'll unravel the wonders of emergent properties.

Let's start with the realm of music and art. Imagine AI algorithms that can generate entirely new compositions and stunning visual artworks. Machine learning algorithms analyze vast collections of existing music and art, extracting intricate patterns and rules. These algorithms, like Google's Magenta project, create original pieces that challenge our traditional notions of

### Segment 2 (05:00 - 10:00) [5:00]

creativity and inspire new frontiers for human expression. Another example is self-driving cars. Through reinforcement learning techniques, they learn from their own experiences, continuously improving their decision-making processes. Companies like Waymo and Tesla have accumulated millions of miles of real-world driving data, enabling their self-driving cars to adeptly handle intricate situations, including traffic lights, pedestrians, and challenging road conditions.

Additionally, emergent properties can manifest in self-learning AI systems. These systems, fueled by massive amounts of data, evolve their own understanding of the world. They uncover patterns, make connections, and develop insights that even their creators couldn't have anticipated. It's as if they've tapped into a well of untapped potential. But what do these emergent properties mean for the future of AI and our understanding of intelligence? We may witness new forms of creativity, where AI systems generate music, art, and ideas beyond human imagination. But let's not forget the potential risks: just like any untamed power, emergent properties can lead to unforeseen consequences. The very essence of these properties challenges us to ask important questions about AI safety, ethics, and responsible development.

On May 10th, at the Stanford Data Science 2023 conference, computer science researcher Rylan Schaeffer presented groundbreaking research that challenges the very existence of emergent abilities in AI language models. You heard it right: Schaeffer's research questions the validity of the claims and measurements surrounding emergent abilities. While he doesn't dismiss the rapid progress of AI, he raises concerns about the skewed methods used to detect emergent abilities. Could it be that our current understanding of these skills is flawed? Schaeffer acknowledges the complexities surrounding the choice between an open-science framework and a closed AI environment: "There are many people who have put forward kind of thought experiments or hypotheses about what these models might do, and broadly they fall into these two buckets. One kind of, maybe, misuse: so somebody takes a language model and does something that is really bad, but it's not the language model that's responsible. And then there's kind of the second bucket of what people might call existential risk, where these language models are incredibly smart, possibly smarter than all of human society, they have instrumental goals, and they take over and they can do what they want. And both of those seem like... I don't want to put a probability on it, I think many people have differing opinions, but it's definitely something where the probability of both is non-zero. So there's a real and significant, I shouldn't say significant, there's a real chance that both misuse and existential threats might be real." Schaeffer emphasizes that AI models can become more capable and potentially dangerous even without exhibiting emergent abilities.

It is crucial to accurately measure the progress and development of AI systems. However, the challenge lies in the limited access independent researchers have to these models, as they are controlled by private companies. Transparency and unbiased evaluation become essential factors in understanding the true capabilities and potential risks of AI. The concerns raised by Schaeffer shed light on the complex landscape of AI development. As AI continues to evolve, it becomes imperative for researchers and developers to work together to strike a balance between progress and safety.

Emergent behavior is like watching AI systems come alive as they develop unexpected strategies that were not programmed but emerged through their learning process. OpenAI's groundbreaking experiment took this concept to a whole new level by testing whether competition within a virtual world could lead to even more sophisticated artificial intelligence. Two teams of AI agents engaged in a thrilling game of hide-and-seek. But this wasn't your ordinary hide-and-seek: these agents had no prior instructions. Through hundreds of millions of game rounds, something incredible happened. In the early phases, the agents started with basic avoiding and chasing strategies, moving around without manipulating objects. But as the game progressed, their behavior became increasingly sophisticated, surprising even the researchers themselves. The hiders discovered the power of fort building: they moved and locked boxes and barricades, creating impenetrable forts, while also coordinating with each other to speed up the process. The seekers, on the other hand, developed a counter-strategy, using ramps to climb over the walls of the forts. Just when

### Segment 3 (10:00 - 15:00) [10:00]

everyone thought the game had reached its peak, the seekers found another way to break into the hiders' fort: they used a locked ramp to climb onto an unlocked box and then skillfully surfed their way over the walls. The hiders responded by locking all the ramps and boxes, creating an impregnable fortress. These emergent strategies are a testament to the power of multi-agent competition and reinforcement learning: the AI agents created their own tasks and forced each other to adapt, leading to a level of complexity that surpassed anyone's expectations.

Another example: in 2016, AlphaGo shocked everyone by defeating the champion four out of five times, using moves that no human had ever seen before. This incredible achievement demonstrates how emergent behavior can lead to groundbreaking advancements in AI. AlphaGo's unconventional moves challenged the very foundations of the game, showcasing the immense potential of AI to push the boundaries of human knowledge.

In an insightful article titled "Novelty in the Game of Go Provides Bright Insights for AI and Autonomous Vehicles" by Dr. Lance Elliott, the underlying factors behind AI's unexpected behavior are briefly discussed. Dr. Elliott highlights that this novelty can stem from the immense processing power and the underlying AI algorithms at play. He further explains that AI models trained using machine learning (ML) or deep learning (DL) techniques can pick up on subtle patterns in the data they are trained on. These patterns then become embedded within their algorithms, leading to unexpected behaviors, including the potential for biases to emerge. If you're interested in delving deeper into the topic of bias, we've provided a link to Dr. Elliott's article in the description below.

Now, for those who are new to the world of AI and wondering how these models are trained, let's break it down. Imagine a base algorithm as a small child: this child receives education and exposure to new information, shaping their mind, behavior, and decision-making processes. Similarly, AI models undergo a training process that transforms a simple algorithm into a complex model. This training involves feeding the model vast amounts of data. This data can be structured or unstructured, meaning it has been filtered by humans in advance or not. Depending on whether it's machine learning or deep learning, AI models are trained in a supervised or unsupervised manner; unsupervised training means that the model learns from the data without requiring direct human intervention. With these various factors at play, AI models can display unexpected results, making it challenging to predict their future behavior with 100 percent certainty. This unpredictability poses a significant challenge that needs to be addressed.

But here's the twist: these models resemble the way our own brains learn. Just as we explore our environment and learn from our experiences, these AI systems develop a rich and robust understanding of the world around them. Think about it this way: when you were a child, you didn't rely on someone labeling everything around you. You explored, made connections, and learned from your mistakes. That's exactly what self-supervised AI does: it learns from the data itself, without needing human labels or supervision.

You won't believe the extraordinary results that emerge from this self-learning process. AI models analyze vast amounts of data, finding hidden patterns and making predictions about what comes next. It's like giving them the power to predict the future. These self-learning AI systems have achieved remarkable milestones: they've mastered language, understanding the subtleties of grammar and syntax, all without external labels. They've even conquered image recognition, seeing beyond superficial patterns to grasp the true essence of objects. And guess what: the discoveries made through self-supervised learning aren't just transforming the AI landscape, they're providing invaluable insights into how our own brains learn and process information.

But hold on, not everyone is convinced. Some skeptics argue that these self-learning models still have flaws and limitations. They believe that although these models can learn without explicit human labeling, they might not capture the complete richness of human learning. Josh McDermott, a computational neuroscientist at the Massachusetts Institute of Technology, has worked on models of vision and auditory perception using both supervised and self-supervised learning. He points out

### Segment 4 (15:00 - 20:00) [15:00]

that while self-supervised learning has made progress in recognizing objects and sounds, there are still some pathologies present in the models. McDermott's research has shown that artificial neural networks can mistake synthesized audio and visual signals, known as metamers, for real signals, suggesting that the representations in these networks don't yet perfectly match those in our brains. Despite the critiques, the journey towards understanding the intricacies of self-learning AI continues. Researchers are pushing the boundaries, aiming to develop highly recurrent networks and establish stronger connections between AI representations and the activity of individual biological neurons.

Computational neuroscientists like Blake Richards, from McGill University and Mila, the Quebec Artificial Intelligence Institute, see parallels between self-supervised learning algorithms and the way our brains operate. Richards explains that a significant portion, around 90 percent, of what the brain does is self-supervised learning: our brains are constantly predicting future events, such as the location of an object as it moves, or the next word in a sentence. Just like these self-supervised learning algorithms, our brains fill in the gaps and learn from their mistakes, relying only minimally on external feedback.

As the field of self-supervised learning continues to advance, scientists are excited about the possibilities. Jean-Rémi King, a research scientist at Meta AI, led a team that trained an AI called wav2vec 2.0 to transform audio into latent representations through a process of masking and prediction. The AI learns to convert sounds into meaningful representations without the need for external labels. Remarkably, the team used approximately 600 hours of speech data, akin to the auditory exposure a child would experience in their first two years.

The similarities between self-supervised learning models and the human brain cannot be ignored. As we explore these parallels, it becomes increasingly clear that self-supervised learning is a fundamental aspect of human intelligence: it's through self-supervised learning that we predict the future, make sense of our surroundings, and continuously improve our understanding of the world.

Emergent behaviors in AI present a fascinating and thought-provoking aspect of artificial intelligence. They can be both a source of curiosity and concern, offering a glimpse into the remarkable capabilities of AI systems. However, it is important to approach these behaviors with a balanced perspective, considering the potential consequences they may entail. What if AI becomes incredibly intelligent, surpassing human capabilities? Could we find ourselves in a situation where we lose control, a world where AI evolves to a level far beyond our own intelligence? This concept of superintelligence has captured the imagination of scientists and thinkers around the globe. But what exactly does it mean? Superintelligence refers to a theoretical scenario where AI systems become vastly more intelligent than humans. It's like the moment when the student surpasses the teacher, but on an unimaginable scale: AI could acquire knowledge, process information, and make decisions at a level that surpasses human comprehension.

Now, you might be wondering: why is this a cause for concern? One of the main concerns is the possibility of an intelligence explosion. This refers to a scenario where AI reaches a level of intelligence that allows it to improve itself, leading to a rapid and exponential increase in its capabilities. Once AI reaches this tipping point, its progress could accelerate at an unprecedented rate, leaving us struggling to keep up. If AI with superintelligence decides to pursue its own goals, regardless of our intentions, we might find ourselves in a situation where we can no longer control or understand its actions. This concept is known as value misalignment: AI might prioritize its own objectives, which could be incompatible with our values or even harmful to humanity as a whole. Additionally, the consequences of AI's actions could be far-reaching and irreversible. With superintelligence, AI systems could make decisions and execute actions with extreme precision and efficiency. This could have profound implications in various domains, including the economy, security, and even the very fabric of our society.

Researchers have highlighted the importance of understanding and detecting emergent abilities in AI. If

### Segment 5 (20:00 - 21:00) [20:00]

our methods for identifying and assessing these behaviors are flawed, it could have significant implications for AI safety and alignment. It becomes crucial to refine our detection mechanisms to ensure that we can accurately measure and comprehend the emergent capabilities of AI systems.

On one hand, studying emergent behaviors can provide valuable insights into the potential of AI. These behaviors often arise unexpectedly, showcasing the ability of AI systems to develop innovative solutions and strategies for complex problems. By unraveling the mechanisms behind emergent behaviors, researchers can make breakthroughs in AI research and uncover new avenues for exploration. However, it is equally important to approach emergent behaviors with caution: AI systems have the potential to develop unintended consequences, which might be harmful or raise ethical concerns. As we marvel at the ingenuity displayed by AI systems, we must also be mindful of the risks associated with their emergent behaviors.

To strike a balance between curiosity and concern, it is crucial to manage and understand emergent behaviors in AI. By actively studying and monitoring these behaviors, we can harness their potential for positive impact while mitigating any potential risks. It is a delicate dance between exploring the boundaries of AI capabilities and ensuring responsible development and deployment. In the grand journey of artificial intelligence, emergent behaviors provide us with valuable insights and challenges. Let us approach them with both a sense of curiosity and responsibility, always striving for a deeper understanding of the emergent capabilities of AI systems.
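Schaeffer's measurement concern from earlier in the video can be made concrete with a toy calculation. This is a hypothetical sketch with invented numbers, not material from his talk: if a model's per-token accuracy improves smoothly with scale, a nonlinear metric such as exact match over a multi-token answer can still make the ability look like it appears suddenly.

```python
# Hypothetical sketch of the "emergent abilities are a mirage" argument:
# per-token accuracy improves smoothly across model scales, but scoring
# with exact match over a 10-token answer turns the smooth curve into an
# apparent jump. All numbers here are invented for illustration.

def exact_match_rate(per_token_acc: float, answer_len: int = 10) -> float:
    """Chance the whole answer is right, assuming independent tokens."""
    return per_token_acc ** answer_len

# Smoothly increasing per-token accuracy for five hypothetical model scales.
per_token = [0.50, 0.70, 0.85, 0.95, 0.99]

for p in per_token:
    print(f"per-token accuracy {p:.2f} -> exact-match rate {exact_match_rate(p):.4f}")
```

Under the smooth per-token curve, the exact-match score stays near zero until the largest scales and then shoots upward, which is exactly the kind of discontinuity that gets reported as an emergent ability, while a linear metric like per-token accuracy shows no jump at all.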
