# 6 Minutes Ago: Godfather Of AI Shared Terrifying Message About Artificial Intelligence

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=Wf-s9C9uf7U
- **Date:** 26.07.2023
- **Duration:** 19:13
- **Views:** 34,905
- **Source:** https://ekstraktznaniy.ru/video/14764

## Description

Welcome to our channel where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos.


## Transcript

### Segment 1 (00:00 - 05:00) [0:00]

"They may well develop the goal of taking control, and if they do that, we're in trouble." These are the chilling words recently spoken by Geoffrey Hinton, the "Godfather of AI," during a thought-provoking interview. In his quest for openness and honest discussion, Hinton has left Google to express his concerns more freely.

One of the key warnings raised by Hinton revolves around the critical importance of control and containment. As artificial intelligence continues to evolve, he emphasizes the urgency of finding effective ways to regulate and supervise AI systems, ensuring they do not cause harm or become uncontrollable. Hinton's concerns are not unfounded, as recent developments have shown the potential risks associated with AI. In an interview with the BBC, Hinton cites the dangers of AI chatbots, describing them as "quite scary" and highlighting their potential for exploitation by malicious actors: "Being able to produce lots of text automatically, so you can get lots of very effective spambots. It'll allow authoritarian leaders to manipulate their electorates, things like that."

Hinton also raises the alarming prospect of AI surpassing human intelligence and the existential risks associated with it. He explains that the kind of intelligence being developed in AI is fundamentally different from human intelligence, enabling AI systems like chatbots to possess vast amounts of knowledge beyond what any single person can comprehend: "In a digital computer, it's designed so you can tell it exactly what to do, and it'll do exactly what you tell it. And even when it's learning stuff, two different digital computers can do exactly the same thing with the same learned knowledge. That means you could make ten thousand copies of the same knowledge, have them all running on different computers, and whenever one copy learns something, it can communicate it very efficiently to all the other copies. So you can have ten thousand digital agents out there, a kind of hive mind, and they can share knowledge extremely efficiently by just sharing the connection strengths inside the neural nets. And we can't do that." Hinton highlights the incredible potential of digital computers when it comes to sharing knowledge: unlike humans, who need to individually learn and communicate information, digital agents can effortlessly transfer knowledge to one another by sharing their neural network connection strengths.

Human biases, often implicit and subconscious, have long plagued our society. Now these biases have found a new breeding ground within artificial intelligence systems. Hinton has highlighted how these biases can infiltrate AI algorithms, leading to unfair treatment and exacerbating societal inequalities. Over the years, the complexity of bias in AI has become increasingly apparent: it arises from various sources, including biased training data, flawed data sampling, and the incorporation of biased human decisions into AI algorithms. Real-life examples demonstrate how biased algorithms impact crucial areas of society such as criminal justice systems and hiring processes. In one case, African-American defendants were mislabeled as high risk at a disproportionately higher rate; in another, Amazon stopped using a hiring algorithm after finding it favored applicants based on words like "executed" or "captured" that were more commonly found on men's resumes. Resolving the issue of bias in AI requires more than just technical solutions; it demands collaboration among experts from various disciplines, including ethicists, social scientists, and humanities thinkers. Hinton's warnings remind us of the importance of ethical AI practices and the ongoing efforts required to mitigate biases and promote fairness.

Hinton shared his insights at the Collision technology conference. His concerns center around the potential unemployment crisis stemming from AI's ability to replace jobs, particularly those involving repetitive tasks, as he elaborated during a Q&A session with Nick Thompson, CEO of The Atlantic. When Thompson suggested that some economists argue technological change over time simply transforms the function of jobs rather than eliminating them entirely, Hinton noted that superintelligence will be a new situation that has never happened before, and that even if chatbots like ChatGPT only replace white-collar jobs that involve producing text, that would still be an unprecedented development: "I'm not sure how they can confidently predict that more jobs will be created than the
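The "hive mind" Hinton describes, identical digital models pooling what each copy learns by exchanging connection strengths, can be sketched in a few lines. The averaging scheme and the `share_knowledge` helper below are illustrative assumptions for this transcript, not anything Hinton specified:

```python
# Illustrative sketch (an assumption, not Hinton's exact mechanism):
# identical "digital agents" merge what each copy has learned by
# averaging their connection strengths (weights). Humans cannot merge
# knowledge this way; exact digital copies of one model can.

def share_knowledge(agents):
    """Average each weight across all copies and broadcast the result."""
    n = len(agents)
    merged = [sum(weights) / n for weights in zip(*agents)]
    return [merged[:] for _ in range(n)]

# Two copies of the same tiny network, each having learned slightly
# different weights (values chosen to be exact in binary floating point).
agents = [[0.25, 0.5], [0.75, 0.25]]
agents = share_knowledge(agents)
# Every copy now holds the pooled knowledge: [0.5, 0.375]
```

With thousands of copies the same broadcast step lets every agent benefit from what any one of them learned, which is the efficiency gap with human learning that the quote emphasizes.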

### Segment 2 (05:00 - 10:00) [5:00]

number of jobs lost."

Hinton continued by expressing his concerns about AI's capacity to reason: "The big language models are getting close, and I don't really understand why they can do it, but they can do little bits of reasoning." He predicted that AI will evolve over the next five years to include multimodal large models trained on more than just text, encompassing videos and other visual media. Hinton emphasized the importance of distinguishing between the creative potential of AI and the associated risks. In another interview he said that using AI to increase productivity is not always a good thing: "There'll be a huge increase in productivity for any job that involves outputting text. There are all sorts of issues here: in our society, increasing productivity is not necessarily a good thing, because it might make the rich richer and the poor poorer. But in a decent society, just increasing productivity ought to be a good thing."

Getting a balanced perspective is essential here, so how does AI contribute to this phenomenon? In the digital age, artificial intelligence plays a significant role in shaping our information landscape. Algorithms and recommendation systems employed by social media platforms and news aggregators often prioritize content that aligns with our existing beliefs. This can create what is known as an online echo chamber, where individuals are constantly exposed to information that confirms their biases. But what does this mean for society? The reinforcement of biases within these echo chambers leads to a fragmented and polarized public discourse. Instead of encouraging critical thinking and the exploration of diverse perspectives, these echo chambers limit our exposure to ideas that challenge our preconceived notions.

What's more concerning is the role of AI-generated content in the spread of misinformation, as fake news has become a major concern in today's society. How does AI-generated content contribute to the spread of misinformation? AI-powered systems can generate content at unprecedented scale and speed. While this presents opportunities for efficiency and convenience, it also raises concerns about the quality and accuracy of the information being disseminated. AI algorithms can inadvertently amplify misinformation by promoting and spreading content that is sensationalized, misleading, or outright false, creating a breeding ground for the rapid dissemination of fake news. Within online echo chambers, AI-driven algorithms contribute to the reinforcement of biases and the polarization of opinions. When individuals are predominantly exposed to information that aligns with their pre-existing beliefs, it can hinder critical thinking and the exploration of diverse perspectives. As a result, public discourse becomes fragmented, impeding the collective search for objective truth and fostering a divisive information landscape.

Hinton's concerns highlight the need to address the challenges posed by online echo chambers and fake news. We must strive for a more balanced information ecosystem that promotes critical thinking, open dialogue, and a diversity of perspectives. To combat these issues, responsible AI practices are crucial: developers and tech companies should design algorithms and recommendation systems that prioritize accuracy and expose users to a wider range of viewpoints. Additionally, media literacy plays a vital role in empowering individuals to navigate the online landscape. Collaboration between tech companies, policymakers, and civil society is essential.

One of the key concerns highlighted by Hinton is the development of AI-powered military technology. In an interview with CBC News, Hinton discussed the concept of using AI in the military: "Can we give these machines a moral code, a code of ethics, 'you can't kill people, you can't hurt people'? It would be nice if we could do that, but just remember that one of the main players in developing these machines is defense departments. I mean, Isaac Asimov said if you make a smart robot, the first rule should be 'do not harm people.' Well, I don't think that's going to be the first rule in a robot soldier produced by a defense department." "But is there not some language we can give them so that they can police themselves?"

### Segment 3 (10:00 - 15:00) [10:00]

"How does it work out when things police themselves? Is this not where we say: China, Russia, we can't stand each other, all these countries are angry, but we have a common concern?" "Exactly. For the superintelligence taking over, not for all the other things, but for that, we're all in the same boat. It's like a global nuclear war: we all lose. And so that's the situation in which warring tribes cooperate. An external enemy that's bigger than them will force them to cooperate, because they get the same payoff as each other, and this threat is like that."

The concept of battle robots emerges as a worrisome prospect in Hinton's discourse. As AI evolves and military applications become more sophisticated, the risks associated with autonomous weapon systems gain prominence. The potential for AI to make independent decisions in combat scenarios raises ethical and moral dilemmas, and the consequences of such developments must be carefully considered.

In a recent interview with NPR, Hinton expressed his apprehensions about the trajectory of AI development. He shared how testing a chatbot that understood a joke he told unsettled him, leading him to realize that AI surpassing human intelligence may be closer than previously anticipated. Acknowledging the risks associated with AI's potential advancement, over 30,000 AI researchers and academics signed a letter calling for a pause in AI research until society gains a better understanding of its implications. However, Hinton declined to sign the letter, emphasizing that research will continue regardless, and that it is crucial for policymakers to invest equal time and resources into developing regulations and safeguards. Hinton said that the research will happen in China if it doesn't happen here, "because there's so many benefits of these things, such huge increases in productivity."

Hinton's concerns extend beyond science-fiction scenarios of robot invasions; he warns of more insidious threats. The potential for AI to develop bad motives and take control raises significant red flags: "This isn't just a science fiction problem. This is a serious problem that's probably going to arrive fairly soon, and politicians need to be thinking about what to do about it now." While some might perceive these warnings as doomsday scenarios, Hinton reminds us that the urgency to act is grounded in reality, not science fiction. The time to prepare for these challenges is now, before they become a full-blown crisis. Hinton's passionate plea for Washington and policymakers to take AI's risks seriously resonates deeply.

During a conference on Wednesday, Variety asked rapper and business mogul Calvin "Snoop Dogg" Broadus Jr. to share his thoughts on AI in regard to the ongoing Writers Guild strike, and Snoop didn't hold back: "Like, an AI right now that they done made for me, this [ __ ] could talk to me. I'm like, man, this [ __ ] can hold a real conversation, like for real. It's blowing my mind, because I watched movies on this as a kid years ago. When I used to see this [ __ ], I'm like, what is going on? Then I heard the dude, the old dude that created AI, saying this is not safe, because the AIs got their own minds and these [ __ ] gonna start doing their own [ __ ]. I'm like, is we in a [ __ ] movie right now? What the [ __ ], man? So do I need to invest in the AI so I can have one with me? Like, do y'all know? [ __ ] What the [ __ ]." Hinton says he got a laugh out of the clip of Snoop Dogg talking about his AI warnings. Snoop seems to get it; Hinton hopes that Washington and policymakers will too.

Hinton, a prominent figure in the field of artificial intelligence, has raised alarm bells regarding the potential misuse of AI systems. He highlights the troubling possibility of bad actors co-opting these advanced systems for their own nefarious purposes. Hinton specifically emphasizes the risks of AI systems being trained to spread misinformation, manipulate public opinion, and even participate in warfare. In an age where misinformation can spread rapidly across social media platforms, Hinton points out that AI-powered chatbots and algorithms could become sophisticated tools for amplifying false narratives, sowing discord, and influencing elections. He draws parallels between the dangers posed by AI-driven misinformation and the previously observed spread of fake news through social media platforms. The weaponization of AI by malicious actors is another pressing concern: Hinton warns of the potential misuse of AI-powered systems to wage information warfare, manipulate public sentiment, and

### Segment 4 (15:00 - 19:00) [15:00]

disrupt democratic processes. During an interview with MIT, Hinton expressed concerns about the possibility of machines surpassing human intelligence. In the interview he warns that these chatbots could be used to spread misinformation, manipulate electorates, and create powerful spambots. To address these risks, a multifaceted approach is necessary: governments, technology companies, and researchers must collaborate to develop robust safeguards and accountability mechanisms.

When asked what role governments can play in helping ensure AI is developed in a responsible way, Hinton answered: "The US and Russia could work together on trying to prevent there being a global nuclear war, because it was so bad for everybody. And for this existential threat, it should be possible for everybody to work together to limit it, if it's possible to prevent it. I don't know whether it is, but at least we should be able to get international collaboration on that particular threat, the existential threat of AI taking over. One thing I think should be done is, wherever this stuff's being developed, particularly these big chatbots, governments should encourage the companies to put a lot of resources, as these things are getting more and more intelligent, into doing experiments to figure out how to keep them under control. So they should be looking at how these things might try to escape, and doing empirical work on that, and put a lot of resources into that, because that's the only chance we've got before they're superintelligent." Hinton added: "Since you can't stop the development, the best you can do is somehow have governments put a lot of pressure on these companies to put a lot of resources into investigating empirically how to keep them under control while they're not quite as smart as us."

In conclusion, the concerns raised by Hinton serve as a clarion call for immediate action. The potential misuse of AI for spreading misinformation and its exploitation by bad actors require a collective effort from governments, technology leaders, and society as a whole. International collaboration is vital in combating the risks associated with AI-driven misinformation and exploitation: just as nations united during the Cold War to prevent global nuclear conflict, governments should come together to establish frameworks and protocols to mitigate the adverse effects of AI.

Hinton outlined not one, not two, but six potential risks that could send shockwaves through the AI world, from bias and discrimination to unemployment, online echo chambers, fake news, and battle robots. He highlights the risk of battle robots, referring to the development of autonomous weapons systems powered by AI; the uncontrolled use of such technology raises ethical questions and the potential for devastating consequences in armed conflicts. Responsible AI development, open dialogue, and collaboration among researchers, policymakers, and industry leaders are vital to ensure the safe and beneficial use of AI technology. Finally, and perhaps most profoundly, Hinton raises concerns about existential risks to humanity: as AI continues to advance, there is a need to carefully consider its potential impacts and ensure safeguards are in place to prevent unintended harm.

But fear not: this is not fear-mongering. Hinton's warnings serve as a call to action, urging us to embark on a quest for responsible AI development. It's time for open dialogue, collaboration, and a sprinkle of common sense to ensure we ride the AI wave without wiping out. Hinton's influential voice serves as a reminder that while AI presents incredible opportunities, we must approach its development with caution and foresight. By embracing responsible practices, we can harness the immense potential of AI to improve our lives while mitigating its potential risks.
