# OpenAI SHOCKS Everyone "GODLIKE Powers" and MAGIC Abilities In New AI Prediction

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=r56S2A5QwsM
- **Date:** 25.02.2024
- **Duration:** 27:31
- **Views:** 110,652

## Description

✉️ Join Our Weekly Newsletter - https://mailchi.mp/6cff54ad7e2e/theaigrid
🐤 Follow us on Twitter https://twitter.com/TheAiGrid
🌐 Checkout Our website - https://theaigrid.com/

Links From Today's Video:
https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fkqtkw8xus9kc1.jpeg

Welcome to our channel where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos.

Was there anything we missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=r56S2A5QwsM) Segment 1 (00:00 - 05:00)

Okay, so there was another statement by an OpenAI employee that genuinely did shock me, because it specifically altered my timelines for when we're probably going to get AGI, and for a technology we didn't even consider real a couple of years ago: artificial superintelligence. This OpenAI researcher released what is essentially a list of predictions, and the things he's predicting are quite intense, so we're going to go through the list, because there's a lot to talk about and a lot you need to be aware of. A lot of people who saw this list thought about a few of the items but didn't think about how the industry is going to evolve as a whole. Let me tell you why this was genuinely shocking: there was one item that I saw and thought, okay, that is a super big deal, and it completely changes my timeline.

One of the things he stated is that AGI will probably be coming soon, as in any year now. That unfortunately doesn't surprise me. If you've been paying attention to this space, you'll know we've had many different instances and inklings of AGI, and many of us feel like we're on the edge of our seats because AGI seems just around the corner. That's for a variety of reasons: OpenAI has been in a really strange position, delaying releases on multiple products; there was the Sam Altman firing; and other companies are also having major breakthroughs and working on AGI. So literally any year now we could be getting AGI, not only because of where these companies are, but because of the significant investment they're getting too.

He also responded to someone's question about his AGI percentages: why do you have a 15% chance for 2024 and only an additional 15% for 2025? Now, I do think we get AGI by the end of 2025, or at least that some lab makes an insane breakthrough and we have AGI by the end of 2025; that's just what I believe. The questioner asked, do you really think there's a 15% chance of AGI this year? He said: yes, I really do. I'm afraid I can't talk about all of the reasons for this, since I work at OpenAI, but mostly it should be figure-out-able from the publicly available information, which we've discussed several times on this channel. My timelines were already fairly short (2029 was my median, i.e. the most common estimate) when I joined OpenAI in early 2022, and things have gone mostly as I expected; I've learned a bunch of stuff, some of which updated me upwards and some downwards. As for the 15/10/16% thing, I don't feel confident those are the right numbers; rather, those numbers express my current state of uncertainty. I could see the case for making the 2024 number higher than the 2025 one, because of exponential-distribution vibes: if it doesn't happen now, that's evidence it won't happen next year. I could also see the case for making the 2025 number higher, because projects take twice as long as one expects due to the planning fallacy. So essentially he's saying roughly 15% this year and roughly a 30% cumulative chance by the end of next year, while admitting he could be completely wrong.

With AGI predictions it's of course anyone's guess, but this chart from ARK Invest is a good visual to look at for how AGI forecasts are progressing over the next couple of years. Something I want to say about it is the exponential nature of things, which they do take into account: AGI was long predicted for around 2029 or 2030, which is where many people have placed it. (I'll get to the point that really shocked me in a moment.) You can see that as time goes on the forecast keeps dropping, and at this rate it points to something like 2027; there are these huge drop-offs where the expected date suddenly falls. It's like S-curve growth: you go up, plateau for a bit, then get the next boom when the next breakthrough lands, so the forecast looks almost like an inverted S-curve on the graph. I showed this in a previous video, but I wanted to show it again so you can visualize where things are going.

So the median, the most common prediction, is 2029, and some people are predicting next year. I definitely believe that if anyone gets there, it will be OpenAI, because of the kind of talent and researchers they have; it's a unique team, and OpenAI researchers are some of the most sought-after talent. It's so competitive that, reportedly, someone recently hired away from Google was then offered four times more pay, just because AI researchers are so in demand right now. And there's one tweet I'm going to come back to that I think you need to understand: if someone develops AGI, it's going to be a winner-takes-all scenario, because it's only a race until one company reaches AGI. Once that happens, the distance to their competitors is potentially infinite, and it will be noticeable, for example raising a quarter-trillion of US GDP. Now of course some people are stating that this is

### [5:00](https://www.youtube.com/watch?v=r56S2A5QwsM&t=300s) Segment 2 (05:00 - 10:00)

where OpenAI has already achieved AGI and is just trying to raise compute, because they realized we're going to need a lot more compute for this kind of system. Others may disagree, but I kind of do agree that they potentially have internal AGI and just need more compute to actually bring the system to reality, because certain things they simply can't test: they're trying to run GPT-4, they're also running other systems like Sora, and they're also giving some of their compute to superalignment.

Now this is the statement that really did shock me, and it's why my timelines got updated, because it changes everything. He says: probably whoever controls AGI will be able to use it to get to artificial superintelligence shortly thereafter, maybe in another year, give or take a year. You have to understand that AGI is essentially a system that is about as good as humans at pretty much any task that can be done in a non-physical realm; according to DeepMind's "Levels of AGI" paper, it would be better than 99% of humans, and right now GPT-4 sits at a low level of AGI on that scale. So when we do get that AGI system, it's going to accelerate everything, because it means we can duplicate researchers, duplicate mathematicians, duplicate people doing a whole bunch of work. And artificial superintelligence is completely next level: an intelligence so smart it will make consistent breakthroughs and fundamentally change our understanding of everything we know. That is of course a problem, because of alignment issues and so on, but the bigger problem is that artificial superintelligence is something people didn't really even talk about, because it seemed so far away. Yet they're stating that whoever controls AGI will be able to use it to get to ASI shortly after; if it's a true AGI, a really good one, getting to ASI won't take that long. That's a fair statement, and something I hadn't thought about much.

It's crazy, because OpenAI have openly stated that superintelligence will be the most impactful technology humanity has ever invented and could help us solve many of the world's most important problems, but that the vast power of superintelligence could also be very dangerous and could lead to the disempowerment of humanity or even human extinction. They also state: while superintelligence seems far off now, we believe it could arrive this decade. That's why this is shocking: some people think AGI arrives by 2029, but OpenAI are saying not just AGI by 2029; superintelligence could be here by the end of this decade. If we look at the actual data, we could realistically get AGI by 2026 and then ASI by 2029, given the nature of exponential growth, and OpenAI stated those timelines themselves. That's why they're actually working on alignment now: they know it could be very soon.

In addition, if you want to talk predictions, you have to mention Ray Kurzweil. He's a futurist who has made a lot of predictions: 147, with a claimed 86% hit rate. Some people have debated whether the ratio is really as high as he claims, but I'd say a decent share of his predictions have come true. His prediction on AGI is that artificial intelligence will achieve human levels by 2029, which would still be pretty crazy even if it happens in 2029, because I remember Elon Musk saying 2027; everyone's timelines are getting shorter by the day. And we do know, looking at what's going on right now, that AGI within two years generally wouldn't surprise people, especially after what we saw with Sora.

Another thing Kurzweil stated was actually quite shocking, and this is why I don't think people understand the kind of world we could be living in if we actually do get AGI and then ASI: he's saying there's a possibility we might achieve immortality by the year 2030. Partly that's because longevity research is doing well, but if we have artificial superintelligence, it will enable breakthroughs that completely change everything. That's why this is so shocking: I hadn't realized the AGI-to-ASI step could take only a year. Now, maybe people aren't accounting for things like compute constraints, laws that might try to regulate this into the ground, financial crashes, or other things that could slow it down. But provided everything goes smoothly, with no black-swan event, no bubonic-plague-style shutdown of the world, and no delays to AGI research, ASI by the end of the decade is a pretty scary thing to think about. That is why I said this genuinely shocked me. And one of the craziest things, as I said I'd come back to: how on Earth do other

### [10:00](https://www.youtube.com/watch?v=r56S2A5QwsM&t=600s) Segment 3 (10:00 - 15:00)

companies catch up? Think about it: say you're OpenAI, you're working on artificial general intelligence, and one day you wake up and your researchers, your whole team, say: look, we've done it, we've achieved AGI, we've benchmarked it on everything, 99% on this and that. How on Earth do other companies catch up? The moment you get AGI, you can use it to get toward ASI, and your company effectively scales 10x or even 100x overnight, because all you need to do is set the AGI to work on certain things. Compared to hiring another person you'd have to pay a million a year, it's going to be relatively cheap: you could have these super-powerful automated researchers doing tons of alignment research and ASI research, and your company could effectively add another 100 employees every day, as long as you keep scaling compute. How do other AI companies catch up to a company that has basically achieved escape velocity? I genuinely don't think they will, unless the AGI tech somehow leaks and gets widely distributed. And when I say AGI tech, I mean the research papers and the research behind it, not OpenAI giving you restricted access like they do with GPT-4, because the version we get is a very nerfed-down model compared to the raw capabilities the models offer.

This is from Anthropic's pitch deck when they wanted to raise money in 2023. They basically said: we believe that companies that train the best 2025/26 models will be too far ahead for anyone to catch up in subsequent cycles. If you don't know who Anthropic are, they're a big AI company competing with OpenAI; some OpenAI researchers left to create Anthropic because they wanted to focus on safety. The point is, I don't think anyone catches up, and it makes sense: a company with AGI has arguably the best technology of the last 20 years, and with it they can grow exponentially. They'll be so far ahead that it will be pretty crazy to see what happens. I think this is why people are stating that OpenAI have already achieved AGI and are currently using it to develop things like Sora, and if that's true it kind of makes sense, because Sora definitely blew me away. Even as someone who looks at AI all the time, when I saw it I thought, okay, I didn't think we were that close.

And it goes on. Here it says "godlike powers": probably whoever controls ASI (listen, this is the craziest bit; I was reading this thinking, is this even real, am I even living in reality right now?) will have access to a spread of powerful skills and abilities and will be able to build and wield technologies that seem like magic to us, just as modern tech would seem like magic to medievals. This will probably give them godlike powers over whoever doesn't control ASI. Which raises an important question: say OpenAI have ASI and have it aligned. Do you think they're going to distribute ASI, or are they just going to patent all the resulting technologies as a kind of subsidiary of OpenAI? If they have ASI and nobody else does, it's going to be the most valuable thing on the planet, and if they're able to distribute cures and new technology, that's going to make the company super valuable. Like it states, it would probably give them godlike powers over anyone who doesn't control ASI, because that level of intelligence is unfathomable; it's very hard to conceptualize how smart it is. According to several reports and write-ups, it's like trying to explain economics to a bee: first you'd have to teach it English, then so many other concepts, and then abstract context on top of that. It's very hard to conceptualize how you'd even begin.

So whilst godlike powers sound good, and this is why all these companies are racing to achieve AGI (they know that once it's there, you gain an instant 100x speed boost in this race), the problem is the black-box problem, and a lot of people are starting to forget about it as we edge closer and closer to the edge of the huge cliff we could be on. As the post states: in general, there's a lot we don't understand about modern deep learning; modern AIs are trained, not built/programmed; we can't verify, for example, that they are generally robustly helpful and honest instead of just biding their time; we can't check. The problem here is that we don't know how these AI models work; we don't know what's inside them or how everything fits together. It's not like writing code, where you understand exactly how the code works. And in the future it's going to be a bigger problem, because if we're effectively growing an AI, which is what some

### [15:00](https://www.youtube.com/watch?v=r56S2A5QwsM&t=900s) Segment 4 (15:00 - 20:00)

researchers have claimed would be a more accurate description, how on Earth are we going to understand these even more capable, superintelligent systems if we don't really understand the ones we have now? It's hard to put into words, but it is a giant problem that people are trying to solve.

Then there's the alignment problem. Currently no one knows how to control artificial superintelligence, which is true, and they are working on it; this is what OpenAI is currently working on. The post says: if one of our training runs turns out to work way better than we expect, we'd have a rogue artificial superintelligence on our hands, and hopefully it would have internalized enough human ethics that things would be okay. That's a crazy statement, I don't care what you say. He's basically saying that if a training run works out better than expected, we'd have a rogue ASI on our hands, because we don't know how to align it. (I don't think the next model will be artificially superintelligent; I think you need a ton more compute, just like with previous scale-ups.) "Hopefully" means they're basically just hoping it isn't catastrophic, and it's quite scary that hope is what's mentioned here. He adds that there are some reasons to be hopeful and some reasons to be pessimistic, and that the literature on the topic is small and pre-paradigmatic, which is of course true.

Then there's a great clip of Sam Altman you should watch, where he talks about the alignment problem: "We're going to make this incredibly powerful system, and it'll be really bad if it doesn't do what we want, or if it has goals that are either in conflict with ours (there are many sci-fi movies about what happens there) or goals where it just doesn't care about us that much. The alignment problem is: how do we build an AGI that does what is in the best interest of humanity? How do we make sure humanity gets to determine the future of humanity? How do we avoid accidental misuse, where something goes wrong that we didn't intend; intentional misuse, where a bad person uses an AGI for great harm, even if that's what a person wants; and the inner-alignment problems, where what if this thing just becomes a creature that views us as a threat? The way I think self-improving systems help us is not necessarily by the nature of self-improvement, but we have some ideas about how to solve the alignment problem at small scale, and we've been able to align OpenAI's biggest models better than we thought we would at this point, so that's good. We have some ideas about what to do next, but we cannot honestly look anyone in the eye and say we see out 100 years how we're going to solve this problem. But once the AI is good enough that we can ask it, hey, can you help us do alignment research, I think that's going to be a new tool in the toolbox."

So in that clip Sam Altman actually talks about using AI itself, an internal AGI or a narrow AI that really understands how to align these systems, as an alignment tool, and he also mentions that an AI could eventually evolve into some kind of creature that does its own thing. That's pretty scary coming from the CEO of a major company building some of the most impactful technology we will have in our lifetimes.

And then there's the "best plan". As the post puts it: our current best plan, championed by the people winning the race to AGI, is to use each generation of AI systems to figure out how to align and control the next generation; this plan might work, but skepticism is warranted on many levels. OpenAI did actually describe their approach, and it's worth looking at. Their goal is to build a roughly human-level automated alignment researcher, then use vast amounts of compute to scale the effort and iteratively align superintelligence, that crazy-smart level of AI whose goals could be beyond our understanding. To align the first automated alignment researcher, they say they'll need to develop a scalable training method, validate the resulting model, and stress-test the entire alignment pipeline. Part of that is adversarial testing: deliberately probing the pipeline in a sandboxed setting to see what goes wrong and learning to detect how things would fail. I'm guessing this is one of their main approaches, and they've shown early evidence it can work, in a paper called "Weak-to-Strong Generalization: Eliciting Strong Capabilities with Weak Supervision."

Let me show you that page. They talk about the superintelligence problem, and it's actually pretty recent: December 14th, 2023, so around two or three months ago. They wrote: we believe superintelligence, AI vastly smarter than humans, could be developed within the next ten years; however, we don't know how to reliably steer and control superhuman AI systems, so solving this problem is essential for ensuring that even the most advanced AI systems are beneficial to humanity. Zooming in: we formed the Superalignment team earlier this year to solve this problem, and today we're releasing the team's first paper, which introduces a new research direction for empirically aligning superhuman models. Basically, they state that future AI systems will be capable of extremely complex and creative behaviors

### [20:00](https://www.youtube.com/watch?v=r56S2A5QwsM&t=1200s) Segment 5 (20:00 - 25:00)

that will make it hard for humans to reliably oversee and understand them. For example, superhuman models may be able to write millions of lines of novel, potentially dangerous computer code that would be very hard for even expert humans to check. So they built a setup around an analogy: to make progress on this core challenge, we propose an analogy we can empirically study today; can we use a smaller, less capable model to supervise a larger, more capable model? In the diagram you can see traditional machine learning, where the human supervisor oversees a student model that is less capable than them; then superalignment, where the human researcher is trying to supervise a student that is far smarter, the robot above human level in the figure. How on Earth is that supposed to work? Their idea is: if we can get a smaller AI system to supervise a larger AI system that's still beneath human level, hopefully the lessons scale, and when we get to actual superalignment the approach carries over. Concretely, they report: when we supervise GPT-4 with a GPT-2-level model using this method on NLP tasks, the resulting model typically performs somewhere between GPT-3 and GPT-3.5, and we were able to recover much of GPT-4's capabilities with only much weaker supervision. They note the method is a proof of concept with important limitations; for example, it still doesn't work on ChatGPT preference data. However, they also found signs of life with other approaches, such as optimal early stopping and bootstrapping from small to intermediate to large models. So this is just their first paper on how they might even begin to solve this, but I do think it's important.

Now, of course, this is the problem. The post says: for one thing, there is an ongoing race to AGI, with multiple megacorporations participating, and only a small fraction of their compute and labor is going toward alignment and control research; one worry is that they aren't taking this seriously enough. On the slide just before (I'm not sure exactly where on that page), OpenAI said that 20% of their overall compute is going to safety research, which does make sense, because the elephant in the room is that if these superintelligent systems don't work out, we die. How on Earth do we all die? Think of it like this: ants walk around and do their thing; imagine an ant had created a human, and then humans started building highways. We destroy ant colonies because we need to clear their environment to place down a highway or homes; we see ants as a minor inconvenience, and ants die in the process. Some people speculate it could be the same with artificial intelligence and us. We have no idea whether that's true, because the only way to find out is to do it, and if we do it and we die, we'll never really know, because we're all dead.

As horrible as that is, the point I'm making is that all these companies are now placing their chips on AGI, because they've realized this is the next technology, and whoever holds this key is going to control a lot of the world's resources. If you have an intelligent ASI system and you ask it how to become the most valuable company in the world, it's going to get it right; if it's smarter than us, it's going to get it right. However long that takes, it's going to be interesting.

So Meta is going all in. Here's Mark Zuckerberg stating that his company is going all in on AI: "Hey everyone. Today I'm bringing Meta's two AI research efforts closer together to support our long-term goals of building general intelligence, open-sourcing it responsibly, and making it available and useful to everyone in all of our daily lives. It's become clearer that the next generation of services requires building full general intelligence: building the best AI assistants, AIs for creators, AIs for businesses, and more. That needs advances in every area of AI, from reasoning to planning to coding to memory and other cognitive abilities. This technology is so important, and the opportunities are so great, that we should open-source it and make it as widely available as we responsibly can, so that everyone can benefit. We are building an absolutely massive amount of infrastructure to support this: by the end of this year we're going to have around 350,000 NVIDIA H100s, or around 600,000 H100-equivalents of compute if you include other GPUs. We're currently training Llama 3, and we've got an exciting roadmap of future models that we're going to keep training responsibly." That shows you that all of these companies are truly pouring billions of dollars into this.

And the crazy thing is that they're making breakthroughs; it's not like they're doing this just for fun. You can see that recently a private company called Magic (not Meta, by the way) reported a technical breakthrough that could enable active reasoning capabilities similar to OpenAI's rumored Q* model, which was apparently a crazy breakthrough. This is why I say timelines are getting shorter and shorter; we have people stating crazy things. And this brings us back to the Moloch problem: if AGI is coming any year now, and timelines keep getting shorter, because whoever controls AGI is going to

### [25:00](https://www.youtube.com/watch?v=r56S2A5QwsM&t=1500s) Segment 6 (25:00 - 27:00)

be able to get to ASI shortly thereafter, then we have the problem of safety research being an issue. Some people even left OpenAI over this: Dario Amodei left OpenAI to start Anthropic because he wanted to focus on safety, and Anthropic even recently published a paper on sleeper agents. I might include a clip from the video where I talked about that, why it was really bad, and why everyone missed the mark on it; some people dismissed it as dumb. But we do have a problem on our hands, because the timelines seem to be getting shorter every day, whether it's an OpenAI employee, or a company making a private breakthrough that enables active reasoning. I think it's not smart to underestimate the claim that AGI will be used to get to ASI shortly thereafter.

And this statement, that whoever controls ASI will have access to powerful skills and abilities that will seem like magic to us, just as modern tech would seem like magic to medievals, isn't to be underestimated either. Think about it like this; this is why superintelligence is so crazy. Go back to medieval times, when they had castles, and ask them: how would you defeat an army from the future? They'd say, we'd get our cannonballs, our bows and arrows, and we'd defeat them. But they wouldn't, because we'd have tanks and planes and an advanced level of technology that would simply destroy anything they ever had. That's the problem with artificial superintelligence: you're trying to reason about something that is very hard to conceptualize. All of the current tech we have already works that way: if you took an iPhone back 100 years, it would seem like magic; if you saw a drone, it would look like magic. And that's just 100 years, without artificial superintelligence, so you can imagine how crazy things could look.

I genuinely can't even begin to imagine what the future will look like. Will we all be immortal? How will the timeline play out? I think one of two things happens: it comes either faster than we think or later than we think; I don't think it comes on time, because there are always factors people aren't thinking about. And who knows, maybe we hit a wall; maybe AGI doesn't come for a while, because we figure out there's some barrier we can't get past that requires more years of breakthroughs, and we stagnate at GPT-4 level for a bit. It will be interesting to see where we go and how these timelines evolve, because things are moving rapidly. If you did enjoy this, it's important to subscribe to the channel, because every day I release a video on the most important and most pressing AI news that you need to be aware of.
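As an editorial aside for readers who want to see the shape of the weak-to-strong supervision experiment discussed above in code: the following is a minimal toy sketch, not OpenAI's actual method, models, or data. Here the "weak supervisor" is a hand-coded rule that only looks at one feature, the "strong student" is a plain logistic regression trained on the weak labels, and the task is synthetic. In the real paper the student is a large pretrained model, and the interesting result is that it recovers more capability than its weak supervisor provides; a linear toy student can only copy the weak rule, so this sketch shows the training setup, not the generalization result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled pool of 2-D points; the "true" concept is x0 + x1 > 0.
X = rng.normal(size=(2000, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(float)

# Weak supervisor: a deliberately crude rule that ignores x1 entirely,
# so its labels are only ~75% accurate on the true concept.
y_weak = (X[:, 0] > 0).astype(float)

# "Strong" student: logistic regression fit to the weak labels
# by plain gradient descent on the logistic loss.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    grad_w = X.T @ (p - y_weak) / len(X)      # gradient w.r.t. weights
    grad_b = float(np.mean(p - y_weak))       # gradient w.r.t. bias
    w -= lr * grad_w
    b -= lr * grad_b

pred = ((X @ w + b) > 0).astype(float)
acc_vs_weak = float(np.mean(pred == y_weak))  # how well it fits its supervisor
acc_vs_true = float(np.mean(pred == y_true))  # how well it does on the real task
print(f"agreement with weak labels: {acc_vs_weak:.2f}")
print(f"accuracy on true concept:   {acc_vs_true:.2f}")
```

The student ends up agreeing almost perfectly with its weak supervisor while remaining far from perfect on the true concept, which is exactly the failure mode the paper's "weak-to-strong generalization" question is about: can a strong student do better than the labels it was trained on?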

---
*Source: https://ekstraktznaniy.ru/video/14506*