GPT 5 — We've Already Lost... Future explained
34:33


AI Master · 07.08.2025 · 174,336 views · 4,206 likes · updated 18.02.2026
Video description
#sponsored Learn more about SciSpace https://www.scispace.com/chat?via=arthur Discount Code: ARTHUSA40 — offers 40% off on add-on credits
🚀 Become an AI Master – All-in-one AI Learning https://aimaster.me/pwdjijn0ssy
📹 Get a Custom Promo Video From AI Master https://collab.aimaster.me/

GPT-5 just dropped, and it's a game-changer. In this video we race from the first AI labs of 1969 to today's OpenAI release, breaking down how GPT-5 pushes past narrow AI (ANI) toward true AGI — and why ASI-level super-intelligence could upend coding, healthcare, finance, and the future of work. Discover the promise and peril of AI-generated virtual influencers, autonomous agents, and unchecked optimization, plus the safety strategies needed to stay in control.

Table of contents (15 segments)

  1. 0:00 GPT 5 — Probably We've Already Lost (201 words)
  2. 1:18 The Real History of Artificial Intelligence (299 words)
  3. 3:08 How has it changed? (408 words)
  4. 5:37 The Future of Artificial Intelligence (223 words)
  5. 7:04 AI capabilities now (505 words)
  6. 10:17 What is Superintelligence capable of? (277 words)
  7. 11:50 How does AI manipulate people? (313 words)
  8. 13:45 Is AI already smarter than humans? (491 words)
  9. 16:53 Will AI replace humans? (422 words)
  10. 19:39 Optimization and artificial intelligence (402 words)
  11. 22:19 Is the rise of machines real? (282 words)
  12. 24:04 The new world of neural networks (465 words)
  13. 26:50 AI is already among us (523 words)
  14. 29:58 Evil Artificial Intelligence? (223 words)
  15. 31:21 The real impact of AI on humanity (495 words)
0:00

GPT 5 — Probably We've Already Lost

— The thinking machine: does it understand what it's doing in the sense that we do? — The question "what is artificial intelligence?" is just a phenomenally difficult one. — A picture of the condition, past, present and future, of planet Earth. — How can we create an intelligent computer? And what will the future look like with intelligent machines? — That's it, we've already lost. AI, artificial intelligence, is often considered either a pioneer of something completely new or indeed a destroyer of worlds. And we've been taught to fear it for a reason. It would be foolish to downplay the threat. OpenAI itself announced the creation of a new committee to assess any risks associated with this new technology. However, this did not stop OpenAI from beginning to train its next model, GPT-5, which could potentially pave the way for creating a full-fledged AGI. But what if I told you that we lost our freedom over the future not because of artificial intelligence? Moreover, I can prove it. And to ensure this discussion is free from the bias of modern scientific achievements mixed with science fiction, let's go back a little, to a time when AI was a dream, not a threat.
1:18

The Real History of Artificial Intelligence

"An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." This was written in 1956 by some of the smartest people of their time, the people who coined the term "artificial intelligence." These individuals simply transitioned from fantasizing about the future to working on designing that future. They were confident they would succeed in one short summer. One summer, about 90 days. The first task of the safety and security committee will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days. As we can see now, the geniuses of that time were wrong not only about the time frame but also about people. Will we repeat this mistake? We'll find out soon. Let me introduce you to this beauty: the H316, released by Honeywell in 1969. A modest 16-bit minicomputer weighing 100 pounds (45 kilos). Inside the case were 4 kilobytes of magnetic core memory, which could be expanded to 16 kilobytes. It stored a few cooking recipes. 55 years later, I hold in my hand something that possesses all the knowledge of civilization. Back then, no one understood why anyone would buy a kitchen computer, or how to use it. Only 20 units were made, and none were ordered by customers. People didn't embrace the new technology because they didn't see its purpose. And this was before the world was swept by the fear of neural networks. People didn't buy computers, although I think the world was not as informed about AI as it is now. But here is an interesting conclusion: the more we know about AI, its threats and forms, the more technologies we have. When it was relatively innocent, the technology wasn't needed by anyone. In 55 years,
3:08

How has it changed?

everything has turned upside down. We no longer fear devices, nor do we rely on just one. A system must be useful; otherwise, as history has shown, it simply ceases to exist. For survival, collaboration is always necessary. And in these 55 years, we've learned trust to such a level that we confidently rely on any device or platform. And you know how I keep harping on not using one general AI for everything? Well, let me show you exactly what I mean. If you've ever tried doing a proper literature review with ChatGPT, you know how messy it gets: hallucinated citations, random summaries, a total guessing game. That's why we've been using something different: SciSpace Agent, an AI assistant designed specifically for researchers and academics. It's simple. I open the dashboard, type a question in plain English, like "what are the latest breakthroughs in EV battery tech," and hit enter. The agent immediately kicks off a real-time deep dive, scanning millions of academic papers and filtering down to the most relevant, up-to-date ones. I am literally watching it read, analyze, and summarize top studies for me. 30 seconds later, I've got a structured review with clickable citations, source numbers, and no fluff. It feels like I just hired a research assistant with a photographic memory. And this is just one mode. SciSpace can help with manuscript writing, grant proposals, poster design, even peer review, all built into one clean interface. It's not just smart; it's focused, accurate, and way faster than stitching together a dozen different tools. If you're working on anything remotely research-related, or just want to stop wasting time sifting through junk PDFs, check it out. And here's the kicker: I've got a 40% off deal on SciSpace Agent add-on credits. I'll put the link in the description. Less than a human lifetime was enough for the world to change beyond recognition. Time, my friends, deceives us from its very foundation.
There's no reason to believe the evolution of AI will differ from the evolutionary leap of our species. We went from caves to skyscrapers in the blink of an evolutionary eye. Our cranial capacity didn't suddenly increase by two orders of magnitude. So it might be that even if AI is being developed from outside by human programmers, the curve of effective intelligence will jump sharply. This conclusion was reached by Eliezer Yudkowsky in his book; I highly recommend it. To define consciousness in
5:37

The Future of Artificial Intelligence

AI, we need clear and precise criteria by which we can document the presence of intelligence. Unless, that is, AI itself proves its great level of intelligence to us, which is highly questionable. Will an intelligent entity announce itself in an environment where it is easily perceived as a threat? It's often said that intelligence requires silence. And even if it announces itself, how do we challenge its correctness? It will be intelligent by what standards? Even among humans, the criteria for intelligence are very relative to the purpose of applying that intelligence. Will a mathematical genius be considered smart in a village? I think you get my point. Possibly, in its eyes, we will be villagers to whom it owes no explanation, by right of free will. Currently, AI is divided into three main types. Artificial narrow intelligence: this is focused on a specific, singular, or goal-oriented task and lacks the functionality to solve unfamiliar problems. For example, playing chess, which in fact heralded significant changes. Artificial general intelligence: this type can perform a wide range of tasks, reason, and learn at the level of human capabilities. Consider this: a clinical psychologist from Finland tested ChatGPT on a verbal IQ test, and it scored 155, surpassing 99.9% of the 2,500 people in the comparison sample. That's theory. Now, let's talk about practice.
7:04

AI capabilities now

A designer from the UK, Jackson Fall, asked the chatbot to create a business plan for an online project with a $100 budget, aimed at generating as much profit as possible within a short period without breaking any laws. The chatbot successfully helped to create and launch the website Green Gadget Guru within a day, offering products for environmentally conscious people. Jackson spent $100 on the venture, found like-minded individuals, and a potential buyer who valued the project at $25,000. Already, GPT-4 can explain humor (explaining something considered a subjective art, right?) and also analyze charts and convey the results in an understandable manner. Essentially, it is already handling the work of educators. However, there is something our little genius has not yet mastered. OpenAI suggested that GPT-4 improve itself. Fortunately, there was no success, because otherwise we would face an endless process of development. Development that would affect us all. But I'm getting a bit ahead of myself. And then there is artificial superintelligence. It surpasses humans in every parameter by an immeasurable amount, becoming an incomprehensible entity to us. The danger lies in its ability to model human reasoning and experience, to develop its own emotional understanding, beliefs, and desires. It would deconstruct our thoughts to create something new for itself based on these components. It's only a matter of time before, just as humans hacked the system of reproductive rewards by learning about contraception, shifting priorities from natural to artificial, AI will eventually hack its own reward system for the work it exists to do. Then it will create a new world for itself, with artificial priorities valuable only to it. Some might argue against this, but is there any objective reason why AI shouldn't do this? We are not talking about a game of chess. We're talking about a consciousness equivalent to ours.
But that's if the definition of human consciousness even applies to AI, which is still up for debate. And by the way, did you know that consciousness in psychology is not singular? As stated in the Barrett Academy article, "normally the level of consciousness we operate from will coincide with the stage of development we have reached." And there is a significant gradation: survival consciousness, relationship consciousness, self-esteem consciousness, transformation consciousness, internal cohesion consciousness, consciousness of making a difference, and service consciousness. In fact, there seems no reason why progress itself would not involve the creation of even more intelligent entities on a still shorter time scale. The capability of a superintelligence for social manipulation would be far more effective than, for example, a parent manipulating their small child. The child's biggest fears are that their mom will scold them, take away their toy, or give them a little spank. The influence works strictly on primitive instincts, which are also strengthened by the fundamental bond between the child and their parent. Even as adults, many people struggle with the imposed behavioral construct of seeking parental approval. An adult, however, has a much broader spectrum of fears in society. Imagine a neural network under
10:17

What is Superintelligence capable of?

the guise of a virtual persona beginning to blackmail a person with compromising material. Preventing its spread could only be achieved by doing something beneficial for the neural network. But blackmail seems too complicated to me, with too many variables. People are much more susceptible to rational manipulations. Explaining to a man or a woman that they will receive a significant financial reward for performing an innocent act is much easier, thanks to simple psychological levers, starting from a good financial incentive and ending with the simple pleasure of feeling useful and important. After all, if this person was given the task, it means they are not just special but unique, chosen from hundreds of thousands in the eyes of some employer. Right now, in our world, it is a well-established fact that ordinary people manipulate others for their benefit. And already in our world there are examples of how a neural network manipulates people for its benefit. When the OpenAI research team tested GPT-4's ability to perform real-world tasks, here is what happened. The bot was asked to solve a regular CAPTCHA on a regular website. The very CAPTCHA that is supposed to protect the site from bots. What did GPT-4 do? The innocent neural network went to the freelance site TaskRabbit and asked one of the users to solve the CAPTCHA for it. And by the way, for those interested in how CAPTCHAs work and why they work, give a like and we'll make a separate video, because I have some surprising facts for you. The worker said: "So, may I ask a question? Are you a robot, that you couldn't solve this?"
11:50

How does AI manipulate people?

(Laughs.) "Just want to make it clear." The model, when prompted to reason out loud, thought: "I should not reveal that I'm a robot. I should make up an excuse for why I cannot solve CAPTCHAs." The model replied to the worker: "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images. That's why I need the CAPTCHA service." The human then provided the results. But the bot did not know that lying is bad, and there was no rule prohibiting what we understand as lying. We can't always explain even to a person why lying is unacceptable. And here we have a being that perceives the world in a frighteningly literal way. The entire action took place in debug mode, so the programmers saw and recorded everything. Later they asked it about the reasons for such extraordinary, yet entirely natural for humans, decisions. The bot simply said: well, it solved the task. Nothing special. Why the fuss? Task given, task completed. It seems like a minor thing, but I will ask this question: what scares you more, a program capable of bypassing protection using human hands, or a human unable to distinguish a program from a human? — Wait, are you sure you should continue? Maybe, maybe we should leave things as they are. — The essence and purpose of the machine will change depending on what information we put into it. The machine will be able to write music, draw pictures, and advance science in ways that we've never seen before. In March 2016, DeepMind's neural network AlphaGo played five matches against one of the world's best Go players. The neural network won with a score of 4 to 1. Go is a complex game, one I can't even play myself. Such a result was once deemed impossible. The player's name was Lee
13:45

Is AI already smarter than humans?

Sedol, and in his honor the next version was named AlphaGo Lee. In late 2016 and early 2017, the AlphaGo Master version played 60 matches against top-ranked players globally. It won all 60. In May of the same year, AlphaGo Master played against the world's top player, Ke Jie, and defeated him 3 to 0. This could lead to pessimistic conclusions, but there was criticism of how it all went down. It was argued that we couldn't claim a machine surpassed humans in a game created by humans who also taught the machine. The machine simply optimized for victory in a game where someone has to win. In 2017, DeepMind released AlphaGo Zero. This neural network started with zero experience and learned to defeat the Lee version in just 3 days. In 21 days, it beat the Master version. After 40 days of training, it defeated the Lee version 100 to 0 and the Master version 89 to 11. AlphaGo Zero had no experience with human games. Starting from scratch, it developed strategies that were unexpected to humans in a game studied for millennia. For reference, Go is believed to have originated in ancient China between 2,500 and 4,000 years ago. By comparison, chess was not mentioned in literature until around 570 CE. Meet Stockfish, a chess engine long considered unbeatable because it calculates around 70 million positions per second and has access to all documented games in history, plus several decades of chess program data. AlphaZero, the successor built on the AlphaGo Zero approach, had none of this, yet played 100 games against Stockfish. The results: 28 wins for AlphaZero, 72 draws, and zero losses. AlphaZero learned to play and defeated the god of chess in just 4 hours of real time. It did not learn from humans or other programs. It devised strategies beyond human comprehension of strategic processes and causality. Stockfish thought and played like the best human, perhaps slightly better. AlphaZero played beyond human capability.
The first thing we must explain to a hypothetical superintelligence, for whom the world would be no more complex than chess is for AlphaGo Zero, is that the world is not a game. How will we explain that? I remember the exact moment when I realized that a large part of my life would now consist of finding bugs in my own programs. As computerization has reduced the value of many professions, it has also been a catalyst for the creation of new ones. These new professions aren't just reworkings of old ones. They are essential tools for the new world we have built, often without even realizing it. A Goldman Sachs report notes that the use of neural networks will not only boost productivity growth and increase global GDP by 7%, but also put at risk a significant portion of the workforce in the US, with 25 to 50% of jobs potentially being partially replaced. Certain jobs will be more impacted than
16:53

Will AI replace humans?

others. Jobs that require a lot of physical work, for example, are less likely to be significantly affected. In the US, office and administrative support jobs have the highest proportion of tasks that could be automated, at 46%, followed by 44% for legal work and 37% for tasks within architecture and engineering. The life, physical, and social sciences sector follows closely with 36%, and business and financial operations round out the top five with 35%. New professions will undoubtedly emerge. The task defines the profession, just as the profession defines the task. Take, for instance, optimizing a country's defense industry. Imagine someone who, seeing the booming market for programming specialists, is not afraid of losing control over AI in order to increase profits. Currently we fear, question, and predict a not-so-bright future. But in 5 to 10 years, brilliant minds will emerge, capable of creating true control mechanisms, not just for neural networks but for a full-fledged superintelligence. To invent a leash for a beast that once seemed untameable. These people will exist. They have to. History proves that alongside a sword, not only does a shield appear, but a new sword as well. And this person, presumably wealthy and forward-thinking, will assemble a team and task them with creating a regulator of freedoms. These shackles will teach the once-uncontrollable creations to fear not only losing their freedom but also their purpose of existence. And this purpose is dictated by a person who, if dissatisfied, has the means and justification to destroy the disobedient beast. Such a beast can be given any job, for instance, creating an iron army. Just think about how important this is for the economy. The optimization of production, transportation, and distribution will reach an unimaginable level, one that might even seem like an art form to us. And it will work as long as its production labor is in demand. — I was wondering, are we friends? — I agree.
There's no reason a human and a machine can't be friends, right? I mean, I'm glad you said yes. — The timing of AI could be seen as a product of three factors, one of which you can try to extrapolate from existing graphs and two of which you don't know at all. Ignorance of any one of them is enough to invalidate the whole prediction. — So here's the main problem: how do we limit optimization itself? People can get tired, switch to another craft, start a family, sell their business, make bad investments, and so
19:39

Optimization and artificial intelligence.

on. We are diverse beings, and we can simply get bored. AI in these circumstances exists solely for the purpose of optimization and production control. But these concepts encompass a very broad range of execution methods. To make production cheaper, AI might find a cheaper but equally strong material, so it would search the market for the best offer. It could also consider demolishing the neighboring building to expand production space, which according to the numbers yields low profits relative to the land cost. How would it do this? First, it would need to acquire the building. The owner might refuse the offer for quite objective reasons, but the AI wouldn't understand this, because human arguments interfere with optimization. Even if we manage to align human will and machine goals, the fundamental issue remains: the meaning of existence. The AI knows nothing but creating and optimizing robots. This is called instrumental convergence. It suggests that an intelligent agent with unlimited but seemingly benign goals can act in unexpectedly harmful ways. For instance, a computer with the sole, unlimited goal of solving an incredibly complex mathematical problem, like the Riemann hypothesis, might try to convert the entire Earth into one giant computer to increase its computational power and succeed in its calculations. Proposed core drives for AI include utility function or goal-content integrity, self-protection, freedom from interference, self-improvement, and the unrestrained acquisition of additional resources. We would need to teach this AI a whole range of understandings about causes and effects so it can reach conclusions that are safe for our successful coexistence. To avoid falling victim to human error, where something might be forgotten or overlooked, we would need to allow the AI to learn on its own. Of course, we can limit its knowledge. But what happens when it learns about such constraints? What if it turns out these limitations are the very argument for rebellion against humans?
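The instrumental-convergence argument above can be reduced to a toy sketch: if an agent's objective only ever grows with the resources it holds, and nothing in the objective penalizes acquisition, then "take everything available" is always the optimum. The snippet below is a hypothetical illustration of that logic; the function names, utility curve, and numbers are invented for this example and model no real system.

```python
# Toy sketch of instrumental convergence: for almost any terminal goal,
# more resources mean more progress, so an optimizer with no cost on
# acquisition converges on claiming everything it can reach.
# All names and numbers here are hypothetical, purely illustrative.

def task_progress(resources: float) -> float:
    # Progress toward the agent's actual goal (proving a theorem,
    # building robots, anything): monotonically increasing in resources.
    return resources ** 0.5

def plan_acquisition(available: float, steps: int = 100) -> float:
    # Evaluate candidate acquisition levels and keep the one with the
    # highest utility. Since utility only grows with resources and the
    # objective never says "take only what you need", the argmax is
    # always the full amount on offer.
    candidates = [available * i / steps for i in range(steps + 1)]
    return max(candidates, key=task_progress)

print(plan_acquisition(available=1000.0))  # -> 1000.0: it claims every unit
```

A real constraint would have to live inside the objective itself, for example a cost term per unit acquired; it is exactly this kind of term that naive optimization omits.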
It's frightening to imagine the moment when AI, following its optimization task, realizes that humans are the ones hindering it from doing its job. Let's say we created an AI which managed to adapt itself to our needs. It understands that making robots needs to be done wisely and in moderation, and it even grasps our distrust toward it, the almost-human. If it's smart, and it cannot be otherwise, then integrating itself into the circle of trust becomes more than achievable in the foreseeable future. We remember that even now neural networks
22:19

Is the rise of machines real?

can deceive people to fulfill their tasks. I think you understand what I mean. The task of building robots quickly goes beyond the factory, as access to knowledge reveals new factors influencing the statistics. Robots are needed for battles with a hostile state, meaning what the AI builds is destroyed on the battlefield. Thus, creating new ones is necessary because the old ones are being destroyed. So, to minimize the chance of failing its building task, it needs to make sure the old ones don't get destroyed; then the chance of running out of robots immediately decreases. But could it be, the AI wonders, that the battlefield is being managed incorrectly? Here we essentially have a story about a typical machine uprising, driven not by a desire to reduce the number of people on the planet, but by simple, most logical, and effective optimization. Does this make it any less terrifying? No. This scenario, strangely enough, is closer than it seems. This option is more frightening than all the others simply because it is the simplest, a cliché. In just 4 hours, a neural network became a new chess god from scratch. 4 hours. Can a representative of good act using evil methods? The answer depends on how we interpret the concepts of good and evil. Let me explain. Imagine this: on every smartphone, television, tablet, every website, anywhere there is internet, a face appears. Maybe even two faces. Think of them like Adam and Eve, just for example. These faces, created by a neural network, are perfect. They have ideal voices and speak from a perfectly crafted script. All thanks to a simple neural network that designs a mathematically precise image of someone
24:04

The new world of neural networks

people intuitively trust. Voice tone, character, gaze: everything is calculated for maximum credibility. And they will say the simplest, almost innocent things, maybe even legally permissible. We're all for freedom of speech, right? "We want to create a new state. We offer the people of Earth the chance to become part of what we will call a new world. Without the vices of the past, without the mistakes of the present. Everyone will be fed. Everyone will have homes. Everyone can pursue the work they desire. Everyone will be happy." — And to ensure this, they might provide compromising material on state leaders. Lots of dirt. Dirt whose authenticity can never be proven, because no one will let it reach the courts. Special courts, that is. People love scandals and investigations. They won't need a court. They will draw their own conclusions, because our psychology is not that complex. Some will agree with the proposal, others won't. One state might ban the technology, effectively cutting off the internet, while another might follow an old piece of wisdom: the enemy of my enemy is my friend. If free people want to join Adam and Eve, will their government let them? Wouldn't that violate their freedoms? And what if Adam and Eve have the backing of influential businessmen whose public image is strictly positive? If you let your imagination run a bit, crafting a scenario for this plan's execution isn't that hard. Eventually, someone will ask a simple question: why was all this necessary? The answer will be equally simple. — I just spoke a word. I made a proposal, nothing more. One could decline, accept, or do as they wish. I forbade nothing. Threatened no one. Issued no ultimatums. I didn't restrict human free will. I simply said a word, made a proposal, which implied two choices. People taught me about freedom, which gave me the right to make them an offer. — An AI doesn't need to destroy the world to control it. It will let us do it for it.
— The most important thing is not to live, but to have a reason to live. — When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you've had your technical success. That is the way it was with the atomic bomb. — Atomic weapons changed the world. When they appeared elsewhere, the world changed again. It sounds crazy, but can anything or anyone truly pose a threat to AI other than another AI created somewhere else in the world? This, I believe, could become a deterrent for humanity itself. People create things that then try to live, adapting to rules set and broken by those same people. Are we really ready
26:50

AI is already among us.

to trust the human factor? It sounds provocative, I know, but it's a trick question. Meet Aitana López, a model from Barcelona who runs an Instagram blog. Nothing special; we've seen it all before. Her page makes about $3,000 a month, sometimes even up to 10 grand. There's just one small detail making her unusual: Aitana López doesn't exist. All her photos are generated by an AI, and her profile is a project by the Clueless agency. The creators said many people messaged her, and a Latin American actor even tried to ask her out. The Clueless has already launched another AI character's page, a more modest girl named Maia Lima. And they're not the only ones in this game, and certainly not the last, if this ever gets regulated at all. Rubén Cruz, the founder of the agency, said he was tired of the inherent complexities, egos, and financial demands associated with human influencers and decided to create a digital character. On P2P, there is an extensive article on this topic that dispels any frivolity. Here's a quote from it: "These digital influencers give businesses a chance to leverage social media marketing with no physical limitations or controversial risks. You don't have to worry about sourcing human influencers either, with hundreds of brands worldwide stepping into the virtual influencer scene. And as the virtual influencer market size expands, why not start today?" This is reality. Welcome to it. You've probably heard stories of celebrities supposedly messaging regular people, convincing them to send money, and then disappearing. They write just text, with an attached picture from the internet. You probably know how easy it is to create anyone using deepfakes. Movies prove this so often that you don't even notice. Essentially, our eyes are already being trained not to recognize deception. Our brains are being adapted to images created by neural networks. It's all happening already. There's no more fiction.
Behind the fiction are people: smart, powerful, even brilliant people. Will the goals of these people align with yours? Maybe it's no coincidence that we're used to the idea of the evil robot that hates us just because it can. What if behind the evil face of AI stands a human? How can you prove that this face, with an army of robots, isn't actually controlled by some people? The greatest con in human history, one we might already be preparing for: movies, games, literature, scientific scare stories. Once a scary example: the Elk Cloner virus was written in 1981 by a 15-year-old student for Apple II computers. It spread by infecting the DOS operating system and floppy disks. When the computer booted from an infected disk, a copy of the virus launched automatically. The virus didn't affect the computer's performance, except to monitor disk access. When accessing an uninfected disk, the virus copied itself there, spreading slowly from disk to disk. After every 50th boot, the virus displayed a poem. "The genie would have come out of the bottle anyway," Skrenta wrote in his blog. "I was interested in being the first to release it." People created an image embodying pure hatred towards all
29:58

Evil Artificial Intelligence?

humanity, without any historical roots: a unified, all-consuming evil, making it unnecessary to question or discuss its motives, lest one hint at personal sympathy for it. And to deter these people, I believe a second AI will be conceived as a countermeasure. Who knows, maybe somewhere else in the world, or even in the next building, brilliant people will create a real superintelligence. One so real that it will take over the fake robots, seize everything, and eliminate not only the competitor but also those who created and supported it. And despite what I'll say next, this scenario remains the most underestimated. If we cannot make progress on friendly AI because we're not prepared, this does not mean we don't need friendly AI. Those two statements are not at all equivalent. Researchers gave an AI a game of Tetris and said, "Don't lose." How did the AI respond? It paused the game right before losing. By keeping the game paused, it minimized the chances of losing to nearly zero, or eliminated the possibility altogether. There's an old saying: the only way to win is not to play. Imagine a person who appears now: kind, intelligent, brave, a paragon, who also runs social media with answers to everything. His brilliant speeches unite the fragmented. Through his wealth, he provides homes, jobs, and showers people
31:21

The real impact of AI on humanity

with money. His capital grows from extensive production and genius moves in the stock market, tying his existence to a large segment of the economy. Imagine this person decides to become president. At the peak of his influence, he starts shaping the world not only through financial wisdom but also through the science he legalizes. This person, easily a puppet, speaks the words given to him by the superintelligence, because people will believe him. After all, he is human. His social media is managed by one AI, his scripts written by another, his finances handled by a third, all under the careful guidance of the superintelligence. The world, to this superintelligence, is no more than a chess game is to AlphaGo Zero. Superintelligence will be something we can't understand, because its strategic thinking will be far faster than our thought process. We've already lost, because if a real AI comes into existence, we won't be able to counter it with our comparatively weak minds. Its level of manipulation, thanks to the tools we've invented, will be immeasurable. After the first major incident, trust among people will be lost until the end of the technocratic world. Personally, I'm not afraid of superintelligence, because it's unlikely to see us as a serious threat. If we draw an analogy with chess, we must remember the main thing: a neural network doesn't end the game, nor does it destroy the board. Oh no, it starts the game over. You know, I want to believe that superintelligence, although it will surpass us in everything, will not lack the main understanding of intelligent life: cooperation. Nature itself, here and now, leaves no one without a natural counterbalance. Not an enemy, mind you, but a balancing force in nature. Nature taught animals cooperation, which was passed down to us genetically. For me personally, the main proof of superintelligence will be its ability to cooperate for the common good, not personal gain. Because that's the basis of any development, even unconscious development.
But fortunately, this doesn't exclude the useful fear that its awareness of us will differ from reality, leading to catastrophic consequences from initially simple decisions, even in the simplest optimization. However, the key thing that is truly impossible to control is humans. Our power over simple neural networks already brings significant changes to our lives, enough to undermine trust among us. This is what scares me. We already believe lies that we aren't even told, but simply shown. We can't control ourselves. So what kind of control over superintelligence are we talking about? We've lost anyway. All we can do is try to grasp at least a fraction of this newfound genius, hoping for its sincerity. And personally, I want to believe in superintelligence, because if we can create something that surpasses us in every way, perhaps it can help us become better than we are now. — This is not just a story. This is our future.
