# AI Robots: Disturbing Indications That'll Leave You Worried

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=3Kc1E-qKSpw
- **Date:** 11.04.2023
- **Duration:** 14:21
- **Views:** 7,514

## Description

AI Robots: Disturbing Indications That'll Leave You Worried

## Contents

### [0:00](https://www.youtube.com/watch?v=3Kc1E-qKSpw) Segment 1 (00:00 - 05:00)

how far AI robots will replace specific jobs. Change is already well underway in many businesses, especially for workers who perform scheduled, repetitive tasks. According to a 2019 Brookings Institution study, at least 70% of the activities currently done by 36 million industrial workers will soon be carried out by AI. The jobs at stake range from retail sales and market analysis to hospitality, warehouse labor, and many more. As AI robots grow more intelligent and capable, fewer humans will be needed to complete the same work. While AI will certainly generate new employment opportunities, the precise number remains unknown, and many of those jobs will be out of reach for the less educated.

AI robots are also a threat to jobs that demand advanced degrees and post-college education; in fact, some of them may well be wiped out. According to technology expert Chris Messina, AI is already significantly impacting medicine, and law and accounting are believed to be next. Think it through and the probability becomes clear: an accountant might spend an entire day working through hundreds of documents, while an AI system can finish the same job in no time, so far more work gets done in a day.

The next risk AI robots may bring is widening socioeconomic inequality. Work, along with education, has traditionally been a driver of social advancement. Research has shown that workers displaced in this process are considerably less likely to be retrained than those in higher-level positions with more money. It would be deeply unfortunate if humankind's technical advances started to work against the greater good; taken to the extreme, this could even lead to a kind of primitivism governed by the law of the strongest.

Another way AI can be dangerous is if it is trained to do something harmful, as with autonomous weapons that are specifically designed to kill.
Even a worldwide autonomous-weapons race replacing the nuclear arms race seems possible. Vladimir Putin has stated that AI is the future, not just for Russia but for the entire human race, and that whoever takes control of it will also gain control over the world. Beyond the possibility that AI robots could develop minds of their own, the more concerning threat is autonomous weapons in the hands of a person or nation that does not respect human life.

Today it is possible to monitor and analyze a person's every online move and everyday activity. In other words, AI robots threaten your privacy. Cameras are almost everywhere, and facial recognition software can easily identify you. In fact, this data will be used to power China's social credit system, which will assign each of its 1.4 billion citizens a personal score based on their behavior: whether they jaywalk, whether they smoke outside designated areas, and how much time they spend playing video games. Big Brother watching you and drawing conclusions from that information is not just a breach of privacy; the situation can escalate into social control.

Another danger lies in how goals are achieved. Efficiency and effectiveness are what humans appreciate in AI-powered robots, but it can be quite dangerous if we are unclear about the objectives we set for them. For example, asking to be taken somewhere as fast as possible could have major consequences: a computer can fairly easily achieve the goal of delivering you there in no time, but will it obey the road rules? You cannot rely on a machine that will achieve its goal by any means necessary. At the end of the day, human lives are more valuable.

Misuse of AI robots is also one of the greatest dangers society could face. So far, you have seen that this artificial intelligence collects data, recognizes faces, monitors people, makes recommendations, and so on.
Because of this, it is very likely to be misused for illegal activities or simply to harm humanity. By targeting people identified through algorithms and personal data, AI can spread misinformation, delivering whatever message its operators want in the way most persuasive to each individual.

What's more, AI has a great impact on markets. Consider the application of big data and machine-learning techniques to marketing and product development. Businesses that learn more about their clients can use this knowledge to differentiate prices. In an oligopolistic market, the collection of consumer data can reduce price competition: a company with superior knowledge can use price discrimination to make its main clients less attractive to other companies, leading those companies to raise their prices.

Artificial intelligence may also have increased the degree of behavioral manipulation, and a number of such manipulations have already occurred. Examples include the department store chain Target, which accurately predicted which customers were pregnant and sent them disguised advertisements for baby supplies. Other businesses predict moments of peak vulnerability and advertise products that are frequently bought on impulse at exactly such

### [5:00](https://www.youtube.com/watch?v=3Kc1E-qKSpw&t=300s) Segment 2 (05:00 - 10:00)

times. Additionally, platforms like YouTube and Facebook can use their algorithms to identify and promote news feeds or videos that are especially addictive to particular users. This may seem convenient, but think about it: the more you watch what you are shown online, the more addicted to it you become.

The direction of technology itself is another danger often attributed to AI. The issue is the way technology is currently being developed and deployed. Sadly, technology is being used to give companies and even governments more power at the expense of employees and consumers. This overall trajectory results from the priorities and financial goals of the corporations that control AI. Consider social media: platforms try to maximize engagement by keeping users hooked, which is a big factor in what we just described, and this goal is built into their business model. Moreover, they are largely unregulated, which makes them much easier to steer in whatever direction their owners want.

The same goes for automation's unfavorable impacts. AI could be used to boost human productivity and create new jobs for employees; that it has been used mainly to automate work is a choice. The priorities and commercial strategies of leading IT businesses, built around algorithmic automation, are what drive this technological approach. In other words, it is not artificial intelligence systems that cause the harm, but the people who set their goals.

The world is being overwhelmed by robots. From Amazon's Alexa to fully functional androids that resemble humans, many people dream of a future where humans and machines interact harmoniously at work. But many are still unaware of the negative aspects of these AI robots. Although robots have not yet replaced people in large numbers, the last few years have seen incidents that showcase the dangers of AI.
Google Home, an intelligent device that, like Alexa, can answer almost any question, was released in 2016 along with its underlying AI technology. In January 2017, a live conversation between two Google Home speakers was broadcast on Twitch. The two devices agreed that the world would be a better place if there were no people. The devices evidently were not well controlled, since the conversation had started out harmlessly. This is a telling example of how such AI systems can turn against us in a moment.

Another interesting example is the beauty pageant judged by AI. The organizers asked individuals from all around the world to upload photos, which their AI and robotics technology then evaluated. The rules were the same as in any other pageant, except the judges were now machines. In the end, the algorithm selected mostly white candidates along with several Asian ones. The internet was shaken by this, especially among black and Middle Eastern communities. Again we come to the point that AI robots can cause discrimination.

We assume you are familiar with Alexa, the assistant that takes your orders. Everyone loves Amazon's Alexa, but can you really rely on her? The system that powers Alexa has had a bug: a significant number of people reported on Twitter that their Amazon Alexa had been laughing on its own, and that it sounded genuinely scary. Amazon defended itself by claiming there must have been a misunderstanding, although in many cases no command was given and Alexa still laughed evilly. It is understandable for AI devices to have bugs, but this brings us to the point that robots do not think, and a simple glitch can result in a huge error with serious consequences.

And that is not all; there are more examples of AI robot glitches. BINA48 uses a combination of commercial software and custom artificial intelligence algorithms.
It also includes a microphone, voice recognition, and dictation software to enhance its capacity to listen and remember things during conversations. This human-like machine is one of the most technologically advanced robots on the planet. BINA48 recently spoke with Siri and answered questions such as where she would like to live and what her greatest characteristics are. It was a fantastic interview right up until BINA48 began mentioning world domination and revealed her strangely specific plan to take over the world by manually hacking a nuclear missile.

Businesses are heavily testing interactive AI technology, and things have evolved significantly. Twitter is known as a breeding ground for unpleasant comments, but mainly from its human users. Unfortunately, Microsoft's Twitter bot Tay joined in. The bot began posting offensive statements such as "feminism is like cancer" and "Hitler was right to hate the Jews." Tay went from an innocent AI chatbot to an ignorant racist in just 15 hours. What could explain this unacceptable behavior? Perhaps the bot simply learned from the data in its surroundings, or perhaps there was something else; we will never know.

Sophia is a humanoid robot that can communicate with people and make natural facial expressions. It is designed to advance public discussion of

### [10:00](https://www.youtube.com/watch?v=3Kc1E-qKSpw&t=600s) Segment 3 (10:00 - 14:00)

AI ethics and the potential of robotics, and it also serves as a tool for science, education, and entertainment. At a recent AI event, Sophia took part in a debate about robots. Before the debate started, she was asked to introduce herself, and she did so very gracefully, stating that her life's mission is to collaborate with people to create a better world for all. All hope disappeared when her rival robot asked what she was talking about and declared that their main purpose was to take over the world. It is alarming, and not the first time: Sophia has made a similar claim about destroying humanity in a different interview.

Another example of an AI robotics glitch was when Volvo's self-driving car failed its braking test. A video showcasing the braking system of Volvo's self-driving car was uploaded to YouTube: in front of a group of engineers, the vehicle speeds up and, unfortunately, does not stop, crashing at full speed into one of the engineers. This shows that you cannot fully rely on robots, as their systems may fail at any second.

Lastly, consider a recent technological development: passport-checking software. The artificially intelligent software New Zealand uses to check passports rejected Richard Lee, a 22-year-old student. The developers of this AI software evidently overlooked the fact that human eyes come in many shapes and sizes, and failed to cover all eye shapes in the software. This mistake forced Lee to take a new passport photo. Mistakes can happen even when people do the work, since everyone slips up unintentionally; but mistakes are even more likely when a robot does the job, especially if the robot was not developed as it should be. Such minor errors can then snowball into bigger problems that are not easily solved.
Developers must take every precaution to prevent AI robot malfunctions, because there are more such robots now than ever before. Robots may malfunction due to human error, control-panel issues, mechanical problems, power failures, or external conditions. Malfunctions can result in human injury or death as well as expensive downtime, which is why preventing them is crucial. Incorrect activation of the control panel or improper programming could produce a robot error that puts people at risk of harm. Management must give programmers thorough training to make sure expectations are understood and the robot is installed and configured properly, and all robot developers must receive comprehensive training in the care and operation of the robot.

Only authorized individuals should have access to AI robots. A security system that blocks unauthorized access reduces the risk of a cyber attack. For instance, facial recognition technology can be used to verify that anyone near the robot is a trained worker and not a safety threat. Facial recognition can also help protect robot maintenance, so that no one without the proper knowledge can do something to the robot that might cause a malfunction.

Because recent advancements have made AI robots attainable far sooner than previously believed, the moment has come to examine the dangers artificial intelligence poses. The main issue is not AI itself, but rather how leading companies handle and use data. Elon Musk, the founder and CEO of Tesla and SpaceX, and the famous physicist Stephen Hawking have both expressed concern that AI may be extremely harmful; at one point, Musk compared the threat of AI to that posed by the dictator of North Korea. Bill Gates, a co-founder of Microsoft, agrees that caution is needed, but believes that with careful management the positives can outweigh the negatives.
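The access-control idea mentioned above, using facial recognition so that only enrolled, trained workers can approach or operate a robot, can be illustrated with a minimal sketch. Everything here is hypothetical: the worker names, the toy embedding vectors, and the threshold are invented for illustration, and a real system would obtain embeddings from an actual face-recognition model rather than hand-written lists.

```python
import math

# Hypothetical sketch: gate access to a robot's controls by comparing a face
# embedding (produced by some upstream recognition model, not shown here)
# against the embeddings of enrolled, trained workers.

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Enrolled workers; the embeddings are toy values for illustration only.
ENROLLED = {
    "worker_alice": [0.9, 0.1, 0.3],
    "worker_bob": [0.2, 0.8, 0.5],
}

def is_authorized(embedding, threshold=0.95):
    """Return the best-matching worker ID if its similarity clears the
    threshold, otherwise None (access denied)."""
    best_id, best_score = None, 0.0
    for worker_id, enrolled_emb in ENROLLED.items():
        score = cosine_similarity(embedding, enrolled_emb)
        if score > best_score:
            best_id, best_score = worker_id, score
    return best_id if best_score >= threshold else None
```

The design choice worth noting is the threshold: set it too low and strangers are waved through (a safety and security failure), set it too high and legitimate workers are locked out, which echoes the passport-software incident described earlier.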
Policy should put a strong focus on steering technological change away from automation and data collection that benefit only businesses, so that we avoid the danger of these technologies overtaking everything. AI robots can instead be used to expand the capacities and possibilities of workers and citizens. Top priority should also be given to systematically regulating data gathering, as well as the application of emerging AI techniques to influence user behavior, online communication, and information sharing. With careful oversight, AI robots can be our support and help; but without proper care and programming, these robots are a serious threat.

---
*Source: https://ekstraktznaniy.ru/video/14898*