# AI for Business Full course in 11 Hours [ 2026] | How AI Could Empower Businesses | Edureka Live

## Metadata

- **Channel:** edureka!
- **YouTube:** https://www.youtube.com/watch?v=cG1W_AgYRRo
- **Date:** 26.03.2026
- **Duration:** 10:56:27
- **Views:** 4,271

## Description

📌Generative AI Course: Masters Program: https://www.edureka.co/masters-program/generative-ai-prompt-engineering-training
🔥Integrated MS+PGP Program in Data Science & AI: https://www.edureka.co/dual-certification-programs/ms-data-science-pgp-gen-ai-ml-birchwood

Imagine a world where AI isn't just science fiction but a powerful partner in your business success. This video dives into the exciting world of AI for business, showcasing its revolutionary impact across industries. Witness real-world examples of AI driving innovation in marketing, finance, and manufacturing: data analysis on steroids. Discover the massive benefits of AI for business: predict customer behavior and make smarter decisions. Get a sneak peek into the future of manufacturing with smart factories (Industry 4.0) and the rise of human-AI collaboration (Industry 5.0). We'll dive into how AI is changing industries like marketing, finance, and supply chain. AI isn't magic; it's the future of business. 
00:00:00 Introduction
00:02:11 AI for Business
00:07:38 What Is Artificial Intelligence?
00:14:19 Types Of Artificial Intelligence
00:24:33 AI vs Machine Learning vs Deep Learning
00:38:17 What is Deep Learning
00:49:30 What is LLM (Large Language Model)
01:06:00 What Is Generative AI?
01:21:18 What Is AI Ethics
01:29:55 What is Responsible AI
01:38:09 Artificial Intelligence with Python
03:17:04 Artificial Neural Network
03:48:53 Recurrent Neural Networks
04:17:44 Convolutional Neural Network
04:38:19 Introduction to TensorFlow
05:01:57 Prompt Engineering
05:21:21 Prompt Engineering For Code Generation
05:30:40 Building a Chatbot with Prompt Engineering
05:46:44 OpenAI o3-mini Model
05:53:55 What is Agentic AI?
06:04:59 Introduction to Midjourney
06:23:31 How to use Midjourney?
06:31:15 GitHub Copilot
06:48:34 What is Vibe Coding
06:57:44 Top 5 AI Frameworks in 2025
07:12:29 How AI Is Transforming Studio Ghibli-Style Animation?
07:17:56 AI in Web Development
07:43:07 AI in Healthcare
07:51:49 AI in Retail
07:57:57 AI in Automotive
08:07:52 AI for Marketing
08:20:28 AI in Finance
08:29:04 AI for HR
08:35:25 AI in Manufacturing
08:41:53 AI in Cybersecurity
08:51:41 AI for Startup
09:00:15 AI for Testing
09:05:40 AI for Ethical Hacking
09:10:34 AI on Microsoft Azure
09:23:00 AI and Industry 4.0
09:28:01 AI Engineer Learning Path
09:35:13 Top 15 AI Skills You Need to Know
09:43:33 Build AI Applicant Tracking System (ATS)
09:59:07 Artificial Intelligence Interview Questions & Answers
🔴 𝐋𝐞𝐚𝐫𝐧 𝐓𝐫𝐞𝐧𝐝𝐢𝐧𝐠 𝐓𝐞𝐜𝐡𝐧𝐨𝐥𝐨𝐠𝐢𝐞𝐬 𝐅𝐨𝐫 𝐅𝐫𝐞𝐞! 𝐒𝐮𝐛𝐬𝐜𝐫𝐢𝐛𝐞 𝐭𝐨 𝐄𝐝𝐮𝐫𝐞𝐤𝐚 𝐘𝐨𝐮𝐓𝐮𝐛𝐞 𝐂𝐡𝐚𝐧𝐧𝐞𝐥: https://edrk.in/DKQQ4Py

📝Feel free to share your comments below.📝

🔴 𝐄𝐝𝐮𝐫𝐞𝐤𝐚 𝐎𝐧𝐥𝐢𝐧𝐞 𝐓𝐫𝐚𝐢𝐧𝐢𝐧𝐠 𝐚𝐧𝐝 𝐂𝐞𝐫𝐭𝐢𝐟𝐢𝐜𝐚𝐭𝐢𝐨𝐧𝐬

🔵 DevOps Online Training: http://bit.ly/3VkBRUT
🌕 AWS Online Training: http://bit.ly/3ADYwDY
🔵 React Online Training: http://bit.ly/3Vc4yDw
🌕 Tableau Online Training: http://bit.ly/3guTe6J
🔵 Power BI Online Training: http://bit.ly/3VntjMY
🌕 Selenium Online Training: http://bit.ly/3EVDtis
🔵 PMP Online Training: http://bit.ly/3XugO44
🌕 Salesforce Online Training: http://bit.ly/3OsAXDH
🔵 Cybersecurity Online Training: http://bit.ly/3tXgw8t
🌕 Java Online Training: http://bit.ly/3tRxghg
🔵 Big Data Online Training: http://bit.ly/3EvUqP5
🌕 RPA Online Training: http://bit.ly/3GFHKYB
🔵 Python Online Training: http://bit.ly/3Oubt8M
🌕 Azure Online Training: http://bit.ly/3i4P85F
🔵 GCP Online Training: http://bit.ly/3VkCzS3
🌕 Microservices Online Training: http://bit.ly/3gxYqqv
🔵 Data Science Online Training: http://bit.ly/3V3nLrc
🌕 CEHv12 Online Training: http://bit.ly/3Vhq8Hj
🔵 Angular Online Training: http://bit.ly/3EYcCTe

🔴 𝐄𝐝𝐮𝐫𝐞𝐤𝐚 𝐑𝐨𝐥𝐞-𝐁𝐚𝐬𝐞𝐝 𝐂𝐨𝐮𝐫𝐬𝐞𝐬

🔵 DevOps Engineer Masters Program: http://bit.ly/3Oud9PC
🌕 Cloud Architect Masters Program: http://bit.ly/3OvueZy
🔵 Data Scientist Masters Program: http://bit.ly/3tUAOiT
🌕 Big Data Architect Masters Program: http://bit.ly/3tTWT0V
🔵 Machine Learning Engineer Masters Program: http://bit.ly/3AEq4c4
🌕 Business Intelligence Masters Program: http://bit.ly/3UZPqJz
🔵 Python Developer Masters Program: http://bit.ly/3EV6kDv
🌕 RPA Developer Masters Program: http://bit.ly/3OteYfP
🔵 Web Development Masters Program: http://bit.ly/3U9R5va
🌕 Computer Science Bootcamp Program : http://bit.ly/3UZxPBy
🔵 Cyber Security Masters Program: http://bit.ly/3U25rNR
🌕 Full Stack Developer Masters Program : http://bit.ly/3tWCE2S
🔵 Automation Testing Engineer Masters Program : http://bit.ly/3AGXg2J
🔵 Azure Cloud Engineer Masters Program: http://bit.ly/3AEBHzH

🔴 𝐄𝐝𝐮𝐫𝐞𝐤𝐚 𝐔𝐧𝐢𝐯𝐞𝐫𝐬𝐢𝐭𝐲 𝐏𝐫𝐨𝐠𝐫𝐚𝐦𝐬

🔵 Post Graduate Program in DevOps with Purdue University:  https://bit.ly/3Ov52lT

🌕 Advanced Certificate Program in Data Science with E&ICT Academy, IIT Guwahati: http://bit.ly/3V7ffrh

Please write back to us at sales@edureka.co or call us at IND: 9606058406 / US: +18885487823 (toll-free) for more information.

## Contents

### [0:00](https://www.youtube.com/watch?v=cG1W_AgYRRo) Introduction

Hello everyone and welcome to the AI for Business course. Artificial intelligence is transforming how organizations operate, make decisions, and deliver value to customers. From automating routine tasks to enabling smarter data-driven decisions, AI is becoming a key driver of business growth and innovation. In this course, you will explore how AI can be applied across different business functions such as marketing, finance, operations, and customer experience. You will learn the core concepts of AI in a simple and practical way and understand how businesses use AI tools and technologies to solve real-world challenges. By the end of this course, you will have a clear understanding of how AI fits into the business landscape and how it can be used to improve efficiency, drive innovation, and create competitive advantage.

So before we begin, please like, share, and subscribe to Edureka's YouTube channel, and hit the bell icon to stay updated on the latest tech content from Edureka. Also check out Edureka's Generative AI Masters Program. It is designed to help you build strong expertise in generative AI, Python, data science, NLP, ChatGPT, LLMs, prompt engineering, and agentic AI. The program focuses on hands-on learning through real-world projects, expert-led training, and industry-relevant tools. By enrolling in this generative AI certification program, you will gain practical skills to design, build, and deploy modern Gen AI applications. So check out the course link given in the description box below.

So first let us get started with AI for business. Artificial intelligence is the science of making machines think like humans. It can do things that are considered to be smart. AI technology can process a large amount of data in a way humans cannot. AI is transforming businesses by automating tasks, analyzing large amounts of data, and predicting customer needs and market trends, in everything from marketing campaigns to product development, ultimately leading to

### [2:11](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=131s) AI for Business

increasing efficiency, better decision-making, and happier clients.

So what are the tangible gains of AI for business? AI benefits businesses by making them leaner and more responsive to customer needs. It can also reduce costs: AI can identify areas for cost-saving, like optimizing logistics and automating tasks that were previously manual. This translates to a healthier bottom line for your business.

Then you have personalized customer experiences. AI tailors experiences to individual customers. Imagine a chatbot providing real-time support, and marketing campaigns with laser-focused targeting. AI fosters stronger customer relationships and boosts satisfaction.

Now let us learn about data-driven decisions. AI can analyze massive data sets, uncovering hidden patterns and trends that would be impossible for humans to see. This gold mine of insight empowers businesses to make smarter decisions about everything from marketing campaigns to new product releases.

Then we come to enhanced productivity. By automating tasks and streamlining workflows, AI frees up valuable time and resources. This allows businesses to get more done, improving both profit and output.

Now let's learn how machine learning has brought an impact to AI for business. Machine learning is a subfield of AI. It empowers machines to learn from data, like a self-improving student. Imagine factories where machines predict their own breakdowns, stores suggesting products you would love before you even know it, and doctors getting a helping hand from AI for diagnosis. That's the power of machine learning in action.

Industry 4.0 has brought great changes across platforms and fields. It is a tech makeover for factories. Imagine machines talking to each other and fixing themselves, where AI and IoT (the Internet of Things) make things faster with less waste. This could mean cleaner production and new jobs, but security and cost are great challenges. It's basically a smarter way to manufacture things. Industry 5.0 builds on Industry 4.0's automation by focusing on human-machine collaboration. Imagine robots as useful assistants that act on real-time data, with roles and responsibilities put into action. Industry 5.0 will shape the future of AI for business: AI will not only automate tasks and boost efficiency but also personalize customer experiences and unlock the data on which they are built.

Have you ever imagined how a world without digital could work? Imagine a world where your work day is smoother, your customers feel like VIPs, and your boss keeps raving about your brilliant ideas. AI will handle the boring stuff, freeing you up to tackle creative challenges. Plus, it can transform customer service by remembering finer details and providing 24/7 service. Soon, even small businesses will benefit, not just big businesses.

Now, let's learn some use cases of AI for business. The first one is marketing and sales. Marketing and sales get AI personalization, with predictive marketing campaigns, product recommendations, content writing, and creative content. 
Lead generation and scoring will rank leads, predict their likelihood of converting, and focus attention on the most promising leads. As we know, chatbots can answer questions 24/7, qualify leads, and streamline service interactions. Next, human resources and finance play a vital role. In the recruitment process, AI can scan résumés and conduct initial candidate screenings, saving HR professionals' time. In employee onboarding, chatbots can answer new employees' questions and guide them through the onboarding process. We also have fraud detection, which flags fraudulent activities and protects companies from financial losses, and risk management, where AI analyzes market data and financial statements to surface potential risks.

So I would like to tell you that using AI in business can make things work better and help you come up with new ideas. It's important to make sure that you're doing it ethically and keeping an eye on what you're doing it for, especially in today's digital world.

Welcome to our exploration of artificial intelligence, the groundbreaking field shaping the future of technology and human interactions. From its humble beginnings to its deep impact today, AI continues to revolutionize industries and reshape what's possible. Join us as we explore the fascinating world of intelligent machines, learning

### [7:38](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=458s) What Is Artificial Intelligence?

algorithms, and creative systems that rival human awareness. Get ready to uncover the incredible potential and ethical considerations of this rapidly evolving field as we embark on a journey into the heart of artificial intelligence.

Now let us learn what artificial intelligence is. AI, or artificial intelligence, is the branch of computer science focused on creating systems that perform tasks that would normally require human intelligence. These tasks can range from understanding natural language, to recognizing patterns, to making decisions, and lastly to learning from experience. AI is a collaboration of ideas, methods, and knowledge, where people from multiple academic disciplines work on different problems, share their knowledge for better understanding, and come up with good solutions. But it can also be rule-based and operate under a set of rules and conditions only.

So I would like to tell you about a small real-world experiment done by Alan Turing that helped establish artificial intelligence. Alan Turing was the first person to conduct substantial research in the field that he called machine intelligence. The Turing test, proposed by Alan Turing in 1950, was designed to explore whether machines could exhibit human-like intelligence. It involves a human evaluator communicating with both a human and a machine through a text interface. If the evaluator cannot distinguish between the two based on their responses, the machine is said to have passed the test. It serves as a benchmark for assessing the progress of AI and for discussions of the nature of intelligence and consciousness. So this experiment was meant to show that even machines can work like humans, bringing human-like knowledge into machines.

There are two types of artificial intelligence. First, let's talk about weak AI. Before getting into what weak AI is, I would like to give you some examples from the real world. I know you all will be knowing about Alexa, Apple Siri, and also self-driving vehicles. All of these are considered weak AI. It is also known as narrow AI, or artificial narrow intelligence, which is trained and focused to perform a specific task. Weak AI drives most of the AI that surrounds us today. Narrow might be a more apt descriptor for this type of AI, as it is anything but weak.

Next, let us learn about strong AI. Strong AI is made up of artificial general intelligence and artificial super intelligence. Simply put, it is where a machine would have intelligence equal to humans. It would be self-aware, with consciousness and the ability to solve problems, learn, and plan for the future.

Have you ever thought about what AI does at its core? It works by analyzing a lot of data to find patterns and useful information. It even learns from this data to get better at tasks over time. With this learning, AI can make decisions, predict future events, and do tasks automatically that would normally need human intelligence. This helps businesses and other organizations work more efficiently. So AI is about making computers smarter and more helpful in everyday life. So let me tell you some use cases from daily real life. 
Firstly, I have taken cybersecurity, as it plays a very important and vital role across many platforms. Cybersecurity is a critical concern for individuals, businesses, and governments, as cyber threats continue to evolve in complexity and sophistication. AI plays a very vital role in augmenting cybersecurity defenses. One challenge is false positives, alerts triggered by misleading information, which lead to alert fatigue and reduced operational efficiency. Cyber attackers can also exploit vulnerabilities in AI models, evading detection and compromising security defenses. Models trained on incomplete or private data sets may leak private data or produce flawed outcomes. This raises ethical concerns and regulatory compliance issues, creating very complicated problems too.

The next one is entertainment. As people know, entertainment takes a very huge part in everyone's life, social media for example. AI is transforming the entertainment industry by revolutionizing content creation, personalization, and audience engagement, across television, gaming, music, and digital media platforms. One significant use of AI in entertainment is personalized content recommendation. Personalized entertainment experiences are enhanced through AI-driven recommendation systems. Platforms like Netflix, Amazon Prime, and Spotify keep people highly engaged this way, and these recommendations improve over time.

So I would like to conclude by telling you that artificial intelligence can do amazing things, like analyzing data and making tasks easier. But it also brings up huge questions about fairness, jobs, and who controls it. We need to be careful in how we develop and use AI, making sure it helps everyone and doesn't cause harm at all. This makes it easier for people to get creative and to set rules and guidelines.

Now, the history of artificial intelligence. The concept of AI goes back to the classical ages. In Greek mythology, the concept of machines and mechanical men was well thought of. An example is Talos. Talos was supposedly a giant animated bronze warrior who was programmed to guard the island of Crete. Now, let's move ahead to

### [14:19](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=859s) Types Of Artificial Intelligence

the 20th century. In 1950, Alan Turing proposed the Turing test. The Turing test basically determines whether or not a computer can intelligently think like a human being. The Turing test was the first serious proposal in the philosophy of artificial intelligence. 1951 marked the era of game artificial intelligence. This period was called game AI because a lot of computer scientists developed programs for checkers and for chess. However, these programs were later rewritten and redone in a better way. 1956 marked the most important year for artificial intelligence: during this year, John McCarthy first coined the term artificial intelligence. This was followed by the first AI laboratory, which was set up in 1959. The MIT AI Lab was the first setup basically dedicated to the research of AI. In 1961, the first industrial robot, Unimate, was introduced to the General Motors assembly line. In 1966, the first AI chatbot, called ELIZA, was introduced. In 1997, IBM's Deep Blue beat the world champion Garry Kasparov in the game of chess. 2005 marks the year when an autonomous robotic car called Stanley won the DARPA Grand Challenge. In 2011, IBM's question-answering machine Watson defeated the two greatest Jeopardy champions, Brad Rutter and Ken Jennings. So, that was a brief history of AI.

Now guys, since the emergence of artificial intelligence in the 1950s, we have seen exponential growth in its potential. AI covers domains such as machine learning, deep learning, neural networks, natural language processing, knowledge bases, expert systems, and so on. So now let's understand the different stages of artificial intelligence. Basically, when I was doing my research, I found a lot of videos and articles stating that artificial general intelligence, artificial narrow intelligence, and artificial super intelligence are the different types of AI. If I have to be more precise with you, these are actually three different stages of artificial intelligence, and the types of AI are completely different from the stages of AI. Under the stages of artificial intelligence, we have artificial narrow intelligence, artificial general intelligence, and artificial super intelligence.

So what is artificial narrow intelligence? Artificial narrow intelligence, also known as weak AI, is a stage of artificial intelligence that involves machines that can perform only a narrowly defined set of specific tasks. At this stage the machines don't possess any thinking ability; they just perform a set of predefined functions. Examples of weak AI include Siri, Alexa, AlphaGo, Sophia, self-driving cars, and so on. Almost all the AI-based systems built to this date fall under the category of weak AI or artificial narrow intelligence.

Next, we have artificial general intelligence, also known as strong AI. This stage is the evolution of artificial intelligence wherein machines will possess the ability to think and make decisions just like human beings. There are currently no existing examples of strong AI, but it's believed that we will soon be able to create machines that are as smart as human beings. Strong AI is actually considered a threat to human existence by many scientists, including Stephen Hawking, who warned that the development of full artificial intelligence could spell the end of the human race. Moving on to our last stage, which is artificial super intelligence. 
Artificial super intelligence is the stage of AI when the capability of computers will surpass human beings. Artificial super intelligence is currently seen as a hypothetical situation, as depicted in movies and science fiction books. You see a lot of movies which show machines taking over the world; all of that is artificial super intelligence. Now, I believe that machines are not very far from reaching that stage, taking into consideration our current pace. However, such systems don't currently exist. We don't have any machine that is capable of thinking better than a human being, or reasoning in a better way than a human. Artificial super intelligence is basically any machine that is much smarter than humans.

Now moving on to the different types of artificial intelligence. Based on the functionality of AI-based systems, artificial intelligence can be categorized into four types. The first type is reactive machines AI. This type of AI includes machines that operate solely based on the present data and take into consideration only the current situation. Reactive AI machines cannot form inferences from the data to evaluate any future actions. They can perform a narrow range of predefined tasks. An example of reactive AI is the famous IBM chess program that beat the world champion Garry Kasparov. This is one of the most impressive AI machines built so far.

Next, we have limited memory AI. Like the name suggests, limited memory AI can make informed and improved decisions by studying the past data from its memory. Such an AI has a short-lived, or you can say temporary, memory that can be used to store past experiences and hence evaluate future actions. Self-driving cars are limited memory AI that use data collected in the recent past to make immediate decisions. For example, self-driving cars use sensors to identify pedestrians crossing the road. They identify steep roads or traffic signals, and they use this to make better driving decisions. This also helps in preventing future accidents.

Next, we have theory of mind artificial intelligence. The theory of mind AI is a more advanced type of artificial intelligence. This category is speculated to play a very important role in psychology. This type of AI will mainly focus on emotional intelligence so that human beliefs and thoughts can be better comprehended. The theory of mind AI has not been fully developed yet, but rigorous research is happening in this area.

Moving on, our last type of artificial intelligence is self-aware artificial intelligence. So guys, let's just fold our hands and pray that we don't reach the state of AI where machines have their own consciousness and become self-aware. This type of AI is a little far-fetched, but in the future, achieving a stage of super intelligence might be possible. Geniuses like Elon Musk and Stephen Hawking have constantly warned us about the evolution of AI. So guys, let me know your thoughts in the comment section: do you think we'll ever reach the stage of artificial super intelligence?

Moving on to the last topic of today's session: the different domains, or the different branches, of artificial intelligence. Artificial intelligence can be used to solve real-world problems by implementing machine learning, deep learning, natural language processing, robotics, expert systems, and fuzzy logic. These are the different domains, or you can say the different branches, that AI uses in order to solve any problem. 
Recently, AI is also being used as an application in computer vision and image processing. For now, let me tell you briefly about each of these domains. Machine learning is basically the science of getting machines to interpret, process, and analyze data in order to solve real-world problems. Under machine learning, there's supervised, unsupervised, and reinforcement learning. If any of you are interested in learning about these technologies, I'll leave a link in the description box; you can all go through that content.

Next, we have deep learning, or neural networks. Deep learning is a process of implementing neural networks on high-dimensional data to gain insights and form solutions. It is basically the logic behind the face verification algorithm on Facebook, behind self-driving cars, and behind virtual assistants like Siri and Alexa.

Then we have natural language processing. Natural language processing refers to the science of drawing insights from natural human language in order to communicate with machines and grow businesses. Examples of NLP users are Twitter and Amazon: Twitter uses NLP to filter out terroristic language in tweets, and Amazon uses NLP to understand customer reviews and improve user experience.

Then we have robotics. Robotics is a branch of artificial intelligence which focuses on the design and applications of robots. AI robots are artificial agents which act in a real-world environment to produce results by taking accountable actions. I'm sure all of you have heard of Sophia; Sophia the humanoid is a very good example of AI in robotics.

Then we have fuzzy logic. Fuzzy logic is a computing approach based on the principle of degrees of truth, instead of the usual modern logic that we use, which is basically Boolean logic. Fuzzy logic is used in medical fields to solve complex problems that involve decision making. It is also used for automating gear systems in cars, and so on.

Then we have expert systems. An expert system is an AI-based computer system that learns and replicates the decision-making ability of a human expert. Expert systems use if-then logic notions in order to solve complex problems; they do not rely on conventional procedural programming. Expert systems are mainly used in information management. They are seen in fraud detection, virus detection, and also in managing medical and hospital records, and so on.

AI, machine learning, and deep learning are terms which have confused a lot of people, and if you too are one among them, let me resolve it for you. Well, artificial intelligence is a broader umbrella under which machine learning and deep learning come. You can also see in the diagram that even deep learning is a subset of machine learning. So you can say that all three of them, AI, machine learning, and

### [24:33](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=1473s) AI vs Machine Learning vs Deep Learning

deep learning, are nested subsets of each other. So let's move on and understand how exactly they differ from each other. Let's start with artificial intelligence. The term artificial intelligence was first coined in the year 1956. The concept is pretty old, but it has gained its popularity recently. But why? Well, the reason is that earlier we had a very small amount of data; the data we had was not enough to predict accurate results. But now there's a tremendous increase in the amount of data. Statistics suggested that by 2020, the accumulated volume of data would increase from 4.4 zettabytes to roughly 44 zettabytes, or 44 trillion GB of data. Along with such an enormous amount of data, we now have more advanced algorithms and high-end computing power and storage that can deal with it. As a result, it was expected that 70% of enterprises would implement AI over the next 12 months, up from 40% in 2016 and 51% in 2017.

Just for your understanding, what is AI? Well, it's nothing but a technique that enables machines to act like humans by replicating their behavior and nature. With AI, it is possible for machines to learn from experience. The machines adjust their responses based on new input, thereby performing human-like tasks. Artificial intelligence can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in them. You can consider that building an artificial intelligence is like building a church: the first church took generations to finish, so most of the workers who were working on it never saw the final outcome. Those working on it took pride in their craft, building bricks and chiseling stone that was going to be placed into the great structure. So as AI researchers, we should think of ourselves as humble brick makers whose job is to study how to build components, for example parsers, planners, or learning algorithms, that someday, someone, somewhere will integrate into intelligent systems. Some examples of artificial intelligence from our day-to-day life are Apple's Siri, chess-playing computers, Tesla's self-driving car, and many more. These examples are based on deep learning and natural language processing. Well, this was about what AI is and how it gained its hype.

So, moving on ahead, let's discuss machine learning and see what it is and why it was even introduced. Well, machine learning came into existence in the late 80s and the early 90s. But what were the issues people faced which made machine learning come into existence? Let us discuss them one by one. In the field of statistics, the problem was how to efficiently train large complex models. In the field of computer science and artificial intelligence, the problem was how to train a more robust version of an AI system. In the case of neuroscience, the problem faced by researchers was how to design an operational model of the brain. These were some of the issues which had the largest influence and led to the existence of machine learning. Machine learning shifted its focus from the symbolic approaches it had inherited from AI towards methods and models borrowed from statistics and probability theory. So let's proceed and see what exactly machine learning is. Well, machine learning is a subset of AI which enables the computer to act and make data-driven decisions to carry out a certain task. 
These programs or algorithms are designed in a way that they can learn and improve over time when exposed to new data. Let's see an example of machine learning. Let's say you want to create a system which tells the expected weight of a person based on their height. The first thing you do is collect the data. Let's say this is how your data looks: each point on the graph represents one data point. To start with, we can draw a simple line to predict the weight based on the height, for example a simple line W = H - 100, where W is weight in kg and H is height in centimeters. This line can help us make predictions. Our main goal is to reduce the difference between the estimated value and the actual value. In order to achieve that, we try to draw a straight line that fits through all these different points and minimizes the error, making it as small as possible. Decreasing the error, that is, the difference between the actual value and the estimated value, increases the performance of the model. Furthermore, the more data points we collect, the better our model will become. We can also improve our model by adding more variables and creating different prediction lines for them. Once the line is created, the next time we feed new data to the model, for example the height of a person, it will easily predict the output and tell you what that person's predicted weight could be. I hope you got a clear understanding of machine learning.
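To make the height-and-weight example above concrete, here is a minimal sketch in Python (assuming NumPy is available; the sample data points are invented for illustration) that starts from the simple rule W = H - 100 and then fits a least-squares line to shrink the error:

```python
import numpy as np

# Made-up (height cm, weight kg) data points, for illustration only
heights = np.array([150, 160, 165, 170, 175, 180, 185])
weights = np.array([52, 58, 62, 68, 72, 77, 83])

# Naive rule from the lecture: W = H - 100
naive_pred = heights - 100
naive_mse = np.mean((weights - naive_pred) ** 2)

# Fit W = a*H + b by least squares to minimize the squared error
a, b = np.polyfit(heights, weights, deg=1)
fit_pred = a * heights + b
fit_mse = np.mean((weights - fit_pred) ** 2)

print(f"naive rule MSE: {naive_mse:.1f}, fitted line MSE: {fit_mse:.1f}")

# Predict the weight of a new person from their height
new_height = 172
print(f"predicted weight at {new_height} cm: {a * new_height + b:.1f} kg")
```

The fitted line plays the role of the "straight line through the points" in the lecture: collecting more data points, or adding more variables, would improve it further.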
So, moving on ahead, let's learn about deep learning. Now what is deep learning? You can consider a deep learning model as a rocket engine, and its fuel is the huge amount of data that we feed to these algorithms. The concept of deep learning is not new, but recently its hype has increased and deep learning is getting more attention. This field is a particular kind of machine learning that is inspired by the functionality of our brain cells, called neurons, which led to the concept of artificial neural networks. Deep learning simply takes the data connections between all the artificial neurons and adjusts them according to the data pattern. More neurons are added if the size of the data is large. It automatically performs feature learning at multiple levels of abstraction, thereby allowing a system to learn complex function mappings without depending on any specific algorithm. You know what, no one actually knows what happens inside a neural network and why it works so well, so currently you can call it a black box.

Let us discuss some examples of deep learning and understand it in a better way. Let me start with a simple example and explain how things happen at a conceptual level. Let us try and understand how you recognize a square from other shapes. The first thing you do is check whether there are four lines associated with the figure or not. Simple concept, right? If yes, we further check if they are connected and closed. Again, if yes, we finally check whether the sides are perpendicular and all equal. If everything is fulfilled, yes, it is a square. Well, it is nothing but a nested hierarchy of concepts. What we did here is we took a complex task, identifying a square in this case, and broke it into simpler tasks. Now, deep learning also does the same thing, but at a larger scale.

Let's take the example of a machine which recognizes animals. The task of the machine is to recognize whether the given image is of a cat or of a dog. What if we were asked to resolve the same issue using the concept of machine learning? What would we do? First, we would define features such as: check whether the animal has whiskers or not, check if the animal has pointed ears or not, or whether its tail is straight or curved. In short, we would define the features and let the system identify which features are more important in classifying a particular animal. Now, when it comes to deep learning, it takes this one step ahead. Deep learning automatically finds out the features which are most important for classification, compared to machine learning, where we had to provide the features manually.

By now I guess you have understood that AI is the bigger picture, and machine learning and deep learning are its subparts. So let's move on and focus our discussion on machine learning and deep learning. The easiest way to understand the difference between machine learning and deep learning is to know that deep learning is machine learning; more specifically, it is the next evolution of machine learning. Let's take a few important parameters and compare machine learning with deep learning.

Starting with data dependencies: the most important difference between deep learning and machine learning is how performance changes as the volume of data increases. From the graph, you can see that when the size of the data is small, deep learning algorithms don't perform that well. But why? Well, this is because deep learning algorithms need a large amount of data to learn perfectly. On the other hand, machine learning algorithms can easily work with smaller data sets.

Next come the hardware dependencies. Deep learning algorithms are heavily dependent on high-end machines, while machine learning algorithms can work on low-end machines as well. This is because the requirements of deep learning algorithms include GPUs, which are an integral part of their working. Deep learning algorithms require GPUs as they do a large amount of matrix multiplication operations, and these operations can only be efficiently optimized using a GPU, as it is built for this purpose.

Our third parameter is feature engineering. Feature engineering is the process of applying domain knowledge to reduce the complexity of the data and make patterns more visible to learning algorithms. This process is difficult and expensive in terms of time and expertise. In the case of machine learning, most of the features need to be identified by an expert and then hand-coded as per the domain and the data type. For example, the features can be pixel values, shapes, textures, position, orientation, or anything else. The performance of most machine learning algorithms depends on how accurately the features are identified and extracted. Deep learning algorithms, in contrast, try to learn high-level features from the data. This is a very distinctive part of deep learning, which puts it way ahead of traditional machine learning. Deep learning reduces the task of developing a new feature extractor for every problem. In the case of a CNN algorithm, it first tries to learn the low-level features of the image, such as edges and lines, then proceeds to parts of faces of people, and finally to the high-level representation of the face. I hope things are getting clear to you.
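As a rough illustration of that CNN point, here is a minimal sketch (assuming TensorFlow/Keras is installed; the layer sizes, input shape, and two-class output are arbitrary choices for illustration, not taken from the video) of a small convolutional network whose early layers can pick up low-level features like edges and lines while deeper layers respond to higher-level structure:

```python
import tensorflow as tf

# A small CNN: early conv layers tend to learn low-level features
# (edges, lines); deeper layers combine them into higher-level ones.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),          # grayscale image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # low-level features
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # mid-level features
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),  # higher-level features
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),    # e.g. cat vs dog
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```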
So let's move on ahead and see the next parameter: the problem-solving approach. When we solve a problem using a traditional machine learning algorithm, it is generally recommended that we first break down the problem into different sub-parts, solve them individually, and then finally combine them to get the desired result. This is how a machine learning algorithm handles the problem. On the other hand, a deep learning algorithm solves the problem end to end. Let's take an example to understand this. Suppose you have a task of multiple object detection: your task is to identify what the object is and where it is present in the image. Let's see and compare how you would tackle this issue using the concepts of machine learning and deep learning. Starting with machine learning: in a typical machine learning approach, you would first divide the problem into two steps, object detection and then object recognition. First of all, you would use a bounding box detection algorithm, like GrabCut for example, to scan through the image and find all the possible objects. Once the objects are detected, you would use an object recognition algorithm, like an SVM with HOG features, to recognize the relevant objects. Finally, when you combine the results, you would be able to identify what the object is and where it is present in the image. On the other hand, in the deep learning approach, you would do the process end to end. For example, in a YOLO network, which is a type of deep learning algorithm, you would pass in an image, and it would give out the location along with the name of the object.

Now, let's move on to our fifth comparison parameter: execution time. Usually, a deep learning algorithm takes a long time to train. This is because there are so many parameters in a deep learning algorithm that the training takes longer than usual. The training might even last for two weeks or more if you're training completely from scratch. Machine learning, in comparison, takes relatively much less time to train, ranging from a few hours to a few weeks. The execution time is completely reversed when it comes to testing. During testing, a deep learning algorithm takes much less time to run, whereas if you compare it with a KNN algorithm, which is a type of machine learning algorithm, the test time increases as the size of the data increases.

Last but not least, we have interpretability as a factor for comparing machine learning and deep learning. This factor is the main reason deep learning is still thought over ten times before anyone uses it in industry. Let's take an example. Suppose we use deep learning to give automated scoring to essays. The performance it gives in scoring is quite excellent and is near to human performance. But there's an issue with it: it does not reveal why it has given that score. Indeed, mathematically it is possible to find out which nodes of a deep neural network were activated, but we don't know what the neurons were supposed to model and what these layers of neurons were doing collectively, so we fail to interpret the result. On the other hand, a machine learning algorithm like a decision tree gives us a crisp rule explaining why it chose what it chose, so it is particularly easy to interpret the reasoning behind it. Therefore, algorithms like decision trees and linear or logistic regression are primarily used in industry for interpretability.
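To illustrate the interpretability point before we move on, here is a minimal sketch (assuming scikit-learn is installed; the toy data, labels, and feature names are invented) showing how a trained decision tree exposes the crisp if-then rules it learned, something a deep network does not offer out of the box:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data, invented for illustration: [height_cm, weight_kg] -> label
X = [[150, 50], [160, 55], [170, 80], [180, 90], [165, 60], [175, 85]]
y = [0, 0, 1, 1, 0, 1]  # 0 = "light", 1 = "heavy"

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Unlike a deep network, the learned rules can be printed and read directly
print(export_text(tree, feature_names=["height_cm", "weight_kg"]))
```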
So let us understand: when we already have machine learning, why do we need deep learning? That is, we'll look at the various limitations of machine learning. Now, the first limitation is the high dimensionality of the data. The data that is now generated is huge in size, so we have a very large number of inputs and outputs, and due to that, machine learning algorithms fail. They cannot deal

### [38:17](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=2297s) What is Deep Learning

with high dimensionality of data, or you can say data with a large number of inputs and outputs. There's another problem as well: machine learning is unable to solve crucial AI problems such as natural language processing, image recognition, and things like that. Now, one of the biggest challenges with machine learning models is feature extraction. Let me tell you what features are. In statistics we call them variables, but when we talk about artificial intelligence, these variables are nothing but features. Because of this, complex problems such as object recognition or handwriting recognition become a huge challenge for machine learning algorithms to solve.

Let me give you an example of feature extraction. Suppose you want to predict whether there will be a match today or not. It depends on various features: whether the weather is sunny, whether it is windy, all those things. So we have provided all those features in our data set, but we have forgotten one particular feature, humidity, and our machine learning models are not efficient enough to generate that feature automatically. This is one huge problem, or you can say limitation, of machine learning.

Now, obviously we have limitations, and it wouldn't be fair if I didn't give you the solution to this particular problem. So we'll move forward and understand how deep learning solves these kinds of problems. As the first line on the slide says, deep learning models are capable of focusing on the right features by themselves, requiring little guidance from the programmer. With that little guidance, these deep learning models can generate the features on which the outcome will depend, and at the same time this solves the dimensionality problem as well: if you have a very large number of inputs and outputs, you can make use of a deep learning algorithm.

Now, what exactly is deep learning? Since we know that it evolved from machine learning, and machine learning is nothing but a subset of artificial intelligence, and the idea behind artificial intelligence is to imitate human behavior, the same idea applies to deep learning as well: to build learning algorithms that can mimic the brain. Deep learning is implemented with the help of neural networks, and the idea, or the motivation, behind neural networks is neurons. What are neurons? These are nothing but your brain cells. Here's a diagram of a neuron. We have dendrites, which are used to provide input to the neuron; as you can see, there are multiple dendrites, so that many inputs can be provided to the neuron. Then there is the cell body, and inside the cell body we have a nucleus which performs some function. After that, the output travels through the axon towards the axon terminals, and then this neuron fires the output towards the next neuron. Studies tell us that two neurons are never directly connected to each other; there's a gap between them, called a synapse. So this is basically how a neuron works. On the right-hand side of the slide you can see an artificial neuron. Let me explain that. Here, similar to a neuron, we have multiple inputs. These inputs will be provided to a processing element, like our cell body. 
In the processing element, a weighted summation of the inputs happens. Each input is multiplied with its weight; in the beginning, these weights are randomly assigned. If I take the example of X1, then X1 multiplied by W1 goes towards the processing element, similarly X2 multiplied by W2, and likewise the other inputs. Then summation happens, which generates a sum S that is passed through a function f(S). After that comes the concept of the activation function. What is an activation function? It is nothing but a way to provide a threshold: if your output is above the threshold, then only will this neuron fire; otherwise it won't fire. You can use a step function as the activation function, or you can even use a sigmoid function; that totally depends on your requirement. So this is how an artificial neuron looks, and multiple neurons connected to each other form an artificial neural network.

Once the sum exceeds the threshold, the neuron fires. After that, we check the output. If this output is not equal to the desired output (these are the actual outputs, and we know the desired outputs), we compare both and find the difference between the actual output and the desired output. On the basis of that difference, we again update our weights, and this process keeps repeating until we get the desired output as our actual output. This process of updating weights is nothing but the backpropagation method. So this is neural networks in a nutshell.

Now we'll move forward and understand what deep networks are. Basically, deep learning is implemented with the help of deep networks, and deep networks are nothing but neural networks with multiple hidden layers. What are hidden layers? Let me explain. You have inputs that come in here; this is your input layer. After that, some processing happens, and the result goes to the next nodes, the hidden layer nodes. This is hidden layer one, and every node is interconnected. After that, you have one more hidden layer where some function happens, and as you can see, these nodes are again interconnected. After hidden layer two comes the output layer, and at the output layer we again check whether the output is equal to the desired output or not. If it is not, we again update the weights. So this is how a deep network looks. There can be multiple hidden layers, even hundreds of them. When we talk about machine learning, that was not the case: we were not able to process multiple hidden layers. Because of deep learning, we can have multiple hidden layers at once.
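Before we walk through the image example, here is a minimal sketch in plain NumPy of the single artificial neuron just described: a weighted summation of inputs, a sigmoid activation, and repeated weight updates driven by the difference between the actual and desired output. The AND-gate data, learning rate, and iteration count are arbitrary choices for illustration:

```python
import numpy as np

def sigmoid(s):
    # Activation function: squashes the weighted sum into (0, 1)
    return 1.0 / (1.0 + np.exp(-s))

# Toy task (illustration only): learn the AND function of two inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
desired = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
weights = rng.normal(size=2)   # weights start out randomly assigned
bias = 0.0
lr = 0.5                       # learning rate

for _ in range(5000):
    s = X @ weights + bias     # summation of inputs times weights
    output = sigmoid(s)        # the neuron "fires" per the activation
    error = desired - output   # desired output vs actual output
    # Update weights in proportion to the error (a small gradient step)
    grad = error * output * (1 - output)
    weights += lr * (X.T @ grad)
    bias += lr * grad.sum()

print(np.round(sigmoid(X @ weights + bias), 2))  # approaches [0, 0, 0, 1]
```

Repeating this update until the actual output matches the desired output is exactly the compare-and-adjust loop the lecture calls backpropagation, here in its simplest single-neuron form.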
Now let us understand this with an example. We'll take an image which has four pixels: the top two pixels are bright, that is, black in color, whereas the bottom two pixels are white. What happens is we divide these pixels and send each pixel to its own node, so for that we need four nodes: this pixel goes to this node, this pixel to that node, and so on, down to the node I'm highlighting with my cursor. We then provide them random weights: the white lines represent positive weights, the black lines represent negative weights, and high brightness is taken as a negative value. Now, each node in the next hidden layer is provided with inputs from this layer. This node receives an input with a positive weight from this node and a second input from that node; since both of them are positive, we get this kind of node, and similarly for this one. When I talk about these two nodes, the first node over here gets input from this node as well as from this node; over here we have a negative weight, so the value becomes negative, and we have represented that with black color. Similarly, over here we get one input with a negative weight and another input which again has a negative weight, so accordingly we again get a negative value, and these two become black in color. Next, this node receives a negative value with a positive weight, which stays negative, and another negative value with a positive weight, which again comes out negative, so that is why we get this kind of structure. If you notice, this is nothing but the inverse of the original image. When I talk about this node over here, we are getting a negative value with a positive weight, which is negative, and a negative value with a negative weight, which is positive, so we get something positive here. Now, obviously I want this particular image to get inverted; I want these black strips to come up. So what I'll do is calculate the inverse by providing a negative weight like this: where I provide a negative weight, it comes up, and where I provide a positive weight, it stays wherever it is. After that the network detects, and the output, as you can see, will be a horizontal image: not solid, not vertical, not diagonal, but horizontal. And after that, we calculate the difference between the actual output and the desired output, and we update the weights accordingly. This is just an example, guys.

So this is one example of deep learning. We have images here, and we provide this raw data to the first layer, the input layer. These input layer nodes then determine the patterns of local contrast, which means they differentiate on the basis of colors, luminosity, and all those things. After that, in the following layer, the network determines the face features: it forms the nose, eyes, ears, all those things. Then it accumulates the correct features for the correct face, fixing those features onto the correct face template. So it actually determines the faces here, as you can see, and then the result is sent to the output layer. Basically, you can add more hidden layers to solve more complex problems. For example, I might want to find a particular kind of face, say a face which has large eyes or a light complexion. 
I can do that by adding more hidden layers, and I can increase the complexity at the same time: if I want to find which image contains a dog, for that too I can add one more hidden layer. As the number of hidden layers increases, we are able to solve more and more complex problems. So this is a general overview of how a deep network looks: we first have patterns of local contrast in the first layer; then we use these patterns of local contrast to form face features such as eyes, nose, ears, etc.; then we accumulate these features for the correct face, and finally we determine the image. This is how a deep learning network, or you can say a deep network, looks.

Now I'll give you some applications of deep learning. Here are a few. It can be used in self-driving cars. You must have heard about self-driving cars: the car captures the images around it, processes that huge amount of data, and then decides what action it should take. Should it turn left or right? Should it stop? It decides accordingly, and that will reduce the number of accidents that happen every year. Then we have voice-controlled assistants; I'm pretty sure you must have heard about Siri. All the iPhone users know about Siri, right? You can tell Siri whatever you want, and it'll search it and display the results for you. Then there's automatic image caption generation: for whatever image you upload, the algorithm will generate a caption accordingly. For example, if the image has, say, blue colored eyes, it'll display a "blue colored eyes" caption at the bottom of the image. Then, with automatic machine translation, we can convert English into Spanish, and similarly Spanish to French; basically, you can convert one language to another with the help of deep learning. And these are just a few examples, guys; there are many other examples of deep learning. It can be used in game playing and many other things. And let me tell you one very fascinating thing that I mentioned in the beginning as well: with the help of deep learning, MIT is trying to predict the future. So yeah, it is growing exponentially right now, guys.

Artificial intelligence is changing how businesses operate worldwide. It's opening up exciting new possibilities for startups to innovate, shake up industries, and grow rapidly. Essentially, AI involves teaching computers to do things that normally only humans can do. This could mean anything from understanding human language to spotting

### [49:30](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=2970s) What is LLM (Large Language Model)

patterns in data, predicting trends, and even making decisions. And for startups, embracing AI in their products or services can bring big benefits. It can make things run smoother, give customers better experiences, and help companies make sense of huge amounts of information. And by using AI tools like machine learning, natural language processing, computer vision, and robotics, startups can generate fresh ideas that meet specific customer needs and help them stand out in today's growing market. As we all know, artificial intelligence is a type of technology that allows computers and machines to perform tasks that normally require human intelligence. These tasks include understanding language, recognizing features, making decisions, and even solving problems.

So now let's discuss some of the benefits of using AI in startups. The first thing is efficiency: AI helps startups do tasks faster and smoother by automating repetitive jobs, freeing up time for important work and innovation. Then cost savings: by using AI to do tasks, startups can save money by needing fewer people for certain jobs, making them more profitable, especially for those with limited resources. Next, enhanced customer experience: AI lets startups offer personalized experiences to customers, helping them feel understood and cared for, which boosts satisfaction and loyalty. Then, data-driven decision-making: AI analyzes lots of data quickly, helping startups make smarter decisions about their products, customers, and business strategies. And finally, competitive advantage: by using AI, startups can be more innovative, efficient, and unique in the market, setting them apart from competitors and attracting more customers and investors.

Now, let's discuss some common AI applications in startups, with examples. AI has a wide range of applications that are particularly beneficial for startups looking to innovate and streamline their operations. First, we have customer service and support: startups use AI-powered chatbots with natural language processing algorithms to handle customer inquiries 24/7. For example, a fashion e-commerce startup uses a chatbot to help customers with product searches, order tracking, and return processing. Next, personalization and recommendation: AI analyzes user behavior to suggest products, content, or services personalized to individual preferences using machine learning algorithms, for example in a streaming service. Next, we have marketing and sales automation: using predictive analytics, startups create highly targeted ad campaigns by analyzing customer data. For example, a digital marketing startup uses AI to target specific demographics with personalized ads, improving conversion rates. Then we have data analysis and insights: AI predicts future trends and customer behavior by analyzing historical data using predictive analytics and machine learning algorithms. For example, a fintech startup uses AI to analyze financial data and predict market trends, helping investors make informed decisions. Then comes fraud detection and security: AI identifies unusual patterns that may indicate fraudulent activities by using anomaly detection algorithms and machine learning models. Here too, let's consider an example: a payment processing startup uses AI to detect fraudulent transactions and prevent financial losses.
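As a rough sketch of the anomaly-detection idea behind that fraud example (assuming scikit-learn is installed; the transaction features and amounts are made up for illustration), an Isolation Forest can learn what usual transactions look like and flag the ones that deviate:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Made-up transaction features for illustration: [amount, hour_of_day]
rng = np.random.default_rng(42)
normal = np.column_stack([rng.normal(50, 15, 500),   # typical amounts
                          rng.normal(14, 3, 500)])   # daytime activity
fraud = np.array([[900.0, 3.0], [1200.0, 4.0]])      # large, late-night
X = np.vstack([normal, fraud])

# Isolation Forest learns what "usual" looks like and isolates outliers
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)  # +1 = looks normal, -1 = flagged as anomalous

print("flagged transactions:")
print(X[labels == -1])
```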
Next, we have healthcare and biotech. AI assists in diagnosing diseases by analyzing medical images and patient data using deep learning algorithms. For example, a health tech startup uses AI to analyze X-rays and MRI scans, helping doctors diagnose conditions earlier. Next, we have inventory and supply chain management. AI predicts inventory needs to optimize stock levels and reduce waste; for instance, an online grocery startup uses AI to forecast demand for various products and ensure it stocks the right amounts. Then comes natural language processing. AI translates text and speech in real time, enabling communication across different languages. For example, a travel tech startup uses an AI-powered translation tool to help tourists communicate in foreign countries. Then image and video analysis: AI processes and analyzes visual data using computer vision algorithms for applications like facial recognition and object detection. For example, a security startup uses AI-based facial recognition to enhance building access control systems. Finally, we have robotic process automation. RPA bots powered by machine learning algorithms perform repetitive tasks like data entry and report generation. For example, a fintech startup uses RPA to automate compliance reporting, reducing the time and effort required from human employees.

Now let's explore some common ethical considerations of AI in startups. First, we have bias and fairness. Startups must ensure that their AI systems are not biased and do not discriminate against any individuals or groups; they should actively work to identify and mitigate biases in AI algorithms to ensure fair outcomes for all users. Then we have privacy and data protection. Startups need to prioritize protecting user data and privacy when developing and deploying AI technologies, implementing robust data protection measures and transparent data practices to safeguard user privacy and comply with relevant regulations. Then we have transparency and accountability. Startups should be transparent about how their AI systems work and the decisions they make; they should provide explanations for AI-driven decisions and establish mechanisms for accountability in case of errors or unintended consequences. Safety and reliability are also important. Startups must prioritize the safety and reliability of their AI systems to prevent harm to users or society, conducting thorough testing and validation to ensure that AI technologies perform as intended and do not pose risks to individuals or communities. Finally, there is the impact on jobs and society. Startups should consider the broader societal impact of their AI technologies, including effects on employment, inequality, and social cohesion; they should proactively address potential job displacement and invest in strategies to mitigate negative societal impacts while maximizing the benefits of AI adoption.

Now let us explore some of the top AI tools for startups. First, ChatGPT. ChatGPT can assist startups in several ways: it can help generate new ideas, conduct market research, and refine business plans. It also lets startups create content for websites and social media, offers automated customer support, and provides insights for product development. ChatGPT can advise on strategy and decision-making while serving as a valuable resource for learning and education. It's a versatile tool that supports startups at every stage of their journey, from inception to growth. The next AI tool for startups is Midjourney. Imagine a tech startup wanting to impress investors with its future product idea.
Using Midjourney, they can simply give the AI a few clear prompts, and in no time Midjourney creates a series of images showing exactly how the product will look and function. It's an easy and effective way to bring their vision to life for potential backers. The next tool is GitHub Copilot. GitHub Copilot is a game-changer for startups because it turbocharges the coding process. With Copilot's help, startup developers can write code faster and more accurately. This means they spend less time on routine tasks and more time solving complex problems and building innovative solutions. Ultimately, Copilot boosts productivity and enables startups to iterate on their ideas more efficiently, helping them bring their products to market quicker and stay ahead of the competition.

Now, imagine AI as the brains behind the scenes in the newest wave of industry. It's like having super-smart assistants for machines, helping them work better, faster, and smarter. AI's impact on the real world is vast and rapidly growing; it touches many aspects of our daily lives, even when we don't realize it. The impact of AI is vast and multifaceted, rippling across sectors and aspects of human life. The future of AI is bright, but it is up to us to harness its potential for progress while mitigating its risks; AI can be a powerful tool for positive change in the world. Among the positive impacts AI has brought: efficiency and automation, where AI automates tasks in numerous industries, from manufacturing to customer service, freeing up human workers for more complex jobs and significantly boosting productivity; enhanced decision-making, where AI sifts through massive data sets to identify trends and patterns invisible to humans, enabling better-informed decisions in areas like finance, healthcare, and logistics; and real-time optimization, where AI analyzes data from sensors and machines in real time to continuously optimize products and production processes.

Now, Industry 4.0. In brief, Industry 4.0, also known as the fourth industrial revolution, is like a high-tech makeover for factories. Imagine machines talking to each other, fixing themselves when they break down, and even personalizing products based on real-time data. The birth of Industry 4.0 can be traced to a German initiative in 2011: the concept first emerged in Germany as part of its high-tech strategy. The government recognized the need for modernization in manufacturing and launched initiatives to promote computerization and automation in factories. From those sparks, Industry 4.0 grew into a vast global movement in which advancements in areas like artificial intelligence, the Internet of Things, and robotics have fueled its development, and countries around the world are now exploring ways to implement these concepts in their own manufacturing sectors. Now let's talk about evolution, not revolution: Industry 4.0 isn't a sudden, complete overhaul but rather an ongoing evolution building on previous industrial revolutions. It leverages existing technologies and takes them to the next level of automation and interconnectedness. As for global considerations, the development of Industry 4.0 standards and best practices is a collaborative effort.
International organizations and research institutions play a role in facilitating knowledge sharing and compatibility across different manufacturing ecosystems. Let's look at some use cases, or real-world examples. First, predictive maintenance in aerospace. The challenge: airplanes require meticulous maintenance to fly safely, and traditional methods rely on scheduled inspections, which can be disruptive and expensive. The solution brought by Industry 4.0: sensors embedded in aircraft engines collect real-time data on performance parameters, and AI algorithms analyze this data to predict potential failures before they occur, allowing for predictive maintenance and avoiding costly breakdowns. Next, collaborative robotics in electronics manufacturing. The challenge: certain tasks in electronics assembly involve repetitive, delicate maneuvers, often performed by human workers. Industry 4.0 brought a solution in which collaborative robots, or cobots, take center stage. These robots work alongside human workers, handling tedious tasks like soldering and component placement. This frees up human workers for tasks requiring dexterity, problem solving, and quality control, and the collected data can be analyzed to improve production processes. So that's the magic of AI in Industry 4.0: AI acts as the brain, turning factories into data-driven powerhouses. It crunches numbers, predicts problems, and optimizes processes through real-time decision-making. AI in industry is not about robots taking over; it's about collaboration, where humans and machines interact closely and the focus stays on innovation and problem solving. There are many challenges along the way, but this future is filled with exciting possibilities, not just high-tech marvels but a new era of human-machine collaboration for a better world.

Imagine the world just a few years back, when technology quietly advanced in the background. Then suddenly generative AI hit the headlines. Media channels everywhere were abuzz with articles exploring the capabilities and potential of this new wave of technology. The real tipping point came in March 2023, when OpenAI launched GPT-4, a model so advanced it could outperform 90% of human test takers on the SAT, which shapes college admissions in the US. However, GPT-4's capabilities extended far beyond academics. OpenAI revealed that it also excelled in fields like law and medicine, taking tests and demonstrating proficiency in knowledge-intensive domains. Within just two months of its release, ChatGPT, powered by GPT-4, had captivated over 100 million users. This unprecedented adoption made waves, sparking discussions on AI's role in the future of work, communication, and knowledge sharing. Yet alongside the fascination came a fair share of concerns. Experts began to speculate about the future: could AI's evolution hit a plateau by 2030? The excitement over these advancements was tempered by reports predicting that AI could significantly impact the job market, as tools like ChatGPT found real-world application in areas as high-stakes as legal trials, where lawyers were reportedly using large language models to assist in cases.
In today's transformative era, it's clear that generative AI is reshaping our world. So what exactly is generative AI, and how does it work? More importantly, what does the rise of GPT-4 mean for us, and where might it lead? Simply put, generative AI is a type of artificial intelligence that can produce new content across various formats. Here are a few areas where it's making a huge impact. First, content creation: generating text, articles, and blog posts, saving hours for writers and content creators. Next, image generation: creating visuals or art from text prompts, as seen in models like DALL-E, now on its third version. Next, coding assistance: providing code suggestions and completions with tools like GitHub Copilot, helping developers. Next, language translation: breaking down language barriers in real time through advanced translation models. Then personalized healthcare: enhancing medical treatments by tailoring recommendations based on individual patient data. And finally, marketing optimization: helping businesses shape their marketing strategies. These capabilities make generative AI a versatile technology that is reshaping industries. So let's examine some real-world applications and see how they work. Generative AI is at the core of many revolutionary applications today. We have text generation: from producing entire articles to summarizing content, tools like GPT-4 are transforming content creation across industries. Next, language translation: AI-powered translation tools are improving cross-language communication by understanding context.

### [1:06:00](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=3960s) What Is Generative AI?

Then writing assistance: Grammarly and similar tools refine grammar, tone, and clarity, assisting professionals and students alike. Next, business intelligence: AI models are making business insights accessible, automating support, and enhancing decision-making. Next, music generation: AI is now venturing into creativity, composing unique tracks and musical elements. Finally, machine learning platforms: services like H2O.ai are opening up machine learning, allowing users without deep expertise to create powerful models.

Now that we understand the potential of generative AI, let's go deeper into how it actually works. To understand how generative AI operates, let's break down the process. First, define the objective: start with a clear goal, whether it's generating text, creating an image, or assisting with code. Next, gather and preprocess data: collect and prepare data, ensuring it's clean and structured for the model. Then choose an appropriate model: select or design a model tailored to your needs, sometimes building on pre-existing models. Next, train the model: feed data into the model so it can learn patterns and build its knowledge base. Next, evaluate and refine: fine-tune the model as needed to optimize its efficiency and accuracy. Then test and validate: run tests to measure performance and accuracy, ensuring the model aligns with the objectives. And finally, deploy and iterate: once ready, deploy the model and keep refining it with user feedback and data. This cycle ensures generative AI models stay relevant and improve over time.

Moving on to examples of generative AI tools: many are available today, each with its own specialty. Tools like GitHub Copilot for coding, DALL-E 3 for image generation, and advanced language models like GPT are among the top players. If you're curious to explore these tools further, check out the video linked in the description covering popular generative AI examples. Now let's look at the growing presence of generative AI across different sectors. First, in healthcare, generative AI is projected to reach a $17.2 billion market by 2032, transforming clinical applications and healthcare systems. Next, education: the AI-driven education market is expanding, especially for students, teachers, and administrators, making personalized learning more accessible. And then the workplace: generative AI adoption is rising, with the highest impact in marketing and tech industries, followed by sectors like consulting and healthcare. Each trend shows generative AI becoming integral to diverse fields and reshaping workplace roles. As we look ahead, generative AI is set to transform even more areas, and here are some key impacts we can expect. AI-driven creativity: from art to music, AI will open new creative horizons. AI personalization: tailored user experiences will become the norm. Real-time generation: real-time content generation will improve virtual assistants and automated responses. AI in architecture: AI will assist architects in design and material optimization. Human-AI collaboration: as AI matures, humans and AI will work in tandem to maximize productivity and innovation. And advanced AI models: we will see even more sophisticated models pushing the boundaries of what AI can achieve. Generative AI's future is promising,
with potential benefits that will enhance lives and drive economic growth. Now, are you ready to get hands-on? Stick around for a mini LLM project where we will build a YouTube video summarizer. We will show you how to extract video transcripts, use an LLM to generate summaries, and build a user-friendly interface using Streamlit. Excited? Let's jump in.

First things first, we need a solid environment for our project. Open your terminal and create a new conda environment to keep everything organized. We will write the code in VS Code; you can also use other editors such as PyCharm. In the terminal, run the command to set up the environment: conda create -p venv python==3.10 -y. Here, -p venv specifies the path and environment name, while -y skips the confirmation prompt for a smoother install. While that's setting up, let's create a few essential files. First, a .env file for our API keys and environment variables. Next, a requirements.txt file for the libraries we will need: youtube-transcript-api to extract transcripts from YouTube, streamlit for our front end, google-generativeai for accessing the Google Gemini API, python-dotenv for handling environment variables, and pathlib for better path management. Now, with our files in place, let's set up Google Gemini API access. Head over to Google AI Studio at makersuite.google.com; in the top-left corner you will see the API key option, which takes you to the API key interface. There you will find a button labeled "Create API key": click it, select your model or existing project, and press create, and your API key will be generated. Copy the key, open your .env file, and add it there as a variable, for example GOOGLE_API_KEY, with the key pasted as its value. Once that is done, back in the terminal, activate your new environment with conda activate venv/. Since conda is already installed on my system it doesn't take much time, though a fresh install might take a little while. Now install the dependencies with pip install -r requirements.txt; once you hit enter, the packages listed in requirements.txt will be installed. Awesome. Now let's move on to the main code setup. Open app.py and start with the imports: import streamlit as st, load_dotenv from dotenv, google.generativeai as genai, os, and YouTubeTranscriptApi from youtube_transcript_api (if that last import doesn't work on your system, reinstall it by copying the pip command into your terminal). Then call load_dotenv() to load the environment variables, and configure the API key with genai.configure(api_key=os.getenv("GOOGLE_API_KEY")). A consolidated sketch of this setup appears just below.
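Here is a minimal sketch of that setup, assuming the file names used in the walkthrough (requirements.txt, a .env file holding GOOGLE_API_KEY, and app.py); the exact Python patch version is not spelled out in the video:

```python
# Shell setup described above (run once in the terminal):
#   conda create -p venv python==3.10 -y
#   conda activate venv/
#   pip install -r requirements.txt
#
# requirements.txt lists: youtube-transcript-api, streamlit,
#                         google-generativeai, python-dotenv, pathlib

# app.py -- imports and API configuration
import os

import streamlit as st
import google.generativeai as genai
from dotenv import load_dotenv
from youtube_transcript_api import YouTubeTranscriptApi

load_dotenv()  # reads GOOGLE_API_KEY from the .env file

genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
```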
Now, to prompt the model, we will use a template like this: "Please summarize this YouTube video transcript in 250 words or less, highlighting the key points." Moving on to extracting the YouTube transcript: to fetch it, we will create a function. As you can see on the screen, the function extract_transcript_details takes the YouTube URL as input and retrieves the transcript of the video if available. First, it extracts the video ID from the URL by splitting the URL at the equals sign and taking the second part. Then, using YouTubeTranscriptApi.get_transcript(video_id), it fetches the transcript data for that video ID and combines the text from each part of the transcript into a single string. If there is an error, for example no transcript being available, it prints the error and returns None. So this function effectively converts a YouTube video URL into its textual transcript. Next, let's write a function to generate the summary. The generate_gemini_content function generates content based on the combination of the transcript text and a prompt. It first initializes a Gemini generative AI model, then has the model create content by processing the combined prompt and transcript text, and finally returns the generated text from the model's response. So this function uses the Gemini model to produce AI-generated content from the given text inputs. Now let's build the front end. To connect it all with Streamlit, first add a button labeled "Get detailed notes". The app prompts the user to enter a YouTube link in a text input field and uses extract_transcript_details to get the video transcript. If the transcript is successfully retrieved, the app generates a detailed summary with generate_gemini_content and displays it under the heading "Detailed summary". Now let's give the app a finishing touch and make it look great: you can add headers and a footer, maybe a YouTube icon, customize the colors, and even show the video thumbnail when the link is valid for an even richer experience. Finally, let's test the app by running streamlit run app.py. As you can see on the screen, our app is up and running. Let's check that it works: open YouTube, pick a video you want summarized, copy the link, paste it into the app, and hit generate. As you can see, it runs, and in just a second you get the summary of the video. There is no hard limit in the app, so it handles even longer videos. And that's it: with just a few lines of code, we created a powerful YouTube summarizer using Streamlit and Google Gemini; a consolidated sketch of the code follows below. If you enjoyed building this, stay tuned for more AI projects.
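Putting the pieces together, here is a compact sketch of the rest of app.py as described above; the model name "gemini-pro" and the UI labels are assumptions where the video doesn't spell them out:

```python
# app.py (continued) -- transcript extraction, summarization, and the Streamlit UI
prompt = ("Please summarize this YouTube video transcript "
          "in 250 words or less, highlighting the key points: ")

def extract_transcript_details(youtube_url):
    """Return the full transcript text of a video, or None on error."""
    try:
        video_id = youtube_url.split("=")[1]              # part after the '=' sign
        transcript = YouTubeTranscriptApi.get_transcript(video_id)
        return " ".join(item["text"] for item in transcript)
    except Exception as err:                              # e.g. no transcript available
        print(err)
        return None

def generate_gemini_content(transcript_text, prompt):
    """Ask the Gemini model to summarize the transcript."""
    model = genai.GenerativeModel("gemini-pro")           # model name is an assumption
    response = model.generate_content(prompt + transcript_text)
    return response.text

st.title("YouTube Video Summarizer")
youtube_link = st.text_input("Enter a YouTube video link:")

if st.button("Get detailed notes"):
    transcript_text = extract_transcript_details(youtube_link)
    if transcript_text:
        st.header("Detailed summary")
        st.write(generate_gemini_content(transcript_text, prompt))
```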
So, what exactly are these LLMs, and why are they so powerful? An LLM, or large language model, analyzes and understands natural language using machine learning. Examples include OpenAI's GPT, Google's PaLM, and Meta's LLaMA. These models drive applications such as chatbots, language translation, and more, by learning from extensive data to predict and generate text sequences. But before LLMs, there was a well-known term: the language model. A language model is a machine learning model that uses probability, statistics, and mathematics to predict the next sequence of words. Suppose you have a sentence like "I have a boy who is my ___". If we ask a language model to predict the next word, it considers the context provided by the words before the blank. Based on common usage patterns from its training data, it may predict words like "boyfriend", "brother", or "friend", which fit naturally. However, it's less likely to predict "colleague" or "sibling", as those words don't commonly follow this type of phrase. This process shows how language models predict text by calculating probabilities for each possible word based on its likelihood in context. When a language model is trained on massive amounts of diverse text, it gains a wider vocabulary and a deeper understanding of language, enabling it to make more accurate predictions. For example, if we give it a phrase like "You are a ___ to me", a model trained on extensive data might suggest various fitting words, such as "friend" or "inspiration", based on the sentiment or context it has learned from the data. Reinforcement learning is then used to improve the model's responses over time: by giving feedback, positive or negative, on the responses, we help the model learn which types of responses are preferred in specific contexts. For example, if the model frequently misinterprets tone or intent, reinforcement learning helps adjust its predictions to be more contextually appropriate and aligned with the intended meaning. But what do these models look like under the hood? LLMs are built on neural networks composed of input, hidden, and output layers. The hidden layers process information to learn complex patterns, and more layers mean the model can capture deeper insights. This structure allows LLMs to perform tasks from generating text to complex code completion. Now, how do these layers interact and function in real time? An LLM is based on the transformer architecture, and a transformer uses deep learning to process any information coming to it. Let me tell you a story of three friends. Imagine we have three characters: the first is our friend, the next is Minion Bob, and the third is Gru. Our friend asks Bob, "What's the price of the jet? It must be $50,000." Minion Bob isn't sure, so he goes to Gru and asks, "Is the jet $50,000?" Gru replies, "No, it's $70,000." In this back and forth, Minion Bob is like a neural network layer trying to make an accurate guess.

### [1:21:18](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=4878s) What Is AI Ethics

Each time Bob goes back to Gru, like a model receiving more data or feedback, he gets corrected if his guess is wrong, leading him to refine his response. After the first check, Minion Bob returns to our friend saying, "I guess it's more than $60,000." Our friend assumes it might be around $65,000 and sends Bob back to Gru to verify. Again, Gru corrects him: "No, it's actually $70,000." This process repeats, with Bob adjusting his guess each time. Eventually, he learns that the correct answer is $70,000 and updates his knowledge. Just like Minion Bob, neural networks make initial guesses based on available information; with each feedback loop, like Bob going back to Gru, the model's hidden layers adjust their parameters to refine the guesses, ultimately arriving at the most accurate prediction possible. After being corrected multiple times, Minion Bob's guesses improve until he knows the price is $70,000; similarly, a neural network gradually learns the correct answer through training. Once the network has learned, it can give accurate answers in future cases without checking every time.

Now let's move on to how LLMs work. LLMs begin with the collection of data sets, then tokenize text, breaking it into manageable pieces. Using a transformer architecture, they process data sequences all at once, leveraging vast training data. LLMs contain millions of learned parameters that predict text tokens and generate coherent outputs. Models often undergo pre-training for general knowledge and fine-tuning for specific tasks. Now let's see some practical uses of LLMs. LLMs power content generation, creating anything from articles to code. They excel in language translation, enhanced search engines, personalized recommendations, code development assistance, and sentiment analysis, all of which owe much to LLMs' predictive capabilities. So, are you ready to use all that knowledge in coding and witness how these LLMs come together to drive innovation? Whether through developing applications, analyzing data, or building smart assistants, the gears of technology keep turning to unlock AI's full potential.

Now let us look at our problem statement. One of the difficulties in the healthcare industry is effectively evaluating medical images such as MRIs, CT scans, and X-rays in order to identify anomalies and illnesses. This procedure takes a lot of time and calls for specialized understanding. Automated methods must be developed to help medical personnel recognize possible health problems in medical imaging. In order to provide better patient care, a system that integrates cutting-edge machine learning models with image analysis can greatly help in the early detection of diseases including cancer, infections, and other illnesses. Our method uses generative AI to evaluate medical images and generate a thorough diagnostic report based on the findings. The technology allows users to upload medical images, which the AI model then processes. So let's build our project: a medical image analysis application using Streamlit, Python, and an LLM, Google Gemini AI. This app helps healthcare professionals analyze medical images such as X-rays, MRIs, and CT scans to detect anomalies and diseases. First, let's import the necessary libraries: import streamlit as st (if this doesn't work or shows an error, open the terminal and run pip install streamlit), from pathlib import Path, and import google.generativeai as genai; a sketch of these imports is shown below.
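A minimal sketch of those opening lines (the file name main.py matches the run command used later in this walkthrough):

```python
# main.py -- medical image analysis app: initial imports
import streamlit as st                 # app interface; pip install streamlit if missing
from pathlib import Path               # file-path handling
import google.generativeai as genai    # client for the Gemini API
```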
Here we are importing Streamlit for the app interface, Path from pathlib for handling file paths, and google.generativeai, which lets us interact with the Gemini AI model. Next, we configure Google's Gemini API by setting up our API key; this allows us to connect to the AI model and generate insights from medical images. Before proceeding, let's get an API key from Google. On the left there is an API key option, and after clicking it you will see the create API key option: select your model and create your key. As you can see on the screen, copy this API key and head back to the editor. Now configure the model: call genai.configure with api_key set to your pasted key. Next, we set up the system prompt, which defines the role of the AI model. The prompt specifies that our AI is a medical image analysis system capable of detecting diseases like cancer, cardiovascular issues, neurological conditions, and more. I have already researched the prompt and written it out here; the system prompt goes inside triple quotes. This prompt guides the model to analyze medical images for conditions such as cancer, fractures, infections, and more, making it a valuable tool for healthcare professionals. Now let's configure the model settings for generating responses. We define parameters like temperature and top_k to control the creativity of the model's output: generation_config is a dictionary with temperature set to 1, top_p set to 0.95, top_k set to 40, max_output_tokens set to 8192, and response_mime_type set to "text/plain". The temperature of 1 controls randomness, giving balanced output diversity. top_p of 0.95 uses nucleus sampling, selecting tokens from the top 95% of the cumulative probability distribution for diverse responses. top_k of 40 limits selection to the 40 highest-probability tokens, narrowing possible outputs. max_output_tokens allows for longer responses by capping the generated text at 8,192 tokens. And response_mime_type specifies the output format as plain text. For more information, read the Google Gemini documentation. Next, we also configure safety settings to ensure the model doesn't generate harmful content; for example, we block categories like harassment, hate speech, and sexually explicit content. Each setting has two parts, a category and a threshold, repeated for each blocked category. Now let's set up the layout for our Streamlit application: we configure the title and layout of the page and add a logo to make the interface more user friendly. First, call st.set_page_config with page_title set to "Diagnostic Analytics" and page_icon set to a robot icon. A sketch of this configuration follows below.
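Here is a sketch of that configuration with the values read out above; the system prompt is abridged, and the safety-setting category names follow the google-generativeai SDK's documented string constants:

```python
# Model configuration for the medical image analysis app (sketch)
genai.configure(api_key="YOUR_GOOGLE_API_KEY")  # paste the key generated earlier

system_prompt = """You are a medical image analysis system. Analyze the uploaded
medical image for conditions such as cancer, fractures, infections, and
cardiovascular or neurological issues, and produce a detailed report."""
# (abridged -- the video uses a longer, carefully researched prompt)

generation_config = {
    "temperature": 1,            # balanced randomness
    "top_p": 0.95,               # nucleus sampling over the top 95% probability mass
    "top_k": 40,                 # consider only the 40 most likely tokens
    "max_output_tokens": 8192,   # allow long reports
    "response_mime_type": "text/plain",
}

safety_settings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
]

st.set_page_config(page_title="Diagnostic Analytics", page_icon=":robot:")
```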

### [1:29:55](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=5395s) What is Responsible AI

Next, create the columns: col1, col2, col3 = st.columns([1, 2, 1]), and within col2 we will show the Edureka and medical images; this demonstrates how to display images with Streamlit. Type st.image("edureka.png", width=200), then copy and paste it for the medical image: st.image("medical.png", width=200). Here we are using Streamlit's columns to center the logo and title, which makes the app look professional and visually appealing. Next, let's allow the user to upload medical images for analysis. We use Streamlit's file_uploader widget to accept image files in PNG, JPG, or JPEG format: uploaded_file = st.file_uploader("Please upload the medical image for analysis", type=["png", "jpg", "jpeg"]). Then add submit_button = st.button("Generate image analysis"). When the user uploads a file and clicks the generate image analysis button, the app processes the image and prepares it for analysis. Once the user submits the image, we send it to the AI model, and the model generates a response based on the prompt and image, which we then display in the app. As you can see on the screen, the flow works like this: if submit_button runs the code when the button is pressed; image_data = uploaded_file.getvalue() gets the raw image data from the uploaded file; image_parts creates a list with the image data in a structured format; and prompt_parts combines the image data and the text prompt for the model. This part of the code sends the image and text prompt to the model to generate a response, and st.write displays the model's response in the app. So here we use the image data and the system prompt to generate content with the Gemini AI model, and the result is displayed as a detailed report with insights about the medical image. Now it's time to test the code. Open the terminal and type streamlit run main.py; once you hit enter, it opens our app's interface. And there you go, the model is ready. Here's a live demo of the app: we upload a sample image, and the app analyzes it and provides a detailed diagnosis based on the AI model's insights. This is how we use Streamlit and Google's Gemini AI model to create a medical image analysis app that can help medical practitioners by offering precise and thorough analysis of medical images. Now for testing: take an image of some condition, upload it from your computer, select it, and press the generate button. As you can see, it runs, generates an excellent response, and can help doctors assist their patients, saving time and money. So this is how we built a real-time medical diagnostic helper using Streamlit, Python, and Google Gemini AI; the front-end flow is sketched below.
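Here is a compact sketch of that front-end flow, reusing the system_prompt, generation_config, and safety_settings from the previous step; the MIME type and model name are assumptions, since the video doesn't read them out:

```python
# main.py -- front end and analysis flow (sketch; config comes from the previous step)
col1, col2, col3 = st.columns([1, 2, 1])
with col2:
    st.image("edureka.png", width=200)
    st.image("medical.png", width=200)

uploaded_file = st.file_uploader(
    "Please upload the medical image for analysis",
    type=["png", "jpg", "jpeg"],
)
submit_button = st.button("Generate image analysis")

if submit_button and uploaded_file is not None:
    image_data = uploaded_file.getvalue()            # raw bytes of the upload
    image_parts = [{"mime_type": "image/jpeg", "data": image_data}]  # assumed MIME type
    prompt_parts = [image_parts[0], system_prompt]   # image plus text prompt

    model = genai.GenerativeModel(
        model_name="gemini-1.5-flash",               # model name is an assumption
        generation_config=generation_config,
        safety_settings=safety_settings,
    )
    response = model.generate_content(prompt_parts)
    st.write(response.text)                          # show the generated report
```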
Recently, DeepSeek shocked the community with its groundbreaking AI release, sparking fierce debates about innovation, performance, and cost efficiency. In response, OpenAI made a bold strategic move: they launched the new o3-mini model, a masterstroke aimed squarely at overcoming the challenges posed by DeepSeek's latest breakthrough. Launched on January 31, 2025, this model represents a huge leap forward in efficiency and performance, especially in science, math, and coding. Previewed back in December 2024, o3-mini is designed to deliver exceptional reasoning while keeping costs low.

Now, let's look at the key features. OpenAI o3-mini is the newest entry in the reasoning series, and it's built for both ChatGPT and API integrations. Here's what sets it apart. First, optimized STEM reasoning: tailored for technical domains, o3-mini is engineered to excel in math, coding, and science, ensuring reliable performance on even the toughest challenges. Next, developer-ready features: it introduces key developer functionality such as function calling, structured outputs, and developer messages. Next, flexible reasoning effort: developers can choose from three reasoning modes, low, medium, and high, which let the model think harder on complex tasks or prioritize speed when needed (see the short API sketch after the benchmark rundown below). Additionally, while o3-mini maintains the low cost and reduced latency of its predecessor, OpenAI o1-mini, it raises the bar in performance and developer versatility. Note that although o3-mini doesn't support vision capabilities, it's an excellent choice for pure reasoning tasks.

Now let's take a closer look at how o3-mini performs on a variety of challenging benchmarks. First, in competition math (AIME 2024), with high reasoning effort o3-mini achieves an impressive accuracy of 87.3%, outperforming previous models and setting a new standard for math problem solving. Next, PhD-level science (GPQA Diamond): the model achieves 79.7% accuracy with high reasoning effort, showcasing its aptitude for tackling complex scientific queries. Next, FrontierMath: when prompted to use a Python tool, o3-mini with high effort successfully solves over 32% of problems on the first attempt, including more than 28% of the most challenging T3 problems. Next, competitive programming (Codeforces): the model attains an Elo rating of 2130 at high reasoning effort, marking a significant improvement in competitive programming tasks. Next, software engineering (SWE-bench Verified): achieving a top accuracy of 49.3%, o3-mini stands as the highest-performing released model on SWE-bench Verified tasks. Next, LiveBench coding: with medium reasoning effort, o3-mini not only surpasses its predecessor but further extends its lead when switched to high reasoning effort, underlining its coding efficiency. Finally, in terms of general knowledge, the latest results show that o3-mini leaves o1-mini in the dust across a broad range of knowledge tasks. Just look at the numbers: whether it's MMLU, MATH pass@1, or fact-based QA, o3-mini consistently delivers higher accuracy, proving that the newer model isn't just good at reasoning but is also expanding its factual recall and problem-solving skills.
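As a quick illustration of the reasoning-effort modes described above, here is how such a request might look with the OpenAI Python SDK; treat this as a sketch, since parameter availability depends on your SDK version and API tier:

```python
# Sketch: calling o3-mini with an explicit reasoning effort.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # one of "low", "medium", "high"
    messages=[
        {"role": "user",
         "content": "Prove that the sum of two even numbers is even."},
    ],
)
print(response.choices[0].message.content)
```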

### [1:38:09](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=5889s) Artificial Intelligence with Python

Next is the human preference evaluation. According to external expert testers, o3-mini isn't just beating o1-mini on paper: it's also delivering clearer and more accurate answers in the real world. Across these benchmarks, testers preferred o3-mini's responses 56% of the time and saw a 39% reduction in major errors on tough real-world questions. That's a massive leap in reliability, especially for STEM-related tasks. Next, let's talk about speed, latency, and efficiency. Speed is a crucial factor in today's AI applications, and with o3-mini you can expect a significant boost in performance. First, response time: in A/B tests, o3-mini delivers responses 24% faster than its predecessor, averaging 7.7 seconds compared to 10.16 seconds. Next, latency: the model reaches the first token 2,500 milliseconds faster than o1-mini, ensuring a more responsive user experience. These improvements make o3-mini an excellent choice for developers who need both efficiency and precision. Next is safety and alignment. Safety has always been a top priority for OpenAI. With o3-mini, one of the key innovations is the use of deliberative alignment, a technique that trains the model to evaluate human-written safety specifications before generating responses. This approach not only helps o3-mini handle challenging safety and jailbreak evaluations more effectively than GPT-4o, but also ensures that responses remain both accurate and secure. Extensive testing, including disallowed-content and jailbreak evaluations, confirms that o3-mini meets rigorous safety standards. All right, let's move on to availability, and what's next. OpenAI o3-mini is available right now across multiple platforms for ChatGPT users: Plus, Team, and Pro users already have access, and free-plan users can try o3-mini by selecting Reason in the message composer. Next, API integration: the model is rolling out in the Chat Completions API, Assistants API, and Batch API for select developers in API usage tiers 3 to 5. Next, enterprise rollout: enterprise-level access will be available in February, with o3-mini replacing o1-mini in the model picker, offering higher rate limits and lower latency. Looking ahead, the release of o3-mini is another important step in OpenAI's mission to deliver cost-effective, high-quality intelligence that scales across diverse technical domains. With per-token pricing reduced by 95% since the launch of GPT-4, OpenAI continues to democratize access to advanced AI technology. And with this, we have come to the end of this segment on the OpenAI o3-mini model. We have seen how o3-mini delivers lightning-fast responses and robust STEM reasoning and directly counters the recent DeepSeek release, a move many are calling nothing short of a masterstroke. So here's the big question: is OpenAI's release of o3-mini the master move that finally overcomes the challenge posed by DeepSeek's breakthrough? What do you think? Will this decisive step secure OpenAI's lead, or is there more to come in the global AI race? Let us know in the comments.

In today's ever-evolving digital era, we have to agree that AI has become an integral part of our daily lives. For example, you wake up and ask your virtual assistant to schedule your day.
You get in your car and a navigation app suggests the best route based on real-time traffic. At work, you use AI tools like ChatGPT, Claude, or Google Gemini to help you with research, writing, and analysis. AI is truly embedded in so many aspects of our modern existence. But have you ever stopped to think about the impact AI is having? By impact, I mean that directly or indirectly, AI is making decisions that shape your day-to-day life. Imagine going for a job interview where an AI system scores your resume, determining whether you get shortlisted or not. In healthcare, AI is increasingly used for diagnosis and treatment planning. In finance, AI algorithms analyze data and make trading decisions. In criminal justice, AI tools are used for predictive policing and risk assessment. But do you really think that while making all these decisions, AI will remain accurate and unbiased? This question is what AI ethics is all about, and in this part we will discuss the answer and understand what exactly AI ethics is. AI ethics refers to the principles and practices that ensure artificial intelligence systems are developed and used in a way that is ethical, unbiased, transparent, and accountable. AI ethics is a subset of responsible AI, which covers the principles and practices around developing and deploying artificial intelligence in a way that is ethical, trustworthy, and benefits humanity as a whole. Today, AI systems might seem unbiased, but if not designed carefully they can actually perpetuate and amplify societal biases. A well-known example is Google Gemini. Gemini made some really bad mistakes in the past: it depicted German soldiers from World War II as people of various races, which is not accurate at all, and it misrepresented historical figures like America's founding fathers and the Pope, showing them as different ethnicities than they really were. Even worse, Gemini said something outrageous: that Elon Musk posting memes online was more harmful than Adolf Hitler. In India, it falsely claimed that PM Modi enforced fascist policies, yet when asked about other world leaders, like the president of Ukraine, its responses were not as controversial. These big mess-ups by an esteemed platform like Google Gemini show why AI ethics is so important. So the question becomes: how can we build trustworthy AI? In order to build trustworthy AI, there are five pillars to keep in mind: fairness, explainability, robustness, transparency, and privacy protection. Let's understand them one by one. First, fairness: ensuring that AI systems treat everyone fairly and do not discriminate against underrepresented or disadvantaged groups. For example, take an AI-powered recruitment system that ranks job candidates without discriminating based on race, gender, age, or other protected characteristics, achieved through carefully curated training data and bias testing. Then comes explainability: being able to understand and explain how the AI arrived at its decisions or outputs, based on the training data and machine learning methods used. For example, a loan-application AI not only outputs a decision but can show the key factors, like income, credit score, and debt, that drove its approval or denial recommendation. Then we have robustness.
AI systems need to be secure, reliable, and safe, preventing them from being hacked, manipulated, or behaving in unintended ways that could harm individuals. For example, a self-driving car AI built with safety constraints and backups to prevent dangerous behavior if the system malfunctions or is hacked. Then comes transparency. The use of AI to drive decisions that impact people should be proactively disclosed, and details about the AI's purpose, training, and functionality should always be shared openly. For example, a healthcare provider disclosing upfront that an AI system helps analyze medical scans, along with information about its capabilities and limitations. Lastly, we have privacy protection. Any personal data used to train AI models must have robust data governance, de-identification, and protection of individuals' privacy. An example is an AI virtual assistant that does not store or use personal conversation data for training unless users explicitly opt in to share their data. If AI systems do not follow these rules, they might perpetuate unfairness, make decisions without anyone checking whether they are right, accidentally hurt people, invade privacy, and make people lose faith in them. These examples show why it is important to use AI in a fair and responsible way that helps everyone.

Now, after covering the pillars, how do enterprises determine whether their solution is at risk of crossing some ethical boundary? Let's check that out. Imagine you work for a large e-commerce company that uses AI to recommend products to customers based on their browsing and purchase history. While this AI system is designed to enhance the customer experience and increase sales, there are potential ethical concerns that need to be addressed. As the ethics lead at your company, you realize it's crucial to establish clear guidelines and principles to ensure your AI solution operates within ethical boundaries. You propose the following three core principles. First, AI should augment human intelligence, not replace it: the AI recommendations should assist customers in making informed decisions, not manipulate or deceive them. Second, data and insights belong to their creators: customer data should be used transparently and with consent, respecting privacy and ownership. Third, solutions must be transparent and explainable: there should be visibility into how the AI system is trained, what data is used, and how it generates recommendations. To ensure these principles are followed, you suggest a mapping exercise. This involves listing all the features and intended benefits of the AI recommendation system and then identifying potential negative consequences or ethical risks associated with each feature. For example, while the AI system's ability to personalize recommendations based on browsing history can enhance the customer experience, it could also lead to privacy concerns or promote biases if the training data is not diverse enough. After mapping out benefits and risks, you work with your team to implement specific rules the AI system must follow. These might include: customer data will not be sold to third-party advertisers; the AI system will be trained on a diverse and inclusive data set to avoid biases; and customers will have the option to opt out of personalized recommendations or access explanations for why certain products are recommended.
To further mitigate ethical risks, you leverage open-source tools like InterpretML or AI Fairness 360, which help detect and mitigate bias in machine learning models. You also explore tools to ensure compliance with privacy regulations and to measure the uncertainty or confidence levels of the AI system's recommendations. Through this proactive approach involving clear principles, risk assessment, rules, and monitoring tools, your e-commerce company can build trust with customers and stakeholders, ensuring that the AI solution enhances the customer experience while operating within ethical boundaries. According to Exploding Topics, around 77% of companies are either using or exploring the use of AI, which very evidently shows that ethics in AI affects not only individuals but also the organizations that depend heavily on AI tools. Now imagine the consequences if the AI models that companies use lack proper ethical safeguards: whole industries would be hugely impacted, and that is what makes learning AI ethics so important.

In day-to-day life, AI has become an important part of our lives. As humans, we have responsibilities to fulfill, and it is the same for AI: AI should follow guidelines, principles, and policies set by the organization so it can reflect its values and mission. We humans may forget our responsibilities, but AI doesn't; to make sure it stays accountable, we create principles and policies that are accurate and friendly towards humans. So let me ask you a question: do you need just an AI, or a responsible AI? Welcome to this part on what responsible AI is; we'll cover some use cases and other global aspects that are forecasted. So, what is responsible AI? Responsible AI is like teaching a robot good manners. It means making sure that when we build and use AI, it is fair to everyone and does not cause harm. Imagine if a robot could learn from people's actions like a kid learning from grown-ups: responsible AI teaches robots to be accountable, fair, and respectful, just as we teach kids to be polite. It's about making sure AI does not discriminate, respects privacy, and follows rules, like not sharing secrets. Just as we want our kids to grow up to be good and responsible citizens, we want AI to be a good digital citizen too. So why is responsible AI important? Responsible AI has many important aspects. It helps build trust with customers and stakeholders, improves operational and communication efficiency, and can help drive revenue. Responsible AI can reduce issues such as AI being biased or unsafe, and it ensures that AI is designed, deployed, and used ethically and legally. It also protects consumer privacy and prevents discrimination and harm. The main goal of responsible AI is to employ AI in a safe, trustworthy, and ethical fashion. So let's look at a few common principles of responsible AI that every organization follows: fairness, reliability and safety, privacy and security, accountability, and transparency. Fairness: AI systems should treat all people fairly. They shouldn't be biased, giving different answers to different groups, and they should be accurate with the information they provide; if an AI is not fair, it will have trust issues with its consumers. Reliability and safety:
AI systems should perform reliably, consistently, and safely, both under normal circumstances and in unexpected conditions. To make AI reliable and safe, it is important to think about how things could go wrong, how the AI might react, how people can fix it quickly, and to always prioritize keeping humans safe. Privacy and security: AI systems should be secure, respect privacy, and resist attacks. Just as we have rules about how we can use someone's personal belongings, AI systems have rules about how they can use people's personal information; these rules are there to ensure that your information stays safe and isn't misused. Accountability: accountability in responsible AI ensures that there is a clear line of responsibility for the deployment, development, and outcomes of an AI system. The more advanced and independent AI systems become, the more accountable the organizations behind them are for ensuring they are used ethically and responsibly, especially when people's safety is at stake. Transparency: AI systems should be understandable. Achieving transparency ensures that AI processes and decisions are open, so it is clear how and why a decision was made.

So let's see how responsible AI is implemented and how it works. There are a few steps to implement responsible AI. Step one, defining goals: make sure the AI performs the tasks the organization needs, meets expectations, and helps achieve its goals. Step two, collecting data: gather the requirements and information for the AI and feed it the data. Step three, selection of tools: different tools are used so the AI can enhance its capabilities, enabling it to perform specific tasks more effectively, accurately, and efficiently. Step four, algorithm creation or model selection: creating models for responsible AI means designing and selecting systems that are fair, ethical, and do not cause harm. Step five, training the algorithm or model: in this step, we teach the system to make fair and ethical decisions without causing harm. Step six, evaluation of the AI system: check that the system makes fair and ethical decisions. Step seven, deployment of the AI solution: this step ensures that the AI system operates fairly and ethically in real-world situations. So how does responsible AI work? Responsible AI works by different principles and rules: being fair, transparent, safe, and ethical while respecting people's privacy. Different companies set standards they follow to make AI more responsible, and the AI learns from the patterns and features of data so it can be more responsible and less biased when giving information.

So what is the difference between responsible AI and ethical AI? Responsible AI aims to create AI for safe, ethical, and transparent interaction with users, whereas ethical AI aims to create AI that makes morally sound decisions and treats all users fairly. Responsible AI can be applied to various sectors, from healthcare to finance, whereas ethical AI centers on values such as fairness, accountability, and transparency. Responsible AI strives for a balanced experience that is both ethical and efficient, whereas ethical AI prioritizes a fair and unbiased experience, potentially at the expense of speed. Responsible AI involves a multidisciplinary approach, including legal experts for governance; it is the same for ethical AI, which also requires a multidisciplinary team, but one that focuses more on ethics and moral awareness.
So let's see some examples of where responsible AI is used. Microsoft and IBM have each developed their own set of rules and guidelines to make sure that the AI technologies they use and create are responsible and fair. Microsoft has its AI committee and Office of Responsible AI, which set company-wide rules for responsible AI. They provide guidelines for how humans and AI should interact, how AI should be designed inclusively, and how to ensure fairness in AI systems. They also have templates for things like data sheets and guidelines for AI security. As for IBM, it has its own ethics board focused on AI. They work on trust, transparency, and the ethical use of AI; they also provide resources for everyday ethics in AI, support open-source AI projects, and do research to make AI more trustworthy. To conclude this part: responsible AI makes sure that AI systems are fair, safe, and transparent, and many companies have created rules, guidelines, and standards to ensure that AI is used ethically and does not discriminate.

So why exactly are we using Python for artificial intelligence? Why not some other language? There are a couple of reasons why Python is so popular when it comes to AI, machine learning, and deep learning. The first reason is that less coding is required. Artificial intelligence involves a lot of algorithms: if you have to implement AI for any problem, there are going to be tons of machine learning and deep learning algorithms involved, and testing all of these can become a very tiresome task. That's where Python comes in handy. The language supports a "check as you code" style that eases the process of testing: you can check your program as you write it, and errors or mistakes in your code are surfaced as you go, so testing becomes much easier in Python. The next important reason is support for pre-built libraries. Python is very convenient for AI developers because machine learning and deep learning algorithms are already implemented in libraries, so you don't have to sit down and code each and every algorithm yourself, which would be very time-consuming. If you want to run an algorithm, all you have to do is load the library and call the function; it's as simple as that. The next reason is ease of learning. Python is arguably the simplest programming language; if you ask me, it's the easiest. It reads much like English: read a couple of lines of Python and you'll understand what the code is doing. It has a very simple syntax, and that syntax can be used to solve simple problems, like concatenating two strings, as well as complex problems, like building machine learning and deep learning models. So ease of learning is a major factor in why Python is chosen for artificial intelligence. Next, we have platform independence.
So a good thing about Python is that you can get your project running on different operating systems. Right? Now, what happens when you transfer your code from one operating system to another is that we find a lot of dependency issues. To solve that, Python has a couple of packages, such as one known as PyInstaller. Right? PyInstaller will take care of all the dependency issues when you're transferring your code from one platform to another. So all of that support is provided by Python.

The last reason is massive community support. This is a very important point, because it is important that you have a large community that will help you out with any errors or any sort of problems in your code. Right? Python has several communities, forums and groups on Facebook. So if you have any doubts regarding an error, you can just post it in these groups and you'll have a bunch of people helping you out. Right? So guys, these are a couple of reasons as to why Python is chosen for artificial intelligence. It's actually considered the most popular and the most used language for data science, AI, machine learning and deep learning. To prove that to you, here is a stat from Stack Overflow. Stack Overflow recently stated that Python is the fastest growing programming language. If you look at the graph, you can see that it has taken over JavaScript, Java, C#, C++ and PHP, right? So Python is actually growing at an exponential rate, especially when it comes to data science and artificial intelligence. A lot of developers are very comfortable with the Python language because, first of all, it's a general-purpose language. So most developers are already aware of Python, and being able to use the same language to solve complex problems like artificial intelligence, machine learning and deep learning is something every developer wants. Right? They want a simple language in which to code all the complex algorithms and models. Right? So that's why Python is the best choice for artificial intelligence.

For those of you who are not aware of Python programming and don't know much about Python, I'm going to leave a couple of links in the description box. Right? You can go through those links and study a little bit more about how Python works or how the coding part works. I'm going to be focusing mainly on artificial intelligence, and I'll be showing you a lot of demos. So those of you who are not aware of Python, make sure you check the description box.

Right. Next, I'm going to discuss the different Python packages for artificial intelligence. Now, these are the packages that are specifically for machine learning, deep learning, natural language processing and so on. So let's take a look at all these packages. First we have TensorFlow. If you are currently working on a machine learning project in Python, then you must have heard of this popular open-source library known as TensorFlow, right? This library was developed by Google in collaboration with the Brain team. TensorFlow is used in almost every Google application for machine learning. Now, let me just discuss a few features of TensorFlow. It has a responsive construct, meaning that with TensorFlow we can easily visualize each and every part of the graph, which is not an option when you're using other packages such as NumPy or scikit-learn. Right? Another feature is that it's very flexible.
Now, one of the most important TensorFlow features is that it is flexible in its operability, meaning that it has modularity: for the parts which you want to make standalone, it offers you that option, right? It's very flexible in that way. It'll give you exactly what you want. Now, a good feature of TensorFlow is that you can train on both CPU and GPU, right? So for distributed computing, you have both these options. Also, it supports parallel neural network training. TensorFlow offers pipelining in the sense that you can train multiple neural networks on multiple GPUs, which makes the models very efficient on any large-scale system. Right? So parallel neural network training is supported by TensorFlow; this is one of its most important features. Apart from this, it has a very large community, and needless to say, if it has been developed by Google, then there's already a large team of software engineers who work on stability improvements and all of that. Right?

The next library I'm going to talk about is scikit-learn. Now, scikit-learn is a Python library that is associated with NumPy and SciPy. Right? That's why it has the name scikit-learn. This is considered to be one of the best libraries for working with complex data, and there are a lot of changes being made in this library. One modification is the cross-validation feature, which provides the ability to use more than one metric. Right? Cross-validation is one of the most important and easiest methods for checking the accuracy of a model, and it is implemented in scikit-learn. Apart from that, there is a large spread of algorithms that you can implement by using scikit-learn, right? These include unsupervised learning algorithms, starting from clustering, factor analysis and principal component analysis, all the way to the unsupervised neural networks. scikit-learn is also very essential for feature extraction in images and text. So mainly, scikit-learn is used for implementing all the standard machine learning and data mining tasks, like reducing dimensionality, classification, regression, clustering and model selection.

Next up we have NumPy. Now, NumPy is considered one of the most popular machine learning libraries in Python. Let me tell you that TensorFlow and other libraries make use of NumPy internally for performing multiple operations on tensors. The most important feature of NumPy is the array interface: it supports multi-dimensional arrays. Right? That's one of the most important features of NumPy. Another feature is that it makes complex mathematical implementations very simple. Right? It's mainly known for computing mathematical data. So NumPy is a package that you should be using for any sort of statistical analysis or data analysis that involves a lot of math. Apart from that, it makes coding very easy, and grasping the concepts is extremely easy with NumPy. NumPy is mainly used for expressing images, sound waves and other mathematical computations.

All right, moving on to our next library, we have Theano. Theano is a computational framework which is used for computing multi-dimensional arrays. Right? Theano actually works very similarly to TensorFlow, but the one drawback is that you can't fit Theano into production environments. Apart from that, Theano allows you to define, optimize and evaluate mathematical expressions that involve multi-dimensional arrays. Right? This is another library that lets you implement multi-dimensional arrays.
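Since both NumPy and Theano are built around multi-dimensional arrays, here is a minimal NumPy sketch of that array interface; the numbers are arbitrary, just to show the idea.

```python
# A minimal sketch of NumPy's array interface: a 2-D array built
# from a nested list, plus the one-liner math it enables.
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
print(a.shape)         # (2, 2) - the dimensions of the array

print(a * 2)           # element-wise multiplication
print(a @ a)           # matrix multiplication
print(a.mean(axis=0))  # column-wise mean, handy for statistics
```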
Features of Theano include tight integration with NumPy: an advantage of Theano is that you can easily use NumPy arrays inside Theano, right? That's why there's a connection between Theano and NumPy, because both of them effectively use multi-dimensional arrays. Another feature is transparent use of the GPU: performing data-intensive computations is much faster in Theano because of its use of the GPU. Right? Theano also lets you detect and diagnose multiple types of errors and any sort of ambiguity in the model. So guys, Theano was actually designed to handle the types of computations required for large neural network algorithms, right? It was mainly built for deep learning and neural networks. It was one of the first libraries of its kind, and it is considered an industry standard for deep learning research and development. Theano is used in multiple neural network projects, and its popularity is only going to grow with time. Right? A lot of people actually haven't heard of Theano, but let me tell you that this is one of the best ways to implement deep learning and neural network models.

Moving on, we have Keras. Now, Keras is considered to be one of the most popular Python packages. It provides some of the best functionality for compiling models, processing your data sets and visualizing graphs. It is also popular for the implementation of neural networks. Right? It is considered to be the simplest package with which you can implement neural networks. In fact, in today's demo for deep learning, we'll be using Keras in order to understand how neural networks work. A few of the features of Keras: it runs very smoothly on both CPU and GPU, and it supports almost all the models of a neural network, right? Fully connected, convolutional, pooling, recurrent, embedding, all of these models are supported by Keras. And not only that, you can combine these models to build more complex models. Keras is completely Python based, which makes it very easy to debug and explore. Right? Since Python has a huge community of followers, it's very simple to debug any sort of error that you find while implementing Keras.

So the libraries that I discussed so far were dedicated to machine learning and deep learning. For natural language processing, we have the most famous library, known as the Natural Language Toolkit (NLTK), which is an open-source Python library mainly used for natural language processing, text analysis and text mining. Its main features include studying and analyzing natural language text in order to draw useful information from it. It performs text analysis and sentiment analysis through tasks such as stemming, lemmatization, tokenization and so on. Now, don't worry if you don't know what any of those terms mean; I'll be discussing all of them with you by the end of today's session. So guys, these were a couple of Python-based libraries which are very essential for implementing machine learning, deep learning and artificial intelligence when you're using Python. Right? These libraries are perfect for implementing AI. So guys, if any of you have doubts regarding the libraries, or if you want to learn more about them, I will leave a couple of links in the description box. You can go through those videos as well.

So now let's move on to the main topic of discussion, which is artificial intelligence. Now, before we get started with the demand for artificial intelligence, let me tell you that AI was invented long ago.
AI goes back to the middle of the 20th century. It was not something that was recently invented, even though AI has recently gained a lot of popularity. We can say that in the past decade AI has gained the maximum popularity, but it was actually conceived decades ago. In particular, in the year 1950 there was somebody known as Alan Turing. I'm sure a lot of you have heard about the Turing test. The Turing test is basically used to determine whether or not a machine is artificially intelligent, meaning whether a machine can think intelligently like a human being. Right? This was the first proposition, and it was one of the most important breakthroughs in artificial intelligence. Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think. Right? So the Turing test was the first serious proposal in the philosophy of artificial intelligence. This was in 1950. After this we had eras of AI; we had game AI in 1951. Now, since the emergence of AI in the 1950s, we have seen an exponential growth in its potential. Right? AI covers domains like machine learning, deep learning, neural networks, natural language processing, knowledge bases and so on. It has also made its way into computer vision and image processing.

But the question is, if AI has been here for over half a century, why has it suddenly gained so much importance? Right? Why are we talking about artificial intelligence now? The main reasons for the vast popularity of AI are the following. The first reason is more computational power. Now, AI requires a lot of computing power. Recently many advances have been made, and complex deep learning models can be deployed, and one of the greatest technologies that made this possible is the GPU. Since the invention of GPUs, we can compute much more with our computers. Initially, we could barely process 1 GB of data, right? We only had hard disks to store additional memory and all of that. Now, our computers can process tons and tons of data. So now we have more computational power, which is one of the main reasons why AI became so popular. By having more computational power, it becomes much easier to implement artificial intelligence.

The next reason is more data. Now, big data is one of the most important reasons behind the development of artificial intelligence. AI, data science, machine learning, deep learning, all of these processes are here only because we have a lot of data at present. The main idea behind all these technologies is to draw useful insights from data. Since we started generating a lot of data, we needed to find a method that can process this much data and draw useful insights from it, such that it benefits an organization or grows a business. That's why artificial intelligence and machine learning come into the picture. Right? So more data led to the demand for artificial intelligence.

Apart from this, we also have better algorithms now. Right? We have state-of-the-art algorithms. Most of them are based on the idea of neural networks, and these are constantly getting better. Neural networks are actually one of the most significant discoveries in artificial intelligence, because with neural networks you can take in huge amounts of input data across many layers, right? You can take in a lot of input data to perform computations. So through neural networks we're actually able to solve a lot of problems, including healthcare problems, fraud detection problems and so on.
Another reason is broad investment. Universities, governments, startups and tech giants like Google, Amazon and Facebook are all investing heavily in artificial intelligence, which has also led to the demand for AI. So AI is rapidly growing, both as a field of study and as a contributor to the economy, and I think this is the perfect time for you to get into the field of artificial intelligence, because right now AI is in really high demand. AI, machine learning, data science, all of these are in really high demand at present, right? So this is the perfect time for you to get started with artificial intelligence.

Now, let me tell you that the term artificial intelligence was first coined in the year 1956 by a scientist known as John McCarthy. John McCarthy defined artificial intelligence as the science and engineering of making intelligent machines. Now, let me give you a more descriptive definition of artificial intelligence. Artificial intelligence is the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision making and translation between languages. In a sense, artificial intelligence is a technique for getting machines to work and behave like humans. In the recent past, AI has actually been able to accomplish this by creating machines and robots that have been used in a wide range of fields, including healthcare, robotics, marketing, business analytics and many more. Right? So AI is actually a very vast field. It covers a lot of domains, including machine learning, natural language processing, knowledge bases, deep learning, computer vision and expert systems. So these are a few domains that AI covers.

Now let's move on and discuss the different types of artificial intelligence. Now guys, AI is structured along three evolutionary stages. You can say that AI develops along three evolutionary stages: we have something known as artificial narrow intelligence, followed by artificial general intelligence, and finally we have artificial super intelligence. Artificial narrow intelligence, which is also known as weak AI, involves applying AI only to specific tasks. Many of the currently existing systems that claim to use artificial intelligence are actually operating as weak AI, which is focused on a narrowly defined, specific problem. Now take Alexa. Alexa is actually a good example of artificial narrow intelligence. It operates within a limited, predefined range of functions. Right? There is no genuine intelligence and no self-awareness, despite it being a sophisticated example of weak AI. The Google search engine, Sophia, self-driving cars and even the famous AlphaGo fall under the weak AI category.

Then we have something known as artificial general intelligence. This is also known as strong AI, and it involves machines that possess the ability to perform any intellectual task that a human being can. Now guys, machines don't currently possess human-like abilities. They have very strong processing units that can perform high-level computations, but they're not yet capable of thinking and reasoning like a human. And there are also many who question whether it would even be desirable. For example, Stephen Hawking warned that strong AI would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded. Right?
So we have a lot of tech giants and a lot of geniuses who are actually worried that if strong AI is ever implemented, it might take over the world. Right? So guys, let me tell you that strong AI is something that has not been implemented yet. We are only at the first stage of artificial intelligence, which is artificial narrow intelligence, also known as weak AI. Right? We haven't yet reached strong AI or artificial super intelligence. Artificial super intelligence is a term that refers to the time when the capabilities of computers will surpass those of humans. Right now, ASI is seen as a hypothetical situation, as depicted in movies and science fiction books, where machines have taken over the world. So artificial super intelligence is something that is very far off, but tech masterminds like Elon Musk believe that artificial super intelligence will take over the world by the year 2040. Right? There are a lot of people who are against the development of artificial general intelligence and artificial super intelligence. A lot of us believe that we should stick to weak AI and not move any further and risk the existence of human civilization. So guys, let me know your thoughts about artificial intelligence in the comment section. I'd love to know what you guys think about AI, and whether you all believe AI will take over the world or not.

So now let's move on and talk about how artificial intelligence is different from machine learning and deep learning. A lot of people tend to assume that artificial intelligence, machine learning and deep learning are the same, because they have common applications, right? For example, Siri is an application of AI, machine learning and deep learning. So how are these technologies related, and how are they different from each other? Artificial intelligence is the science of getting machines to mimic the behavior of human beings. Machine learning is a subset of artificial intelligence that focuses on getting machines to make decisions by feeding them data. Deep learning, on the other hand, is a subset of machine learning that uses the concept of neural networks to solve complex problems. So to sum it up, artificial intelligence, machine learning and deep learning are heavily interconnected fields, right? Machine learning and deep learning aid artificial intelligence by providing a set of algorithms and neural networks to solve data-driven problems. However, AI is not restricted to only machine learning and deep learning; it covers a vast domain of fields, which include natural language processing, object detection, computer vision, robotics, expert systems and so on. Right? So AI is a very vast field. Guys, I hope that clears up the difference between AI, machine learning and deep learning.

Also, a lot of you might be confused about data science. Data science is an umbrella term, right? Data science basically means deriving useful insights from data. So data science actually uses AI, machine learning and deep learning; it implements all three of these technologies in order to derive useful insights from data. Right? Now let's move on to the most interesting topic in artificial intelligence, which is machine learning. Now guys, the term machine learning was first coined by a scientist known as Arthur Samuel in the year 1959. Looking back, that year was probably one of the most significant in terms of technological advancements.
If you browse the internet for what machine learning is, you'll get at least 100 different definitions. In simple terms, machine learning is a subset of artificial intelligence which provides machines the ability to learn automatically and improve from experience, without being explicitly programmed to do so. In a sense, it is the practice of getting machines to solve problems by gaining the ability to think. Now, the question here is, can a machine think, or can a machine make decisions? Well, if you feed a machine a good amount of data, it will learn how to interpret, process and analyze this data by using something known as machine learning algorithms. To give you a basic idea of how the machine learning process works, look at the figure on this slide. A machine learning process always begins by feeding the machine lots and lots of data. By using this data, the machine is trained to detect hidden insights and trends in the data. These insights are then used to build a machine learning model, by using a machine learning algorithm, in order to solve a problem. The basic aim of machine learning is to solve a problem or find a solution by using data.

Moving ahead, I'll be discussing the machine learning process in depth, right? So don't worry if you haven't got the exact idea of what machine learning is yet. Now, the machine learning process involves building a predictive model that can be used to find a solution for a particular problem. A well-defined machine learning process will have around seven steps. It always begins with defining the objective, followed by data gathering or data collection. Then we have something known as preparing the data, which is also called data preprocessing. Then we have data exploration, or exploratory data analysis. This is followed by building a machine learning model. Then we have model evaluation, and finally, predictions. This is how the process of machine learning works.

To understand the machine learning process, let's assume that you've been given a problem that needs to be solved by using machine learning. Let's say that the problem is to predict the occurrence of rain in your local area by using machine learning. Now, the first step is to define the objective of the problem. Right? At this step, we must understand what exactly needs to be predicted. In our case, the objective is to predict the possibility of rain by studying the weather conditions. So at this stage, it is essential to take mental notes on what kind of data can be used to solve this problem, or the type of approach that you must follow to get to the solution. The questions you should be asking yourself are: what are we trying to predict? Here, we're trying to predict whether it'll rain or not. Right? You need to understand what the target features are. Target features are basically the variables that you need to predict. Here, we need to predict a variable that will show us whether it's going to rain tomorrow or not. Then you must also understand what kind of data you'll need to solve this problem. Apart from that, you need to know what kind of problem you're facing: is it a binary classification problem, or is it a clustering problem? Now, if you don't know what classification and clustering are, don't worry; I'll be talking about all of these things in the upcoming slides. So your first step is to define the objective of your problem. You need to understand what exactly needs to be done here, right? How can you solve this problem?
Moving on, your next step is to gather the data that you need. At this stage, you must be asking questions such as: what kind of data is needed to solve this problem? Is the data available to me? And if it's not available, how can I get the data? Right? Once you know the type of data that is required, you must understand how you can derive this data. Data collection can either be done manually or by web scraping. But don't worry if you're a beginner and you're just looking to learn machine learning; you don't have to worry about getting the data. There are thousands of data resources on the web. You can just download a data set and get going. Coming back to the problem at hand, the data needed for weather forecasting includes measures such as the humidity level, the temperature, the pressure, the locality, whether or not you live in a hill station and so on. Such data must be collected and stored for analysis. This is where you collect all the data.

Now, step number three is data preparation. The data that you collected is almost never in the right format. All right? Even if you collect it from an internet resource, even if you download it from some website, your data is not going to be clean, and it's not going to be in the correct format. There are always going to be some inconsistencies in your data. Inconsistencies include missing values, redundant variables and duplicate values. Removing these inconsistencies is very essential, because they might lead to wrongful computations. Therefore, at this stage, you scan the entire data set for missing values and fix them right here. Now, this is actually one of the most time-consuming steps in a machine learning process. If you ask a data scientist which step they hate the most, or which step is the most time-consuming, they're probably going to tell you data preprocessing and data cleaning. Right? It's one of the most tiresome tasks, because you need to look at all the values that are there. You need to find any missing values, and any data that is not relevant to you, right? All of this has to be removed, so that you can analyze the data in a better way.

Now, step number four is exploratory data analysis. So guys, this stage is all about diving deep into your data and finding all the hidden data mysteries. EDA, or exploratory data analysis, is like the brainstorming stage of machine learning. Data exploration involves understanding the patterns and the trends in your data. At this stage, all the useful insights are drawn, and the correlations between the variables are understood. For example, in the case of predicting rainfall, we know that there is a strong possibility of rain if the temperature has fallen low. Such correlations have to be understood and mapped at this stage. EDA is actually the most important step in a machine learning process, because here is where you understand your data and how it is going to help you predict the outcome.

Moving on to step number five, we have building a machine learning model. All the insights and patterns that you got from the data exploration stage are used to build the machine learning model. This stage always begins by splitting the data set into two parts, that is, training and testing data. Now remember that the training data will be used to build and analyze the model.
The model is basically the machine learning algorithm that predicts the output by using the data that you feed to it. Examples of machine learning algorithms are logistic regression and linear regression. Now, don't worry about choosing the right algorithm just yet; first, we'll focus on what the machine learning process is. Choosing the right algorithm will depend on several factors: the type of problem you're trying to solve, the data set, and the level of complexity of the problem. In the upcoming sections, we'll discuss all the different types of problems that can be solved by using machine learning.

Moving on to step number six, we have model evaluation and optimization. After you build a model by using the training data set, it is finally time to put the model to the test. The testing data set is used to check the efficiency of the model and how accurately it can predict the outcome. Once the accuracy is calculated, any further improvements to the model have to be implemented at this stage. Methods like parameter tuning and cross-validation can be used to improve the performance of the model. Before I move any further, I don't know if all of you know what training and testing data sets mean. In machine learning, the input data is always divided into two sets: we have something known as the training data set and the testing data set. So in machine learning, you always split the data into two parts. Right? This process is known as data splicing. The training data set will be used to build the machine learning model, and the testing data set will be used to test the efficiency of the model that you built. This is what training and testing data sets are. They're not different data that you derived separately; they're both parts of the same input data set. The only thing is, you're splitting the data set so that you can train the model on one part and test the model on the other. Now, remember that the training data set is always larger in size compared to the testing data set, because obviously you are training and building the model by using the training data set; the testing data set is just for evaluating the performance of your model.

Now let's move on and understand step number seven, which is predictions. Once the model is evaluated and you've improved it, it is finally used to make predictions. The final output can be a categorical variable or a continuous quantity. Right? All of this depends on the type of problem you're trying to solve. Don't worry, I'll be discussing the types of problems that can be solved using machine learning in the upcoming slides. In our case of predicting the occurrence of rainfall, the output will be a categorical variable. A categorical variable is anything that takes a categorical value. For example, gender is a categorical variable: gender is either male, female or other. It has a defined set of values; that is a categorical variable. So guys, that was the entire machine learning process. As we continue with this tutorial, in the upcoming sections, I will be running a demo in Python in which we will be performing weather forecasting. So make sure you remember all these steps that I spoke about, because I'll be going through all of them using Python. We'll be coding everything that we just spoke about, and there's a small sketch of steps five through seven right below. Now, the next topic we're going to discuss is the types of machine learning.
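Here is that sketch. It uses scikit-learn's built-in breast cancer data set as a stand-in (the rain data comes later in the actual demo), so the data set and parameters here are illustrative choices, not the video's own code.

```python
# A minimal end-to-end sketch of steps 5-7: split the data, build a
# model on the training part, evaluate it on the testing part, then
# make predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# Step 5 starts with data splicing: 75% training, 25% testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Build the model on the training data.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Step 6: evaluate the model on the testing data.
print("accuracy:", model.score(X_test, y_test))

# Step 7: predictions - a categorical (0/1) output in this case.
print("first 5 predictions:", model.predict(X_test[:5]))
```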
A machine can learn to solve a problem by following any one of three approaches. You can say that there are three ways in which a machine learns: supervised learning, unsupervised learning and reinforcement learning. These are the three methods by which you can train a machine to learn.

So first, let's discuss supervised learning. What is supervised learning? Supervised learning is a technique in which we teach or train the machine by using data which is labeled. To understand this better, let's consider an analogy. As kids, we all needed guidance to solve math problems; at least, I had a really tough time solving math problems. Our teachers always helped us understand what addition is and how it is done. Similarly, you can think of supervised learning as a type of machine learning that involves a guide. The labeled data set acts as a teacher that will train the machine to understand the patterns in the data. The labeled data set is nothing but the training data set. To better understand this, consider the figure right here. We are feeding the machine images of Tom and Jerry, and the goal is for the machine to identify and classify the images into two separate groups: one group will contain Tom images and the other group will contain images of Jerry. Now, pay attention to the training data set. The training data set that is fed to the model is labeled, as in, we're telling the machine, listen, this is how Tom looks and this is how Jerry looks. We're basically labeling each data point that we're feeding to the machine. Right? If the image is of Tom, we've labeled it as Tom, and if the image is a Jerry image, then we label it as Jerry. By doing this, you're training the machine using labeled data. So to sum it up, in supervised learning there is a well-defined training phase, done with the help of labeled data. Right? The rest of the process is the same: after you feed the machine labeled data, you perform data cleaning, then exploratory data analysis, followed by building the machine learning model, then model evaluation and finally your predictions. Also, one more point to remember is that the output you're going to get from a supervised learning algorithm is a labeled output: this Jerry will be labeled as Jerry and this Tom as Tom. Basically, you'll get a labeled output.

Now let's understand what unsupervised learning is. Unsupervised learning involves training by using unlabeled data and allowing the model to act on that information without any guidance. So think of unsupervised learning as a smart kid that learns without any guidance. In this type of machine learning, the model is not fed any labeled data; the model has no clue that this image is Tom and this image is Jerry. It figures out patterns and the differences between Tom and Jerry on its own by taking in tons of data. For example, it identifies prominent features of Tom, such as pointy ears and bigger size, to understand that this image is of type one. Similarly, it finds such features in Jerry and knows that this is another type of image, maybe type two. Right? Therefore, it classifies the images into two different clusters without knowing who Tom is and who Jerry is. The main idea behind unsupervised learning is to understand the patterns in your data set and form clusters based on feature similarity, as the short sketch below shows.
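Here is a minimal unsupervised learning sketch using K-means (an algorithm we'll name properly a little later): the points are made-up 2-D data, and the model groups them into clusters purely by feature similarity, with no labels and no guidance.

```python
# A minimal clustering sketch: K-means groups unlabeled points by
# feature similarity alone - no labels are ever provided.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical unlabeled data: two visibly separate groups of points.
points = np.array([[1, 2], [1, 4], [1, 0],
                   [10, 2], [10, 4], [10, 0]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

# Each point gets a cluster id (0 or 1) without the model ever being
# told what the clusters "mean".
print(kmeans.labels_)           # e.g. [1 1 1 0 0 0]
print(kmeans.cluster_centers_)  # the center of each discovered group
```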
Basically, it'll group similar images or similar data points into one cluster, and it'll form another cluster which is totally different from the first cluster. So look at the output over here: the unlabeled output is basically clusters, or groups, of two different kinds of data.

Next, we have something known as reinforcement learning. Now, reinforcement learning is comparatively different, right? It's pretty different from supervised and unsupervised learning. It is basically a part of machine learning where you put an agent in an environment, and this agent learns to behave in the environment by performing certain actions and observing the rewards which it gets from those actions. To understand reinforcement learning, imagine that you were dropped off on an isolated island. What would you do? Initially, we'd all panic, right? But as time passes, you will learn how to live on the island. You will explore the environment, you will understand the climate conditions, you'll understand the type of food that grows there, you'll know what is dangerous to you and what is not, and you'll understand which food is good for you and which is not. This is exactly how reinforcement learning works. It involves an agent (which is basically you, stuck on the island) that is put in an unknown environment (which is the island), where the agent must learn by observing and performing actions that result in rewards. Reinforcement learning is mainly used in advanced machine learning areas such as self-driving cars, AlphaGo and so on. So guys, that sums up the types of machine learning.

Before we go any further, I'd like to discuss the differences between supervised, unsupervised and reinforcement learning. First of all, we have the definition. Supervised learning is all about teaching a machine by using labeled data. Unsupervised learning, like the name suggests, has no supervision: the machine is trained on unlabeled data without any guidance. Reinforcement learning is totally different; here you have an agent which interacts with the environment by producing actions and discovers errors and rewards. Now, the types of problems solved using supervised learning are regression and classification problems. We'll discuss what regression, classification and clustering are in the upcoming slides, right? So don't worry if you don't know what they are. Unsupervised learning is mainly used to solve association and clustering problems, and reinforcement learning is for reward-based problems. Now, what is the type of data in each? In supervised learning, it is labeled data; that is the main difference between supervised learning and any other type of machine learning. In unsupervised learning, we have unlabeled data, whereas in reinforcement learning we have no predefined data at all: the machine has to perform everything from scratch. It has to collect data, analyze it, and do everything on its own. Now, the training in supervised learning uses external supervision, meaning that we have external supervision in the form of the labeled training data set. In unsupervised learning, there is obviously no supervision, since there is an unlabeled data set. In reinforcement learning, there is no supervision at all. Now, the approach to solving a supervised learning problem is basically to map your labeled input to your known output. In unsupervised learning, the machine is going to understand the patterns and discover the output on its own. And in reinforcement learning?
Here, the agent will follow something known as a trial-and-error method. Right? It's totally based on the concept of trial and error. Popular algorithms under supervised learning are linear regression, logistic regression, support vector machines and so on. Under unsupervised learning, we have the famous K-means clustering algorithm. Under reinforcement learning, we have the Q-learning algorithm, which is one of the most important algorithms; it is basically the logic behind the famous AlphaGo game. I'm sure all of you have heard of that. So guys, these were the differences between supervised, unsupervised and reinforcement learning.

Now let's move on and discuss the types of problems that you can solve by using machine learning. There are three types of problems in machine learning, and any problem that needs to be solved with machine learning falls into one of these three categories. First, what is regression? In this type of problem, the output is a continuous quantity. For example, if you want to predict the speed of a car given the distance, that is a regression problem. So first of all, what is a continuous quantity? A continuous quantity is any variable that can hold a continuous value, that is, a variable that can have an infinite number of values. For example, the height of a person or the weight of a person is a continuous quantity, right? I can have a weight of 50.1 kg or 50.12 kg or 50.112 kg. That is a continuous quantity. Regression problems can be solved by using supervised learning algorithms.

Another type of problem is the classification problem. Here, the output is always a categorical value. For example, classifying your emails into two classes, spam and non-spam, is a classification problem. Here again you'll be using supervised learning, with classification algorithms such as support vector machines, Naive Bayes, logistic regression and so on. Then we have the clustering problem, and this type of problem involves assigning the input into two or more clusters based on feature similarity. For example, clustering viewers into similar groups based on their interests, their age or their geography can be done by using unsupervised learning algorithms like K-means clustering. One thing you need to understand is that under supervised learning you can solve regression and classification problems, and under unsupervised learning you can solve clustering problems. Reinforcement learning is something else altogether, right? There you can solve reward-based problems and more complex, deeper problems.

So now let's move on and understand the different machine learning algorithms. I will not be going into depth on the machine learning algorithms here, because there are a lot of algorithms to cover, but we have content around almost every machine learning algorithm out there. So I'm just going to show you a hierarchical diagram of how the algorithms are structured. Under machine learning we have three types of learning: supervised, unsupervised and reinforcement. Under supervised learning, we have regression and classification problems, and under unsupervised learning, we have clustering problems. Reinforcement learning is completely different; I'll be leaving a link in the description specifically for reinforcement learning, and you can check out the entire content on reinforcement learning there. Now, regression problems can be solved by using linear regression;
algorithms such as decision trees and random forests can also be used for regression problems, but usually decision trees and random forests are used to solve classification problems. Famous classification algorithms include K-nearest neighbors (KNN), decision trees, random forests, logistic regression, Naive Bayes and support vector machines; all of these are classification algorithms. Coming to unsupervised learning, we have clustering and association analysis. Right? Clustering problems can be solved by using K-means, and association analysis can be solved by using the Apriori algorithm. The Apriori algorithm is mainly used in market basket analysis. Right? For this algorithm as well, I'll be leaving a link in the description. We've performed an excellent demo wherein we showed how market basket analysis can be done by using the Apriori algorithm. The Markov model is also explained in one of the videos; I'll be leaving that link in the description box.

Now, to sum up machine learning for you, I'll be running a small demonstration in Python, right? Like I promised earlier, we'll be using Python to understand the whole machine learning process. All right, so let's get started with that demo. So guys, for those of you who don't know Python, I will leave a couple of links in the description box so that you can understand Python. But apart from that, Python is pretty understandable: if you just look at the code, you'll know what exactly I'm talking about, right? So don't worry; I'll also be explaining everything in the code. I'm using PyCharm in order to run the demo, right?

The main aim of our demo is to build a machine learning model that will predict whether or not it will rain tomorrow by studying a past data set. This data set contains around 145,000 observations of daily weather conditions as observed in Australia, right? The data set has around 24 features, and we will be using 23 of them to predict the target variable, which is rain tomorrow. I collected this data set from Kaggle. For those of you who don't know, Kaggle is an online platform where you can find hundreds of data sets, and a lot of machine learning competitions are held there. It's an interesting website. Now, the problem statement itself is to build a machine learning model that will predict whether or not it will rain tomorrow. This is clearly a classification problem: the machine learning model has to classify the output into two classes, that is, either yes or no. Yes will stand for "it will rain tomorrow", and no will denote that it will not rain tomorrow. Right? This is a classification problem. So I hope the objective is clear.

Right, so we'll begin the demonstration by importing the required libraries. First of all, for mathematical computations, we'll be importing the NumPy library. We'll also be importing the pandas library for data processing. Next, we will load the CSV file; basically, my data is stored in CSV format in this file. weatherAUS.csv is my data set, and I've saved this file at this path. Right? So that's what I'm doing here: I'm loading my data set and storing it in a variable known as df. Next, what we'll do is look at the size of our data frame; let's print the size of the data frame. We'll also display the first five observations in our data frame.
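The transcript describes this code rather than showing it, so here is a minimal sketch of the loading step; the file path is an assumption, so adjust it to wherever you saved the Kaggle CSV.

```python
# Loading the rain data set described above. The file name/path is an
# assumption - point it at your own copy of the Kaggle CSV.
import numpy as np   # mathematical computations
import pandas as pd  # data processing

df = pd.read_csv("weatherAUS.csv")

# Size of the data frame: roughly (145000, 24) for this data set.
print(df.shape)

# First five observations, to eyeball the features.
print(df.head())
```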
So let's look at the output. Basically, we have around 145,000 observations and 24 features. The 24 features are the variables that are there in my data set: for example, date is a variable, location is a variable, minimum temperature, and so on, all the way to rain tomorrow. So I have around 24 features in my data set. Right? Now, the variable that I have to predict is rain tomorrow. Okay? If the value of rain tomorrow is no, it denotes that it will not rain tomorrow, but if the value is yes, then it denotes that it will rain tomorrow. So rain tomorrow is my target variable, also known as the output variable; I'll be finding out whether it's going to rain tomorrow or not. My input variables will be the other 23 variables: date, location, minimum temperature, rain today, risk, all of these will be my input variables. These variables are also known as predictor variables; basically, they are used to predict your outcome.

Now, the next step is checking for null values. This is basically data preprocessing. Let me just comment it for you: this stage is data preparation, or data preprocessing. Here we start checking for any null values or missing values, and that's exactly what I'm doing over here: I am checking for any missing or null values in my data set. If you notice the output, it shows that the first four columns have more than 40% null values, right? So it's always best for us to remove such features or variables, because they'll not help us in our prediction. During data preprocessing, it is always necessary to remove the variables that are not significant; unnecessary data will just increase our computations. That's why it's always best if we remove the unwanted or unnecessary variables.

Now, apart from removing these four variables, we'll also remove the location variable and the date variable. Right? I'll come to the other variable in a minute. We're removing the location and date variables because both of them are not needed in order to predict whether it'll rain tomorrow; we do not need to know the location and the date. We'll also be removing the RISK_MM variable. The RISK_MM variable basically tells us the amount of rain that might occur the next day. Right? Now, this is a very informative variable, and it might actually leak information to our model: by using this variable, we'd easily be able to predict the outcome, and there's no point in doing that. This variable would give us too much information, right? So that's why we're going to remove this variable as well; it would leak a lot of information. After that, if you print the shape of your data frame, we have only 17 variables and the same number of observations. After this, we'll just look for any null values and remove them: this dropna function will just remove all the rows with null values. Right? Then, if you print the shape of your data frame, we'll have around 112,000 rows with 17 variables. This is the shape of the data set after removing all the null values and all the redundant or unnecessary variables.

Now it's time to remove the outliers in the data. After you remove any null values, you should also check your data set for any outliers. An outlier is a data point that is very different from your other observations. Outliers usually occur because of miscalculations while collecting the data; they are a sort of error in your data set.
So in this whole code snippet, we're just getting rid of outliers, and this is the output that we get once we drop all our outliers. Next, what we'll be doing is assigning zeros and ones in place of yes and no. The only thing we're doing here is changing the categorical variables from yes and no to 0 and 1. Right? That's exactly what we're doing over here. Now, if there are any unique values, such as character values which are not supposed to be there, we'll be changing them into integer values. That's all we're doing over here.

After this, we'll be normalizing our data set. This is a very important step, because in order to avoid any bias in your output, you have to normalize your input variables. Right? To do this, we can make use of the MinMaxScaler function, which Python provides in a package known as sklearn. You can use that package in order to normalize your data set. After normalizing our data set, this is what it looks like. This is before normalization: you can see that these values are in two digits, whereas these values are in single digits. Right? This causes a lot of bias. But once we normalize the values, all of the values are in a similar range; we have everything in decimals, right? So normalization is something that has to be performed, because if you have a data set like this, your output is not going to be correct, right? That's why we perform normalization. (These preprocessing steps are collected into a single sketch just after this section.)

So now that we are done with preprocessing, it's time for exploratory data analysis. Basically, here what we're going to do is analyze and identify the significant variables that will help us predict the outcome. To do this, we'll be using the SelectKBest function, which is present in the sklearn library. It's a predefined function in Python that will select the most significant predictor variables in our data set. When we run that line of code, we get these three variables as the most significant variables in our data set. Right? The main aim of this demo is to make you understand how machine learning works. That's why, to simplify the computation, we'll assign only one of these significant variables as the input. Instead of taking all three variables as input, we'll select one variable and take that as the input, and the output is the rain tomorrow variable. So basically, we are creating a data frame of the significant variables and choosing this variable in order to predict our outcome. Obviously, our outcome is the rain tomorrow variable. So our input is the humidity level, and our output is whether it'll rain tomorrow.

The next step is data modeling. All of you are aware of what data modeling is. To solve this, we'll be using classification algorithms. Over here, we'll use logistic regression; we will use the random forest classifier, which is another machine learning algorithm; we'll also use the decision tree classifier and the support vector machine. Right? We'll be using all of these algorithms in order to predict the outcome, and we'll also check which algorithm gives us the best accuracy. So guys, we're just using multiple classification algorithms on the same data set; we're not doing anything very complex over here. We start by importing all the necessary libraries for the logistic regression algorithm. We're also going to import time, because we'll be calculating the accuracy and the time taken by the algorithm to get the output.
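Here is that preprocessing sketch. Since the demo's code is only shown on screen, this is a reconstruction: the column names follow the Kaggle rain data set, and details such as which columns get dropped (and skipping outlier removal for brevity) are assumptions, not the video's exact code.

```python
# A minimal sketch of the preprocessing described above. Column names
# follow the Kaggle rain data set; exact choices are assumptions.
# (Outlier removal is skipped here for brevity.)
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, chi2

df = pd.read_csv("weatherAUS.csv")

# Drop the sparse columns, the location/date columns, and the leaky
# RISK_MM column, then drop any remaining rows with null values.
df = df.drop(columns=["Sunshine", "Evaporation", "Cloud3pm", "Cloud9am",
                      "Location", "Date", "RISK_MM"])
df = df.dropna(how="any")

# Replace the yes/no categorical values with 1/0.
df["RainTomorrow"] = (df["RainTomorrow"] == "Yes").astype(int)
df["RainToday"] = (df["RainToday"] == "Yes").astype(int)

# Keep numeric columns for this sketch and normalize them to [0, 1],
# so no feature dominates just by having a bigger scale.
num = df.select_dtypes("number")
features = num.drop(columns=["RainTomorrow"])
X = pd.DataFrame(MinMaxScaler().fit_transform(features),
                 columns=features.columns)
y = num["RainTomorrow"]

# SelectKBest scores each predictor against the target and keeps the
# strongest ones; the humidity features score near the top here.
selector = SelectKBest(chi2, k=3).fit(X, y)
print(X.columns[selector.get_support()])
```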
So the first step is data splicing. I've already mentioned that data splicing is splitting your data set into a testing data set and a training data set. Here, 25% of your data is assigned as the testing data, and the remaining 75% is your training data. Then you create an instance of the logistic regression algorithm; this is the instance that you created. Then you fit the model by using your training data set. So basically, to build your machine learning model, you'll be fitting your training data; the X train and y train variables hold your training data set. After that, you will evaluate the model by using your testing data set, and then you'll calculate the accuracy score. Right? I'll also be printing the accuracy using logistic regression and the time taken using logistic regression. Let's look at the accuracy. Don't worry about these warnings; they are not important. So the accuracy using logistic regression is around 0.83, which is about 83 to 84% accuracy, and this is the time taken. The accuracy is actually pretty good, right? Around 84% is a good number.

Then we have the random forest classifier. Here again, we'll import the libraries that are needed to run the random forest classifier, and we're again calculating the accuracy and the time taken by the classifier. Data splicing, like I mentioned, is splitting the data into testing and training data sets. Then you're just building the model by using the training data set; after that, you evaluate the model by using the testing data set, and you finally calculate the accuracy. The accuracy using random forest is again approximately 84%, which is a really good number.

Then we have the decision tree classifier. Here again, we'll be importing the libraries needed for this classifier, and calculating the accuracy and the time taken by it. Data splicing is followed by building the model using the training data set, evaluating the model using the testing data set, and finally calculating and printing the accuracy. So let's see the accuracy using the decision tree classifier: again, we have an accuracy of around 83 to 84%, which is a pretty good number. And last, we're going to do this with another classification algorithm, known as the support vector machine. Here again, we are importing the needed libraries; then we're calculating the accuracy and the time, performing data splicing, building the model using the training data set, testing the model using the testing data set, and finally printing the accuracy.

So guys, all the classification models gave us an accuracy score of approximately 83 to 84%. And this is exactly how a machine learning process works, right? You begin by importing all your data. Then you perform data preprocessing, or data cleaning. After that, you perform exploratory data analysis, where you understand the important patterns and the important variables in your data set. After that, you build a model. Then you evaluate the model by using the testing data set and finally calculate the accuracy. I showed you all the steps in the machine learning process through a practical demonstration in Python. So guys, give yourselves a pat on the back, because we just understood the whole machine learning process with a small implementation in Python. Now let's move on to our next topic, which is the limitations of machine learning.
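Before that, here is a minimal sketch that consolidates the modeling stage just demonstrated: the four classifiers run on one split, each timed and scored. It reuses the X and y built in the preprocessing sketch above, and the hyperparameters are defaults rather than the video's exact settings, so your numbers may differ slightly from the roughly 83 to 84% quoted.

```python
# Comparing the four classifiers described above on the same split.
# Reuses X and y from the preprocessing sketch; settings are defaults.
import time
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Data splicing: 75% training data, 25% testing data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "support vector machine": SVC(),
}

for name, model in models.items():
    start = time.time()
    model.fit(X_train, y_train)                          # build
    acc = accuracy_score(y_test, model.predict(X_test))  # evaluate
    print(f"{name}: accuracy={acc:.2f}, "
          f"time={time.time() - start:.2f}s")
```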
Before we understand what deep learning is, it's important to know the limitations of machine learning and why these limitations gave rise to the concept of deep learning. One major problem in machine learning is that machine learning algorithms and models are not capable of handling high-dimensional data. Right? They can take in data with 20 to 30 feature variables, but when it comes to data sets which have thousands of variables, classical machine learning does not work; it is not capable enough to process that much data. So high-dimensional data cannot be analyzed, processed and modeled by using machine learning. A related limitation is that it cannot be used in image recognition and object detection, because these applications require handling high-dimensional data.

Another major challenge in machine learning is telling the machine which important features it should look for in order to precisely predict the outcome. Basically, you're selecting the important features for the machine learning model and telling it: these are the important features, and this is what you should use in order to build the model. This process is known as feature extraction. Now, in machine learning this is a manual process: as a programmer, you're going to manually specify which are the important predictor variables. But what happens when your data set has hundreds of variables? How are you going to sit and choose every variable and perform analysis on each one to understand which is a really significant variable? That becomes a very tedious task, right? It's not possible for you to manually sit down with 100 variables, check the correlation of each variable and understand which variable is significant in predicting the output. So performing feature extraction manually is very tedious, and that is one of the major limitations of machine learning.

Now, deep learning comes to the rescue for all of these problems. So let's understand what deep learning is and why we have deep learning in the first place. Deep learning is actually one of the only methods by which we can overcome the challenge of feature extraction. This is because deep learning models are capable of learning to focus on the right features by themselves, requiring minimal human intervention. Meaning, feature extraction is performed by the deep learning model itself: you don't have to manually say that this feature is important, that feature is important, choose this feature for predicting the output. None of this is needed in deep learning; the model itself will learn which features are most significant in predicting the output. Also, deep learning is mainly used to deal with high-dimensional data. Right? It is based on the concept of neural networks and is often used in object detection and image processing. This is exactly why we need deep learning: it solves the problems of processing high-dimensional data and of manual feature extraction.

Now, how exactly does deep learning work? Deep learning mimics the basic component of the human brain, called a brain cell, or neuron. Inspired by the neuron, an artificial neuron was developed. Deep learning is based on the functionality of a biological neuron. So let's understand how we mimic this functionality in an artificial neuron. Now guys, an artificial neuron is also known as a perceptron. Let's understand what this biological neuron does and how deep learning is based on this concept.
In a biological neuron, you can see these dendrites; in the image you see structures known as dendrites, which receive inputs. These inputs are summed in the cell body, and the result is passed on to the next neuron through the axon. Similar to the biological neuron, a perceptron or artificial neuron receives multiple inputs, applies various transformations and functions, and produces an output. That's the basis of how artificial neural networks, and deep learning, work. Now guys, the human brain consists of multiple connected neurons, a neural network; similarly, by combining multiple perceptrons, we get what are known as deep neural networks. The main idea behind deep learning is neural networks, and that's what we're going to learn about. So what exactly is deep learning? Deep learning is a collection of statistical machine learning techniques used to learn feature hierarchies, based on the concept of artificial neural networks. A deep neural network has three kinds of layers: the input layer, the hidden layers, and the output layer. The input layer is the first layer and receives all the inputs: everything is fed into this layer. The last layer is obviously the output layer, which provides your desired output. All the layers between the input and the output layer are known as hidden layers, and the number of hidden layers in a deep learning network depends on the type of problem you're trying to solve and the data you have. We'll get into the depth of what exactly a hidden layer does, but for now, this is how a neural network is structured in deep learning. Deep learning is used in highly computational use cases such as face verification, self-driving cars, and so on. So let's understand the importance of deep learning by looking at a real-world use case. I'm sure all of you have heard of the company PayPal. PayPal makes use of deep learning to identify possible fraudulent activities; the company uses deep learning for fraud detection. PayPal recently processed over $235 billion in payments from 4 billion transactions by its more than 170 million customers, and it handles this much data with the help of deep learning. PayPal uses machine learning and deep learning algorithms to mine data from each customer's purchasing history, in addition to reviewing patterns of fraud stored in its databases, in order to predict whether a particular transaction is fraudulent or not. The company has been relying on machine learning and deep learning technology for around 10 years. Initially the fraud monitoring team used simple, linear models, but over the years the company switched to the more advanced technology of deep learning. This shows how deep learning is used in more advanced, more complicated use cases. PayPal's fraud risk manager and data scientist was quoted as saying that what they enjoy from more modern, advanced machine learning is its ability to consume a lot more data, handle layers and layers of abstraction, and see things that a simpler technology, or even human beings, might not be able to see.
He also said that a simple linear model is capable of consuming around 20 variables, but with deep learning technology you can run thousands of data points; there is a magnitude of difference, and you'll be able to analyze a lot more information and identify patterns that are far more sophisticated. So by implementing deep learning technology, PayPal can analyze millions of transactions to identify fraudulent activity. And not only PayPal: Facebook also makes use of deep learning, for face verification. You've all seen the tagging feature on Facebook, where we tag our friends in photos; all of that is based on deep learning and machine learning. So guys, that was a real-world use case to show you how important deep learning is. Now let's go deeper and look at what exactly a perceptron is. A perceptron is basically a single-layer neural network that is used to classify linear data; it is the most basic component of a neural network. A perceptron has four important components: inputs; weights and bias; the summation function; and the activation or transformation function. Before I discuss the diagram with you, let me tell you the basic logic behind a perceptron. You feed input variables into the perceptron: x1, x2, and so on up to xn. w1, w2 up to wn stand for the weights assigned to each of these inputs; a specific weight is randomly initialized in the beginning for each input. Next, you have the summation element: you multiply each input by its respective weight and add up all these products; that is your summation function. After this comes the transfer function, also known as the activation function, which maps your weighted input to your desired output. So your input goes through summation and then an activation function in order to get to the output. Remember that neural networks work the same way as a perceptron, so if you want to understand how deep neural networks work, you need to understand what a perceptron does: a deep neural network is nothing but multiple perceptrons. Let me walk through the whole thing once more. All your inputs are multiplied by their respective weights; you add all the multiplied values and call that the weighted sum; then you apply the weighted sum to the chosen activation or transfer function. The activation function is very similar to what happens in our brain: neurons become active after a certain potential is reached, and that threshold is known as the activation potential. Mathematically, there are a few functions that represent the activation function: the signum, the sigmoid, the tanh, and so on. You can think of an activation function as a function that maps the input to the respective output. Putting this together, a perceptron's forward pass looks like the small sketch below; after that, let's talk about weights and bias.
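Here is a tiny sketch of that forward pass; the inputs, weights and bias are made-up illustrative values, with sigmoid as the example activation function.

```python
# A minimal sketch of a single perceptron's forward pass.
import numpy as np

def sigmoid(z):
    """Activation function: maps the weighted sum to a value in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.3, 0.9])    # inputs x1, x2, x3 (illustrative values)
w = np.array([0.8, 0.1, 0.4])    # randomly initialized weights w1, w2, w3
b = 0.2                          # bias, which shifts the activation curve

weighted_sum = np.dot(w, x) + b  # summation: sum of w_i * x_i, plus bias
output = sigmoid(weighted_sum)   # activation maps the sum to the output
print(output)                    # roughly 0.73 for these values
```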
Now you must be wondering why we have to assign weights to each input. Weights show the strength of a particular input, or how important a particular input is for predicting the output; in simple words, the weight denotes the importance of an input. Bias is a value that allows you to shift the activation function curve in order to get a more precise output. I hope all of you are now clear on inputs, weights, summation and the activation function. One important thing I forgot to mention: a single-layer perceptron has no hidden layers. There is only an input layer, an output layer and the transformation functions in between; that's all there is in a perceptron. Now, a perceptron, as I mentioned, can solve only linear problems. If you look at this data distribution, how do you think we can solve it? This data is not linearly separable, so you cannot use a single-layer perceptron to separate it. That's why we need something known as a multi-layer perceptron with back propagation, which I'll explain on the next slide. Complex problems that involve many parameters and high-dimensional data can be solved using a multi-layer perceptron. A multi-layer perceptron is the same as a single-layer perceptron; the only difference is that it has hidden layers. The number of hidden layers in a model depends on various factors: as I told you, it depends on the complexity of the problem you're trying to solve, the number of inputs in your data, and so on. It works the same way: all your inputs are multiplied by your weights, you take the summation, and then a transformation or activation function is applied. While designing a neural network, as I said at the beginning, we initialize the weights with random values; we do not have a specific value for each weight, we just pick random ones. But there's no guarantee that the weight values we've selected are correct. Whatever weight we've assigned to each input denotes the importance of that input variable, so we need to update the weights in such a way that they reflect the true significance of each input. Initially we select random values for the weights, and say we use those weights to compute our output. What happens is that the output comes out very different, very imprecise, compared to the actual output; the error value is huge. So how will you reduce the error? The key quantity in a neural network is the weight you give to an input variable: through the weight, you're telling the network how important that variable is. So what if you randomly assign some weight and the output comes out wrong? The first thing that comes to mind is that you need to change the weights, because the weights signify the importance of the variables. Basically, we need to get the model to change its weights in such a way that the error becomes minimal. To put it another way: we need to train our model, and one way to train the model is called back propagation.
So in back propagation, what happens is this: once you've initialized a weight for each input, you calculate the output. Say the output has a very high error value. What you'll do is propagate backwards: you go back and keep updating the weights in such a way that the error becomes minimal. This is exactly what back propagation is. You go back towards the first layer, updating each of the weights so that your output becomes more precise. So guys, the weights and the error in a neural network are highly related: by updating a weight in a particular direction, the error will decrease. You need to figure out how to update the weight: do you have to increase it or decrease it? Once you figure that out, you just keep following that direction so that your error is minimized. The final outcome of back propagation is that you select the weights that minimize the error function, and then you use those weights to solve the whole problem. That's what back propagation is about. Now, in order to help you understand deep neural networks, let's look at a practical implementation. Again, guys, I'll be using Python to run the demo; if you don't have a good grasp of Python, check the description, where I'll leave a couple of links about Python programming. In this demo, I'll walk you through one of the most important applications of deep learning: I'll demonstrate how you can construct a high-performance model to detect credit card fraud, using deep learning models. Before that, let me tell you something about our data set. The data set contains transactions made by credit cards in September 2013 by European cardholders. It covers transactions that occurred over two days, with 492 frauds out of approximately 285,000 transactions, so the data set is quite unbalanced: the positive class, i.e. the fraudulent class, accounts for only 0.172% of all transactions. Again, we start by importing the required packages: Keras, Matplotlib, and sklearn for pre-processing, including MinMaxScaler for normalization. We import our data set, which is in CSV (comma-separated values) format, and store it in a variable; this is the path to my data set. Then we print out the first five rows. Here you can see the time of the transaction, followed by V1, V2, V3 and so on: these are the features of our data set. I'm not going to go into depth on what these features stand for, because this demo is about understanding deep learning; V1, V2, V3 and the rest are predictor variables that will help us predict the class, containing information about your transaction such as the amount spent or the time of the transaction. Then we have the Amount variable, which denotes the amount spent, and finally the Class variable, which is our output or target variable. A value of zero denotes that there was no fraudulent activity, while a value of one means the transaction is fraudulent. For example, this transaction is not fraudulent, which is why it has a value of zero. So that is our data set; the sketch below shows roughly what this loading step looks like.
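A rough sketch of the loading step, assuming the credit card fraud CSV is saved locally under the hypothetical name creditcard.csv:

```python
# Load the credit card transactions data set and take a first look.
import pandas as pd

df = pd.read_csv("creditcard.csv")   # columns: Time, V1..V28, Amount, Class
print(df.head())                     # the first five rows of the data set

# Count the samples per class: 0 = non-fraudulent, 1 = fraudulent
print(df["Class"].value_counts())
```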
Next, we count the number of samples in each class. We have class 0, which denotes the normal, non-fraudulent transactions, and class 1, which denotes the fraudulent transactions: around 492 fraudulent transactions and around 284,315 non-fraudulent ones. When you see this, you know our data set is highly unbalanced, meaning one class has a really small count compared to the other; there's no balance between the two classes. So here we sort the data set by class for stratified sampling. Stratified sampling is a statistical technique for sampling your data set, and this type of sampling is a good idea when you have an unbalanced data set. Next, we perform data pre-processing. First, we drop the entire Time column: we do not need the time of the transaction in order to understand whether it was fraudulent or not, so we get rid of this unnecessary variable. (Note that dropping a column here is just pandas' drop; it's not the same thing as the dropout regularization method we'll use later inside the network.) After dropping the Time variable, we assign the first 3,000 samples to a new data frame, df_sample, which will hold our first 3,000 samples.

### [3:17:04](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=11824s) Artificial Neural Network

And we're going to use those 3,000 samples. Here we count the number of samples per class again, and this time class 0 has 2,508 samples and class 1 has 492. This makes the data set far more balanced compared to our old data set. Next, we randomly shuffle the data set in order to remove any sort of bias in the ordering. After that, we split our data set into two parts, one for training and the other for testing; this is also known as data splicing. Then we split each data frame into features and labels, meaning your inputs and your outputs, and we do this for both the training data and the testing data; all we're doing is separating the input from the output. Next, we look at our training data set by printing its shape: it has around 2,400 observations and 29 variables, or 29 features. Similarly, we print the size of the test data frame. After that, we perform normalization using MinMaxScaler: in normalization we scale all the predictor variables into the same range, so that no variable biases the prediction simply because of its scale. After this, we define a function to plot learning curves, one for the training phase and one for the testing phase; I'll show you the output in a couple of minutes. For now, the sketch below summarizes this pre-processing flow, and then we move on to the main part, which is model creation.
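The sketch below is a rough reconstruction of the pre-processing flow just described; it assumes `df` is the DataFrame loaded earlier, and the sort direction (fraud class first, so the 3,000-row sample contains all 492 frauds) is an assumption made to match the class counts above.

```python
# A rough sketch of the pre-processing flow: drop Time, sample, shuffle,
# split, and normalize. Variable names are illustrative.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

df = df.sort_values(by="Class", ascending=False)  # sort by class for sampling
df = df.drop(["Time"], axis=1)                    # drop the Time column
df_sample = df.iloc[:3000]                        # keep the first 3,000 samples
df_sample = df_sample.sample(frac=1)              # randomly shuffle the rows

X = df_sample.drop(["Class"], axis=1)             # features (inputs)
y = df_sample["Class"]                            # labels (outputs)

# Data splicing: 80% training, 20% testing (~2,400 training rows)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Normalization: scale every predictor variable into the same range
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```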
In this demo, we'll use three fully connected layers, and we'll also use the dropout technique. Dropout is a type of regularization used to avoid overfitting in a neural network: during the training phase, randomly selected neurons are temporarily dropped. We'll use ReLU as the activation function, which is an activation function just like sigmoid and tanh. The type of model we'll use is the Sequential model, which is the easiest way to build a model in Keras (the library I imported at the beginning): it lets you build the model layer by layer, with each layer's weights connecting it to the layer that follows. You then use the add function to add the dense layers, which are your hidden layers; in our model we'll add two of them. A dense layer is the standard layer type that works for most cases: in a dense layer, all the nodes of the previous layer connect to the current layer. Don't get too bogged down in what exactly is happening here: all I'm doing is creating a Sequential model, setting the number of units for each dense (hidden) layer, and assigning a dropout value. Dropout prevents overfitting, which can occur when your model memorizes the training data set and which reduces a model's accuracy on new data. So in the first hidden layer we have around 200 units with the ReLU activation function; then we add the second dense layer, again with 200 neurons and ReLU, with the kernel initializer set to uniform, meaning the weights are initialized from a uniform distribution; and then we add a dropout layer of 0.5. The dropout value of a network has to be chosen very wisely: a value that is too low will have minimal effect, and a value that is too high will result in under-learning by the network. 0.5 is a standard dropout value. This last layer is our output layer, and it obviously has only one neuron, which gives us the output class: zero denotes non-fraudulent transactions and one denotes fraudulent transactions. Since the number of neurons is only one, the activation function here is sigmoid. After that, we print the model summary. Before I show you that, let's understand what optimization functions do. An optimizer takes care of the computations needed to change the network's weights and biases; if you remember back propagation, where the weights get updated, all of that is done by the optimizer. Here we select the Adam optimizer, one of the current default optimizers in deep learning; it stands for adaptive moment estimation. We don't have to get into the depths of it, since these optimizers are predefined in the Keras package itself. After this, we fit the model using the training features, setting epochs to 200 and batch size to 500. Let me tell you what these mean: a batch size of 500 means the inputs go into the network in batches of 500 samples at a time, which also helps us avoid overfitting, and 200 epochs means the training will iterate over the data 200 times; it's the number of times we train our model. After that, we plot the training history: the accuracy curve for the training phase and the loss (error) curves. Then comes evaluation: we test the model on the testing data set and print its accuracy there. Finally, we plot a heat map, which I'll show you; in that last block of code, all we're doing is printing an accuracy plot, a heat map comparing the correctly predicted values with the incorrectly predicted ones. So this is our training history: blue stands for the training phase, and this is our validation or prediction stage. That was our accuracy curve, and this is our loss curve. Below is a sketch of how this model definition and training looks in Keras.
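A minimal Keras sketch of the model described above. The layer sizes, dropout value and training settings follow the walkthrough; anything else, such as input_dim=29 (from the 29 features) and the binary cross-entropy loss (a standard choice for a two-class problem that the walkthrough doesn't name), is an assumption.

```python
# A minimal sketch of the fraud-detection model, via the Keras API.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential()
# First dense (hidden) layer: 200 units, ReLU activation
model.add(Dense(200, activation="relu", input_dim=29,
                kernel_initializer="uniform"))
model.add(Dropout(0.5))                     # dropout to prevent overfitting
# Second dense layer: again 200 neurons with ReLU
model.add(Dense(200, activation="relu", kernel_initializer="uniform"))
model.add(Dropout(0.5))
# Output layer: a single neuron with sigmoid -> class 0 or 1
model.add(Dense(1, activation="sigmoid"))

model.summary()

# Adam optimizer; binary cross-entropy is assumed as the loss here
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# 200 epochs in batches of 500 samples; X_train/y_train come from the
# pre-processing sketch earlier
history = model.fit(X_train, y_train, epochs=200, batch_size=500,
                    validation_data=(X_test, y_test))
print(model.evaluate(X_test, y_test))       # loss and accuracy on test data
```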
Now, when you compare the training curve to the validation stage, they're quite similar, meaning our model is doing pretty well. And this is the heat map I was talking about: it gives us the class for each of our predictions, plotting which classes we predicted correctly; for each data point, it tells us whether we predicted it correctly or not. It's essentially a confusion matrix in the form of a heat map. These are all our epochs, the 200 iterations we went through; at, say, the 50th iteration it shows us the loss and the accuracy: here we have 88%, 90%, 92%. If you look carefully at the accuracy values across epochs, you'll see that as we train the model more, the accuracy keeps increasing. Initially, around epoch 15, our accuracy was about 83%, but as we kept training a little more, it rose to 90, 91, 94, 95, 96, and so on. Basically, the more you train your model, the better it gets. At the end, I print out the false positive rate and the false negative rate, which denote how many of the data points I was able to correctly flag as fraudulent and how many I predicted wrongly. So guys, that was the entire demo on deep learning. If you have any doubts regarding it, please mention them in the comment section and I'll answer your queries. Now let's look at our last topic for the day, which is natural language processing. Before we understand what natural language processing is, let's understand the need for it, along with a process known as text mining. Text mining and natural language processing are heavily correlated, and I'll talk about both in the upcoming slides. For now, let me tell you why we need natural language processing and text mining. The amount of data we're generating these days is unbelievable: it's a known fact that we create 2.5 quintillion bytes of data every day, and this number is only going to grow. With the evolution of communication through social media, we generate tons and tons of data; the numbers are on your screen, and all of them are per-minute values. We post around 1.7 million pictures on Instagram per minute, and there are about 347,000 tweets per minute. That's a lot of data. We generate data while watching YouTube videos, sending emails, chatting; even the IoT devices in our homes, like Alexa, generate a lot of data. A single tap on your phone generates data. Not only that, but of all the data we generate, only about 21% is structured and well formatted; the rest is unstructured. The major sources of unstructured data include text messages from WhatsApp, Facebook likes, comments on Instagram, bulk emails, and so on. Now, the data we generate is used to grow a business: by analyzing and mining it, we can add more value to a business.
This is exactly what natural language processing and text mining are all about. Text mining and NLP are a subset of artificial intelligence wherein we try to understand the natural language text we get from text messages and other sources, in order to derive useful insights and grow businesses with them. So what exactly is text mining? Text mining is the process of deriving meaningful insights or information from natural language text. All the data we generate through text messages, emails and documents is written in natural language, and we use text mining and natural language processing to draw useful insights or patterns from such data in order to grow a business. Now let's understand where exactly we make use of natural language processing and text mining. Have you ever noticed that if you start typing a word on Google, you immediately get suggestions? This feature is known as autocomplete: it suggests the rest of the word to you. We also have spelling correction: here's an example of how Google recognizes a misspelling of "Netflix" and shows results for the keyword that matches your misspelling. Let me show you a couple more examples: predictive typing, spell checkers, features like autocorrect, and email classification are all applications of natural language processing. All of these involve processing the natural language we use and deriving useful information from it, or running businesses from it. Netflix uses natural language processing in a really clever way: it studies the reviews that customers give a particular movie and tries to figure out whether that movie is good or bad depending on the review. Netflix tries to understand the type of movies a person likes from the way they rate and review movies, and by understanding what type of review a person gives, Netflix recommends more movies they'll like. That's how important NLP has become. Now let's look at what exactly NLP is. NLP, which stands for natural language processing, is a field of computer science and artificial intelligence that deals with human language: it's the process of processing natural language in order to derive useful information from it. For those of you who have studied or heard of natural language processing, there is often confusion between text mining and NLP. Text mining is the process of deriving high-quality information from text, with the overall goal of turning text into data for analysis, and it is implemented using natural language processing techniques. There are various techniques in NLP that help us perform text mining; that's how the two are related. Natural language processing provides the techniques used to solve the problems of text mining and text analysis. Let's look at a couple more applications. Sentiment analysis is one of the major applications of natural language processing; Twitter, Facebook and Google all perform sentiment analysis.
Sentiment analysis is mainly used to analyze social media content, which can help us determine public opinion on a certain topic. Then we have chatbots: chatbots use natural language processing to convert human language into desirable actions. We also have machine translation: NLP is used in machine translation by studying the morphological analysis of each word and translating it into another language. Advertisement matching is also done using NLP, in order to recommend ads based on your history. Those are a few of the applications of NLP. Now let me tell you the basic terminology of natural language processing. Tokenization is the most basic step in natural language processing: it means breaking the data down into smaller chunks, or tokens, so they can be easily analyzed. First you break a complex sentence into words, then you consider the importance of each word with respect to the sentence, in order to produce a structural description of the input sentence. For example, say my sentence is "Tokens are simple" and I want to perform tokenization on it. What I do is split the sentence into its individual words and consider each word with respect to the sentence. This is done to simplify operations in natural language processing: it's always simpler to analyze a single token than an entire sentence. Then we have something known as stemming. Look at this example: we have words such as detection, detecting, detected and detections, and we all know that the root word for all of them is detect. A stemming algorithm does exactly that: it works by cutting off the ends or beginnings of words, taking into account a list of common prefixes and suffixes that can be found in inflected words. Stemming helps us analyze many words at once: detections, detected and detection all mean essentially the same thing, so we ease our analysis by removing prefixes and suffixes that don't change the meaning, keeping only the important part of the word. This is called stemming. Now, this cutting of words succeeds on some occasions but not always, which is why we say the stemming approach has limitations. In order to get over these limitations, we have a process known as lemmatization. Lemmatization, on the other hand, takes into consideration the morphological analysis of the word: it does not blindly cut the beginning or end of the word, it understands what the word means and only then reduces it. For example, consider the word recap: if we perform stemming on it, we may get cap, because a stemmer can strip the prefix re-. But cap and recap do not have the same meaning, do they? They have absolutely different meanings. That's why stemming is sometimes not considered the right thing to do. When it comes to lemmatization, though, it understands the meaning of recap, and only then does it make any change to the word. Basically, it groups together the different inflected forms of a word, called a lemma. Lemmatization is similar to stemming because it maps several words onto one common root.
But the output of a lemmatization process is always a proper word. An example of lemmatization is mapping gone, going and went to go: all of them mean go, so lemmatization outputs the word go. Next we have something known as stop words. Stop words are a set of commonly used words in any language, not just English. The reason stop words matter to many applications is that if we remove the words that are very commonly used in a given language, we can finally focus on the important words. For example, say you open Google and want a strawberry milkshake recipe. If, instead of typing "strawberry milkshake recipe", you type "how to make strawberry milkshake", Google will also try to find results for "how to" and "make"; whereas if you just type "strawberry milkshake recipe", you get the most relevant output. That's why it's always considered good practice in natural language processing to get rid of stop words: they just increase our computation and add extra work, and they're not very helpful when analyzing important documents, where we need to focus on the important keywords rather than on these commonly used words. Examples of stop words include the, how, when, why, not, yes and no. So in order to better analyze our data, we remove stop words. The last term I'm going to discuss is the document term matrix. It's important to create something known as a document term matrix in natural language processing: a DTM, or document term matrix, is a matrix that shows the frequency of words in particular documents. Say we're checking whether the words of the sentence "This is fun" appear in our documents: if a word is present in document 1, I put a 1 in the cell corresponding to that word. For example, document 2 contains "this" and "is" but not the word "fun"; similarly, document 4 has the word "this" but not the words "is" and "fun". So a document term matrix is like a frequency matrix for your documents, and during text analysis you always begin by building one: it lets you understand which words occur frequently and which words are important in the documents. So guys, those were a few key terms in natural language processing; the short sketch below shows what they look like in code.
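A short sketch of these terms in code, using NLTK and scikit-learn (common choices for this, though not named in the walkthrough); the example words and documents come from the explanations above.

```python
# Tokenization, stemming, lemmatization, stop words, and a document
# term matrix, sketched with NLTK and scikit-learn.
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import CountVectorizer

nltk.download("punkt")       # tokenizer models
nltk.download("stopwords")   # stop word lists
nltk.download("wordnet")     # dictionary used for lemmatization

# Tokenization: break a sentence into word tokens
print(word_tokenize("Tokens are simple"))        # ['Tokens', 'are', 'simple']

# Stemming: crude prefix/suffix stripping down to a root
stemmer = PorterStemmer()
print([stemmer.stem(w) for w in ["detection", "detecting", "detected"]])
# -> ['detect', 'detect', 'detect']

# Lemmatization: maps inflected forms to a proper dictionary word
lemmatizer = WordNetLemmatizer()
print([lemmatizer.lemmatize(w, pos="v") for w in ["gone", "going", "went"]])
# -> ['go', 'go', 'go']

# Stop word removal: drop very common words like "how" and "to"
query = word_tokenize("how to make strawberry milkshake")
print([w for w in query if w not in stopwords.words("english")])
# -> ['make', 'strawberry', 'milkshake']

# Document term matrix: word frequencies per document
docs = ["this is fun", "this is", "fun fun fun", "this"]
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(docs)
print(vectorizer.get_feature_names_out())        # the words (columns)
print(dtm.toarray())                             # the frequency matrix
```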
Now let's move to a use case. This is the problem statement, guys: we need to figure out whether banknotes are real or fake, and for that we'll use artificial neural networks. Obviously, we need some data in order to train our network, so let's see what the data set looks like. Over here I've put a screenshot of the data set with a few of its rows. The data were extracted from images taken of genuine and forged banknote-like specimens, and wavelet transform tools were then used to extract features from those images; these are the features I'm highlighting with my cursor. The final, last column represents the label, which tells us which class a pattern belongs to: whether it represents a fake note or a real note. Let's discuss these features and labels one by one. The first feature, the first column, is the variance of the wavelet transformed image. The second column is its skewness, the third is the kurtosis of the wavelet transformed image, and the fourth is the entropy of the image. As for the label, which is the last column over here: if the value is one, the pattern represents a real note, whereas a value of zero means it represents a fake note. So guys, let's move forward and see the various steps involved in implementing this use case. We'll first read the data set we have and define features and labels. After that, we're going to encode the dependent variable; and what is a dependent variable? It is nothing but your label. Then we'll divide the data set into two parts, one for training and another for testing. After that, we'll use TensorFlow data structures for holding features, labels, and so on; TensorFlow is a Python library used to implement deep learning models, or you can say neural networks. Then we'll write the code to implement the model, and once that's done, we'll train the model on the training data. We'll calculate the error, which is nothing but the difference between the model output and the actual output, and we'll try to reduce this error. Once the error becomes minimal, we'll make predictions on the test data and calculate the final accuracy. So guys, let me quickly open my PyCharm and show you what the output looks like. This is my PyCharm, guys; over here I've already written the code to execute the use case, so I'll go ahead and run it and show you the output. As you can see, with every iteration the accuracy increases. Let me just stop it right here. Now we'll move forward and understand why we need neural networks. In order to understand the need for neural networks, we're going to compare the approaches before and after them, and see the various problems that existed before neural networks. Earlier, conventional computers used an algorithmic approach: the computer follows a set of instructions in order to solve a problem, and unless the specific steps the computer needs to follow are known, the computer cannot solve it. So obviously we need a person who actually knows how to solve that problem and can provide the instructions to the computer. We first have to know the answer to the problem, or know how to overcome the challenge in front of us; only then can we provide instructions to the computer. This restricts the problem-solving capability of conventional computers to problems we already understand and know how to solve. But what about problems whose answer we have no clue about? That's where the traditional approach was a failure, and that's why neural networks were introduced. Now let's see the scenario after neural networks. Neural networks process information in a way similar to the human brain, and these networks actually learn from examples: you cannot program them to perform a specific task. They learn from their examples, from their experience, so you don't need to provide all the instructions; the network learns on its own, with its own experience. That's basically what a neural network does.
So even if you don't know how to solve a problem, you can train your network in such a way that, with experience, it learns how to solve it. That was a major reason why neural networks came into existence. Let's move forward and understand the motivation behind neural networks. These networks are inspired by neurons, which are nothing but your brain cells, though the exact working of the human brain is still a mystery. As I told you earlier, neural networks work like the human brain, hence the name; and similar to a newborn human baby who learns from his or her experience, we want a network to do the same, only much more quickly. So here's a diagram of a neuron. A biological neuron receives input from other sources, combines them in some way, performs a generally nonlinear operation on the result, and then outputs the final result. If you notice, these dendrites receive signals from the other neurons and transfer them to the cell body; the cell body performs some function, which could be summation or multiplication; and after performing that operation on the set of inputs, the result is transferred to the next neuron via the axon. Now let's understand what exactly artificial neural networks are. An artificial neural network is basically a computing system designed to simulate the way the human brain analyzes and processes information. Artificial neural networks have self-learning capabilities that enable them to produce better results as more data becomes available: if you train your network on more data, it will be more accurate. These neural networks actually learn by example, and you can configure your neural network for a specific application: pattern recognition, data classification, anything like that. Because of neural networks, a lot of new technology has evolved, from translating web pages into other languages, to virtual assistants that order groceries online, to conversing with chatbots. All of these things are possible because of neural networks. So in a nutshell, an artificial neural network is nothing but a network of various artificial neurons. Let me show you the importance of neural networks with two scenarios, before and after. Over here we have a machine, and we have trained it on four types of dogs, as you can see where I'm highlighting with my cursor. Once the training is done, we provide a random image to this machine that contains a dog, but this dog is not like the other dogs we trained our system on. Without neural networks, our machine cannot identify the dog in the picture, as you can see over here: basically, our machine gets confused and cannot figure out where the dog is. With neural networks, even though we haven't trained our machine on this specific dog, it can still identify certain features of the dogs we trained it on, match those features with the dog in this image, and identify it. This happens all because of neural networks; it's just an example to show you how important neural networks are. Now I know you must all be wondering how neural networks work, so let's move forward and understand that. Over here, I'll begin by explaining a single artificial neuron, which is called a perceptron.
So this is an example of a perceptron. Over here we have multiple inputs, x1, x2, and so on up to xn, and we have corresponding weights as well: w1 for x1, w2 for x2, and similarly wn for xn. Then what happens? We calculate the weighted sum of these inputs, and after doing that, we pass it through an activation function. The activation function provides a threshold value: above that value my neuron will fire, else it won't. So this is basically an artificial neuron, and when I talk about a neural network, it involves a lot of these artificial neurons, each with its own activation function and processing element. Now we'll move forward and understand the two modes of this perceptron, or single artificial neuron. One is training mode, the other is using mode. In training mode, the neuron can be trained to fire for particular input patterns, which means we actually train our neuron to fire on a certain set of inputs and not to fire on the other set of inputs; that's what training mode is. When I talk about using mode, it means that when a taught input pattern is detected at the input, its associated output becomes the current output: once the training is done and we provide an input the neuron has been trained on, it detects that input and provides the associated output. That's what using mode is. So first you need to train it; only then can you use your perceptron or your network. Those were the two modes, guys. Next, we'll look at the various activation functions available. I've listed three here, although there are many more. First is the step function: the moment your input is greater than a particular value, your neuron will fire, else it won't. The sigmoid and sign functions work similarly, each with its own shape. So these are three of the most commonly used activation functions. Next up, we're going to understand how a neuron learns from its experience. I'll give you a very good analogy to understand that, and later, when we talk about neural networks, or you could say multiple neurons in a network, I'll explain the math behind the learning and how it actually happens. For now, I'll explain with an analogy, and guys, trust me, the analogy is pretty interesting. As all of you must have guessed from the picture, these are two beer mugs, and all of you who love beer can relate to this analogy a lot. That's why I've chosen it, so that all of you can relate. All right, jokes apart: there's a beer festival happening near your house, and you badly want to go, but your decision depends on three factors. First, how is the weather, good or bad? Second, is your wife or husband going with you or not? And third, is any public transport available? Your decision of whether to go depends on these three factors. We'll consider them as inputs to our perceptron, and we'll consider the decision of going or not going to the beer festival as our output. So the first input is the weather, which we'll call x1: when the weather is good, it will be one, and when it is bad, it will be zero.
Similarly, whether your wife is going with you or not will be your x2: if she is going, it's one; if she's not going, it's zero. Likewise for public transport: if it's available, it's one, else it's zero. So those are the three inputs I'm talking about. Now let's see the output: the output will be one when you're going to the beer festival, and zero when you want to relax at home, have your beer at home only, and not go outside. So those are the two outputs: you're going, or you're not going. Now, what does a human brain do over here? Okay, fine, I need to go to the beer festival, but there are three things I need to consider. Will I give importance to all these factors equally? Definitely not: there will be certain factors with higher priority for me, and I'll focus more on those, whereas a few factors won't affect me that much. So let's prioritize our inputs, or factors. Here, our most important factor is the weather: if the weather is good, I love beer so much that I don't care whether my wife is going with me or whether there's public transport available; I love beer that much.

### [3:48:53](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=13733s) Recurrent Neural Networks

If the weather is good, I'm definitely going there; that means when x1 is high, the output will definitely be high. So how do we do that? How do we prioritize our factors, giving more importance to one input and less to another, in a perceptron or a neuron? We do it by using weights: we assign high weights to the more important factors or inputs, and low weights to those inputs that are not that important for us. So let's assign weights, guys: weight w1 is associated with input x1, w2 with x2, and similarly w3 with x3. As I told you earlier, the weather is a very important factor, so I'll assign it a pretty high weight and keep it at six. Similarly, w2 and w3 are not that important, so I'll keep them at 2 and 2. After that, I've defined a threshold value of five, which means that only when the weighted sum of my inputs is greater than five will my neuron fire, or you can say, only then will I go to the beer festival. All right, I'll use my pen and we'll see what happens when the weather is good. When the weather is good, our x1 is one and its weight is six, so we multiply them to get six. Then suppose my wife decides she's staying home, probably busy with cooking, and doesn't want to drink beer with me, so she's not coming; that input becomes zero, and 0 times 2 makes no difference, because it's just zero. Then suppose there's no public transport available either, so that's also 0 times 2. So what output do I get here? Six. And notice the threshold value: it is five, and six is definitely greater than five, which means my output will be one; my neuron fires, and I go to the beer festival. So even though those two inputs are zero for me, meaning my wife isn't willing to go and there's no public transport, the weather is good, and it has a very high weight value because it matters a lot to me. If that input is high, it doesn't really matter whether the other two are high or not: I will definitely go to the beer festival. Now let me explain a different scenario. Over here our threshold was five, but what if I change the threshold to three? In that scenario, suppose the weather is not good, so I give it a zero: 0 times 6. But my wife is coming and public transport is available: 1 times 2 plus 1 times 2 equals 4, which is definitely greater than three. So my output will again be one, meaning I will definitely go to the beer festival even though the weather is bad, and my neuron will fire. So those are the two scenarios I've discussed with you. There can be many other ways to assign weights for your problem or your learning algorithm; these are just two ways of assigning weights and prioritizing the inputs or factors on which your output depends. Obviously, in real life, not all inputs or factors are equally important to you, so you prioritize them, and in a perceptron you do that by giving a higher weight. This is just an analogy so that you can relate a perceptron to real life; we'll discuss the math behind how a network or a neuron actually learns, how the weights get updated and how the output changes, later in the session. The little sketch below puts this beer-festival perceptron into code.
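A tiny sketch of the beer-festival perceptron; the weights (6, 2, 2) and thresholds are the values used in the analogy above.

```python
# The beer-festival perceptron: fire (output 1) if the weighted sum
# of the inputs exceeds the threshold, otherwise stay home (output 0).
def perceptron(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

weights = [6, 2, 2]  # weather matters most; wife and transport matter less

# Scenario 1: good weather, no wife, no transport; threshold 5
print(perceptron([1, 0, 0], weights, threshold=5))  # 1 -> going (6 > 5)

# Scenario 2: bad weather, wife and transport available; threshold 3
print(perceptron([0, 1, 1], weights, threshold=3))  # 1 -> going (4 > 3)
```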
But my aim is to make you see that you can relate a real-life problem to a perceptron. Real-life problems are not that easy, though; the problems we actually face are very complex, and to solve them a single neuron is definitely not enough. We need networks of neurons, and that's where the artificial neural network, or you can say the multi-layer perceptron, comes into the picture. So let's discuss the multi-layer perceptron, the artificial neural network. This is how an artificial neural network actually looks: we have multiple neurons present in different layers. The first layer is always your input layer, where you feed in all of your inputs. Then we have the first hidden layer, then the second hidden layer, and then the output layer. The number of hidden layers depends on your application, on what you're working on and what your problem is; that determines how many hidden layers you'll have. Let me explain what is actually happening here. You provide some input to the first layer, which is your input layer. After some function is applied, the outputs of these neurons become the inputs to the next layer, which is hidden layer 1. The hidden layers also have various neurons, with different activation functions, and they perform their own functions on the inputs they receive from the previous layer. Then the output of this layer becomes the input to the next hidden layer, hidden layer 2; similarly, the output of that hidden layer becomes the input to the output layer, and finally we get the output. This is how an artificial neural network looks. Now let me explain this with an example: image recognition using neural networks. What happens here? We feed a lot of images into our input layer. The input layer detects patterns of local contrast and feeds them to the next layer, hidden layer 1. In hidden layer 1, facial features are recognized: eyes, nose, ears, things like that. That is again fed as input to the next hidden layer, where those features are assembled into a face, and then we get the output: the face is recognized properly. If you notice, with every layer we are getting a more abstract, more generalized version of the input. So this is how an artificial neural network works, and there's a lot of training and learning involved, which I'll show you now. How do we actually train a network? The most common algorithm for training a network is called back propagation. What happens in back propagation: after taking the weighted sum of the inputs, passing it through an activation function and getting the output, we compare that output to the actual output we already know. We figure out how much the difference is; we calculate the error. Based on that error, we propagate backwards and see what happens when we change a weight: will the error decrease or will it increase, and does it increase when we increase the value of the variables or when we decrease them?
So we calculate all those things and update our variables in such a way that our error becomes minimal, and it takes a lot of iterations. Trust me, guys: we compute the output many times, compare the model output with the actual output, propagate backwards, change the variables, compute the output again, and compare it again with the desired or actual output. This process keeps repeating until we reach the minimum error. There's an example in front of you on your screen; don't be scared of the terms used, I'll explain it with the example. Over here we have 0, 1 and 2 as inputs, and our desired output, the output we already know, is 0, 2 and 4. We can figure out that the desired output is nothing but twice the input, but I'm training a computer to do that, and the computer is not a human. So what happens? I initialize the weight; say I keep the value as three. The model output will then be: 3 times 0 is 0, 3 times 1 is 3, 3 times 2 is 6. Obviously, that's not equal to your desired output, so we check the error. The error we've got here is 0, 1 and 2, which is nothing but the difference: 0 minus 0 is 0, 3 minus 2 is 1, 6 minus 4 is 2. This is called the absolute error. After squaring this error, we get the squared error, which is 0, 1 and 4. Now what do we need to do? We need to update the variables. We've seen that the output we got is different from the desired output, so we need to update the value of the weight: instead of three, our computer makes it four. With the value at four, we get the model output 0, 4 and 8, and we see that the error has actually increased instead of decreasing: after updating the variable, the squared error is now 0, 4 and 16, where earlier it was 0, 1 and 4. That means we cannot increase the weight value right now. But if we decrease it and make it two, we get an output that is exactly equal to the desired output. Is it always the case, though, that we only need to decrease the weight? Definitely not. In this particular scenario, whenever I increase the weight the error increases, and whenever I decrease it the error decreases, but as I've told you, this is not the case every time: sometimes you need to increase the weight as well. So how do we determine that? All right, fine, guys, this is how a computer decides whether it has to increase or decrease a weight. What happens here? This is a graph of squared error versus weight. Suppose your squared error is somewhere here, and your computer starts increasing the weight in order to reduce the squared error, and it notices that whenever it increases the weight, the squared error decreases. It keeps increasing the weight until the squared error reaches a minimum value; after that, when it tries to increase the weight further, the squared error increases. At that point, our network recognizes that whenever it increases the weight beyond this point, the error increases, so it stops right there, and that becomes our weight value. Similarly, there can be one more scenario: suppose we increase the weight, but the squared error increases right away. Then we cannot increase the weight; the computer realizes, okay, fine, whenever I increase the weight, the squared error increases, so it goes in the opposite direction. It starts decreasing the weight and keeps on doing that until the squared error becomes minimal, and the moment it decreases more, the squared error increases again. Our network will note that whenever it decreases the weight beyond that point, the squared error increases, so that point becomes our final weight value. So guys, this is what back propagation, in a nutshell, is. The toy sketch below replays this exact example in code.
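Here's a toy replay of the example: inputs [0, 1, 2], desired outputs [0, 2, 4], and a single weight w that we nudge up or down to minimize the squared error. The step-halving search is an illustrative stand-in for gradient descent, not the exact update rule a real network uses.

```python
# Toy weight search: try moving w in both directions and keep
# whichever direction reduces the squared error.
inputs = [0, 1, 2]
desired = [0, 2, 4]          # desired output is twice the input

def squared_error(w):
    """Sum of squared differences between model output w*x and desired."""
    return sum((w * x - d) ** 2 for x, d in zip(inputs, desired))

w, step = 3.0, 1.0           # start with the randomly initialized weight 3
for _ in range(100):
    if squared_error(w + step) < squared_error(w):
        w += step            # increasing the weight reduced the error
    elif squared_error(w - step) < squared_error(w):
        w -= step            # decreasing the weight reduced the error
    else:
        step /= 2            # neither direction helped: take smaller steps

print(w, squared_error(w))   # converges to w = 2.0 with zero error
```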
So we'll move forward, and now is the correct time to understand how to implement the use case that I was talking about in the beginning: how to determine whether a currency note is fake or real. For that I'll open my PyCharm. This is my PyCharm again, guys. Let me just close this. All right, so this is the code that I've written in order to implement the use case. Over here, what do we do? First we import the important libraries that are required: Matplotlib is used for visualization, TensorFlow we know is for implementing the neural network, NumPy is for arrays, pandas is for reading the dataset, and similarly sklearn is for label encoding as well as for shuffling, and also to split the dataset into training and testing parts.

So we'll begin by first reading the dataset, as I told you earlier when I was explaining the steps. I'll use pandas to read the CSV file which has the dataset. After that I'll define features and labels: x will hold my features and y will contain my label. Basically, x includes all the columns apart from the last column, which is the fifth one; because indexing starts from zero, we have written 0 till 4, so it won't include column index 4, which is the label column. So our last column will be our label. Then what do we need to do? We need to encode the dependent variable. The dependent variable, as I've told you earlier, is nothing but your label. I've discussed encoding in the TensorFlow tutorial; you can go through it to get to know why and how we do that. So we have read the dataset; next we need to split it into training and testing sets. And these next steps are optional: you can print the shape of your training and test data, and if you don't want to, that's still fine. Then we have defined the learning rate; the learning rate controls the step size in which the weights are updated. Epochs means iterations. Then we have defined cost_history, which is an empty NumPy array of shape one holding float-type values. Then we have defined n_dim, which is nothing but x.shape along axis 1, meaning the number of columns, our features, and we print that. After that we have defined the number of classes: there can be only two classes, whether the note is fake or real. And this model path I've given in order to save my model; I've just given a path where I need to save it, here in the current working directory.

Now it's time to actually define our neural network. We first define the important parameters, like the number of hidden layers and the number of neurons in each hidden layer. I'll take 10 neurons in every hidden layer, and I'm taking four layers like that. Then x will be my placeholder, and the shape of this particular placeholder is (None, n_dim): the n_dim value I get from here, and None can be any value.
I'll define one variable w and initialize it with zeros, and this will be the shape of my weights. Similarly for the bias: this will be its particular shape. And there will be one more placeholder, y_, which will be used to provide the actual output. There will be our model output and there will be our actual output, which we use in order to calculate the difference, right? So we feed the actual values of the labels into this particular placeholder y_.

And now we'll define the model. Over here we have named the function multilayer_perceptron, and in it we first define the first hidden layer, which we name layer_1; it is nothing but the matrix multiplication of x and the weights h1 of hidden layer 1, added to the biases b1, and after that we pass it through a sigmoid activation function. Similarly, in layer 2 we have the matrix multiplication of layer_1 and the weights h2. If you notice, layer 1 was the network layer just before layer 2, right? So the output of layer 1 becomes the input to layer 2, and that's why we have written layer_1; it is multiplied by the weights h2 and then added to the bias. Similarly for the next hidden layer, and for the last layer as well, but over here we are going to use the ReLU activation function instead of sigmoid. Then we define the weights and biases. This is how we define the weights: weights h1 will be a variable drawn from a truncated normal with the shape (n_dim, n_hidden_1); these are nothing but the shapes. After that we have defined the biases as well. Then we need to initialize all the variables. All these things I've discussed in brief when I was talking about TensorFlow, so you can go through the TensorFlow tutorial at any point if you have any question; we have discussed everything there. Since in TensorFlow we need to initialize variables before we use them, that's how we do it: we first define the initializer and then run it, and that's when your variables actually get initialized. After that we create a saver object, and then finally I call my model.

Then comes the part where the training happens: the cost function. The cost function is nothing but, you can say, the error that is calculated between the actual output and the model output. So y is our model output and y_ is the actual output, the output that we already know. Then we use a gradient descent optimizer to reduce the error, we create a session object, and finally we run the session. For every epoch we calculate the change in the error as well as the accuracy on the training data. After we have calculated the accuracy on the training data, we plot it for every epoch, and after plotting that we print the final accuracy, which will be on our test data: using the same model we make predictions on the test data, and then we print the final accuracy and the mean squared error. So let's go ahead and execute this, guys.
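Before we look at the results, here is a condensed TensorFlow 1.x-style sketch of the pieces just described. Treat it as a sketch under assumptions (the initializers and learning rate are placeholders), not the video's exact code:

```python
import tensorflow as tf   # TensorFlow 1.x API, as used in the video

n_dim, n_class = 4, 2     # four features; two classes: fake vs. real note
n_hidden = 10             # 10 neurons in each of the 4 hidden layers
learning_rate = 0.01      # assumed value

x  = tf.placeholder(tf.float32, [None, n_dim])
y_ = tf.placeholder(tf.float32, [None, n_class])   # actual (one-hot) labels

weights = {
    'h1': tf.Variable(tf.truncated_normal([n_dim, n_hidden])),
    'h2': tf.Variable(tf.truncated_normal([n_hidden, n_hidden])),
    'h3': tf.Variable(tf.truncated_normal([n_hidden, n_hidden])),
    'h4': tf.Variable(tf.truncated_normal([n_hidden, n_hidden])),
    'out': tf.Variable(tf.truncated_normal([n_hidden, n_class])),
}
biases = {
    'b1': tf.Variable(tf.truncated_normal([n_hidden])),
    'b2': tf.Variable(tf.truncated_normal([n_hidden])),
    'b3': tf.Variable(tf.truncated_normal([n_hidden])),
    'b4': tf.Variable(tf.truncated_normal([n_hidden])),
    'out': tf.Variable(tf.truncated_normal([n_class])),
}

def multilayer_perceptron(x):
    # Each layer's output feeds the next; sigmoid on the first hidden
    # layers, ReLU on the last one, as described above.
    layer_1 = tf.nn.sigmoid(tf.matmul(x, weights['h1']) + biases['b1'])
    layer_2 = tf.nn.sigmoid(tf.matmul(layer_1, weights['h2']) + biases['b2'])
    layer_3 = tf.nn.sigmoid(tf.matmul(layer_2, weights['h3']) + biases['b3'])
    layer_4 = tf.nn.relu(tf.matmul(layer_3, weights['h4']) + biases['b4'])
    return tf.matmul(layer_4, weights['out']) + biases['out']

y = multilayer_perceptron(x)
cost = tf.reduce_mean(tf.square(y - y_))    # error between model and actual
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(init)   # variables must be initialized before use
    # training loop: sess.run(train_step, feed_dict={x: train_x, y_: train_y})
```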
All right, so the training is done, and this is the graph we have got for accuracy versus epoch: the y-axis represents accuracy and the x-axis represents the epoch. We have taken 100 epochs, and our accuracy has reached somewhere around 99%. With every epoch it keeps increasing, apart from a couple of instances. So the more data you train your model on, the more accurate it will be. Let me just close it. Now the model has also been saved where I wanted it to be. This is my final test accuracy, and this is the mean squared error. All right, these are the files that appear once you save your model; these are the four files that I've highlighted. Now what we need to do is restore this particular model, and I've explained in detail how to restore a model that you have already saved. Over here, what I'll do is take some random range; I've taken it from 754 to 768. All the values in the rows from 754 to 768 will be fed to our model, and our model will make predictions on them. So let us go ahead and run this. When I restore my model, it seems that it is 100% accurate for the values I have fed in: whatever was given as input to my model, it has correctly identified its class, whether it's a fake note or a real note, because zero stands for a fake note and one stands for a real note. Okay, so the original class is what is there in my dataset, which is zero, and the prediction my model has made is also zero; that means it is fake, so the accuracy becomes 100%. Similarly for the other values as well. Fine guys, so this is how we implement the use case that we saw in the beginning.

Now, in the slide you can notice that I've listed only two applications, although there are many more. First, neural networks in medicine. Artificial neural networks are currently a very hot research area in medicine, and it is believed that they will receive extensive application in biomedical systems in the next few years. Currently the research is mostly on modeling parts of the human body and recognizing diseases from various scans: for example cardiograms, CAT scans, ultrasonic scans, and so on. The research is going on mostly in two major areas. The first is modeling and diagnosing the cardiovascular system: neural networks are used experimentally to model the human cardiovascular system. Diagnosis can be achieved by building a model of the cardiovascular system of an individual and comparing it with real-time physiological measurements taken from the patient. And trust me guys, if this routine is carried out regularly, potentially harmful medical conditions can be detected at an early stage, which makes the process of combating the disease much easier. Apart from that, neural networks are currently being used in electronic noses as well. Electronic noses have several potential applications in telemedicine. Let me just give you an introduction to telemedicine: telemedicine is the practice of medicine over long distances via a communication link. So what would the electronic noses do? They would identify odors in the remote surgical environment, and these identified odors would then be electronically transmitted to another site, where an odor-generation system would recreate them. Because the sense of smell can be an important sense to the surgeon, telesmell would enhance telepresence surgery. So these are two ways in which you can use neural networks in medicine. They are used in business as well, guys. Business is basically a diverse field with several general areas of specialization, such as accounting or financial analysis.
Almost any neural network application would fit into one business area or financial analysis. There is real potential for using neural networks for business purposes, including resource allocation and scheduling. I've listed two major areas where they can be used. One is marketing: there is a marketing application which has been integrated with a neural network system. The Airline Marketing Tactician (AMT) is a computer system made of various intelligent technologies, including expert systems. A feed-forward neural network integrated with the AMT was trained using backpropagation to assist the marketing control of airline seat allocation. So neural networks have wide applications in marketing as well. The second area is credit evaluation. I'll give you an example here: the HNC company has developed several neural network applications, and one of them is a credit scoring system which increased the profitability of the existing model by up to 27%. So these are a few applications I'm telling you about, guys; neural networks really are the future. People are talking about neural networks everywhere, and especially after the introduction of GPUs and with the amount of data we have now, they are catching on everywhere.

So let's understand how exactly a computer reads an image. This is an image of the New York skyline; I personally love this picture. When a human sees this image, he'll first notice that there are a lot of buildings, different colors, and so on. But how will a computer read this image? Basically, there are three channels: one red, one green, and one blue, popularly known as RGB. Each of these channels has its own respective pixel values, as you can see over here. So when I say that the image size is B × A × 3, it means there are B rows, A columns and three channels. If somebody tells you that the size of an image is 28 × 28 × 3, it means that it has 28 rows, 28 columns and three channels. This is how a computer sees an image, and this is for colored images; for black-and-white, grayscale images, we have only one channel.

So let's move forward and see why we can't use fully connected networks for image classification. Consider an image of 28 × 28 × 3 pixels. When I feed this image to a fully connected network like this one, the total number of weights required per neuron in the first hidden layer is 2,352; you can just go ahead and multiply it yourself. But in real life, images are not that small; the images we actually deal with are easily 200 × 200 × 3 pixels or larger. If I take a 200 × 200 × 3 image and feed it to a fully connected network, the number of weights required per neuron at the first hidden layer alone is 120,000, guys. So we need to deal with a huge number of parameters, and obviously we require more neurons, and that can eventually lead to overfitting. That's why we cannot use fully connected networks for image classification.
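You can sanity-check both of those weight counts with a couple of lines of NumPy:

```python
import numpy as np

img = np.zeros((28, 28, 3))    # 28 rows, 28 columns, 3 channels (RGB)
print(img.size)                # -> 2352 input values, so 2352 weights
                               #    per fully connected neuron

big = np.zeros((200, 200, 3))
print(big.size)                # -> 120000 weights per neuron
```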
Now let's see why we need convolutional neural networks. In a convolutional neural network, a neuron in a layer is only connected to a small region of the layer before it. So if you consider this particular neuron that I'm highlighting right now, it is only connected to three other neurons, unlike a fully connected network, where this particular neuron would be connected to all five of these neurons. Because of this, we handle far fewer weights, and in turn we need fewer neurons as well. So let us understand what exactly a convolutional neural network is. Convolutional neural networks are a special type of feed-forward artificial neural network, inspired by the visual cortex. The visual cortex is a small region in our brain, present somewhere here where you can see the bulb. An experiment was conducted, and people got to know that the visual cortex contains small regions of cells that are sensitive to specific regions of the visual field. What I mean by that is, for example, that some neurons in the visual cortex fire when exposed to vertical edges, and some fire when exposed to horizontal or diagonal edges. That is nothing but the motivation behind convolutional neural networks.

So now let us understand how exactly a convolutional neural network works. It generally has four layers: the convolution layer, the ReLU layer, the pooling layer and the fully connected layer. We'll understand each of these layers one by one, taking the example of a classifier that can classify an image of an X as well as an O; with this example we'll cover all four layers. So let's begin, guys. Now, there are certain trickier cases: X can be represented in these four forms as well, right? These are nothing but deformed images of X, and similarly for O. I want to classify these images as either X or O as well, because even this is an X, this is an X, this is an X; all of these are deformed images, but they are still X, and I want my classifier to classify them as X. So if you notice here, this is a proper image of an X, which is effectively equal to this particular deformed X, and the same goes for this O as well. Now, we know that a computer understands an image through the numbers at each pixel. So what we'll do is assign the value minus one to the white pixels and the value one to the black pixels. When we use normal techniques to compare these two images, one a proper image of X and the other a deformed image of X, we find that the computer is not able to classify the deformed image of X correctly. Why? Because it is comparing it pixel by pixel with the proper image of X. When you go ahead and compare the pixel values of both of these images, you get something like this; basically, our computer is not able to recognize whether it is an X or not. Now, what do we do with the help of a CNN? We take small patches of our image; these patches or pieces are known as features, or filters. By finding rough feature matches in roughly the same positions in two images, the CNN gets a lot better at seeing similarity than whole-image matching schemes. What I mean by that is: we have these filters that you can see. Consider the first filter; it is exactly equal to the feature, the part of the image, in the deformed image as well. So this is our proper image and this is our deformed image. All right.
So this particular filter, this particular part of the image, is exactly equal to that part of the deformed image. The same goes for this particular feature, or filter, as well. And similarly we have this third filter, which is equal to this particular part of the deformed image. All right. So let's move forward and see which features we'll be taking in our example. We'll be considering these three features, or filters: this is a diagonal filter, this is again a diagonal filter, and this one is a small central patch. We'll take these three filters and move forward. What we are going to do is compare these features, the small pieces of the bigger image, by putting them on the input image; if they match, the image will be classified correctly.

Now we'll begin, guys. The first layer is the convolution layer, and these are the first two steps of this layer: first we line up the feature on the image, and then we multiply each image pixel by the corresponding feature pixel. Let me explain with an example. This is the first diagonal feature we'll take. We put this particular feature on our image of X, and we multiply the corresponding pixel values: 1 multiplied by 1 gives 1, and we put it in another matrix. Similarly, we move forward and multiply −1 by −1, which gives 1, as you can see. We keep multiplying the corresponding values, −1 × −1, then again −1 × −1, and we complete this whole process until we finish the matrix. Once we are done with the multiplication of all the corresponding pixels of the feature and the image, we follow two more steps: we add them all up and divide by the total number of pixels in the feature. So after multiplying the corresponding pixel values, we add all these values, divide by the total number of pixels, and we get some value. Then our next step is to create a map and put the value of the filter at that particular place. We saw that after multiplying the pixel values of the feature with the corresponding pixel values of the image, we get the output 1, so we place 1 here. Similarly, we move this filter throughout the image: next up here, after that here, everywhere on the image, following the same process. All right, so here is one more example where I've moved my filter to a position in the middle, and after doing that I've got outputs like 1, 1, −1 and so on. Over here, if you notice, I got −1 a couple of times, due to which the output comes to 0.55, so I place 0.55 here. Similarly, after moving the filter throughout the image, I got this particular matrix. And this is for one particular feature. After performing the same process for the other two filters as well, I've got these two more outputs, so we have three outputs after passing through the convolution layer.
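Here is a minimal NumPy sketch of that line-up, multiply, add-up and divide process, using a diagonal feature as an example:

```python
import numpy as np

def convolve(image, feature):
    """Slide `feature` over `image`; at each position, multiply the
    overlapping pixels, add them up, and divide by the pixel count."""
    fh, fw = feature.shape
    ih, iw = image.shape
    out = np.zeros((ih - fh + 1, iw - fw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + fh, c:c + fw]
            out[r, c] = np.sum(patch * feature) / feature.size
    return out

# A tiny diagonal feature, as in the X example (pixels are +1 / -1).
feature = np.array([[ 1, -1, -1],
                    [-1,  1, -1],
                    [-1, -1,  1]])
# A perfect match scores 1.0; partial matches score less
# (a couple of mismatches gives values around 0.55, as in the slides).
print(convolve(feature, feature))   # -> [[1.]]
```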
Let me give you a quick recap of what happens in the convolution layer. We take the three features, and one by one we move each feature through the entire image; while moving it, we multiply the pixel values of the image with the corresponding pixel values of the filter, add them up, and divide by the total number of pixels to get the output. When we do that for all the filters, we get these three outputs. All right, so let's move forward and see what happens in the ReLU layer. This is the ReLU layer, guys, and people who have gone through the previous tutorial already know what it is. Let me just give you a quick introduction to the ReLU layer. ReLU is nothing but an activation function. What I mean by that is: it only activates a node if the input is above a certain quantity. While the input is below zero, the output is also zero, and when the input rises above that threshold, it has a linear relationship with the dependent variable. I'll explain with an example; we have a graph of the ReLU function here.

### [4:17:44](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=15464s) Convolutional Neural Network

The function says that f(x) is equal to 0 if x is less than zero, and f(x) is equal to x when x is greater than or equal to zero. So whatever values I have that are below zero will in turn become zero, and for whatever values are above zero, the function value will be equal to that particular value. If my x value is −3, it is definitely less than 0, so f(x) becomes 0. Similarly, if my x value is −5, again it is less than 0, so f(x) becomes 0. But when I take 3 as my x value, f(x) becomes equal to x, which is 3, so over here I'll have 3. Again, if I take my x value as 5, it is greater than or equal to zero, so f(x) becomes equal to x, which is 5. This is how the ReLU function works. So why are we using the ReLU function here? We want to remove all the negative values from the outputs that we got through the convolution layer. We'll take the first output, the one we got by moving one feature throughout the image. Over here I'm going to remove all the negative values: you can see that a value was −0.11 before, and I've converted it to zero. I repeat the whole process for the entire matrix, and once I'm done with that, I get this particular output. Remember, this is only for the output of one feature; when we were doing convolution we were using three features, right? So this is the output for only one filter. After doing it for the outputs of the other two filters as well, we get these two more outputs, so in total we have three outputs after passing through the ReLU activation function.

Next up, we'll see what exactly the pooling layer is. In the pooling layer, what do we do? We take a window of size 2 and move it across the entire matrix that we got after the ReLU layer, and we take only the maximum value from each window, so that we can shrink the image. What we are actually doing is reducing the size of our image. Let me explain with an example. This is one output that we got after the ReLU layer, and over here we have taken a window size of 2 × 2. When we keep this window at this particular position, we see that 1 is the highest value, so we keep 1 here, and we repeat the same process for the next window position. Over here the maximum value is 0.33, so 0.33 comes in. If you notice, earlier we had a 7 × 7 matrix, and now we have reduced it to a 4 × 4 matrix. After doing that for the entire image, we get this as our output; this output comes from moving our window throughout the matrix we got after the ReLU layer. And when we repeat this process for all three outputs we got from the ReLU layer, we get this particular output after the pooling layer. So basically, we have shrunk our image to a 4 × 4 matrix.
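A minimal NumPy sketch of those two layers is below; note this bare-bones version skips the edge handling the slides use (which is how a 7 × 7 map becomes 4 × 4 there), but the idea is the same:

```python
import numpy as np

def relu(feature_map):
    # Replace every negative value with zero; keep the rest unchanged.
    return np.maximum(feature_map, 0)

def max_pool(feature_map, size=2, stride=2):
    # Slide a 2x2 window and keep only the largest value in each
    # window, shrinking the feature map.
    h, w = feature_map.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            window = feature_map[r * stride:r * stride + size,
                                 c * stride:c * stride + size]
            out[r, c] = window.max()
    return out

fmap = np.array([[ 0.77, -0.11,  0.33, -0.11],
                 [-0.11,  1.00, -0.11,  0.33],
                 [ 0.33, -0.11,  0.77, -0.11],
                 [-0.11,  0.33, -0.11,  1.00]])
print(max_pool(relu(fmap)))   # -> [[1.0, 0.33], [0.33, 1.0]]
```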
Now comes the tricky part: what we are going to do now is stack up all these layers. We have discussed the convolution layer, the ReLU layer and the pooling layer, so I'll just give you a brief recap. In the convolution layer we took three features, and one by one we moved each filter throughout the image; while moving it, we continuously multiplied the image pixel values with the corresponding filter pixel values, added them up, and divided by the total number of pixels. With that we got three outputs after the convolution layer. Then we passed those three outputs through a ReLU layer, where we removed the negative values; after removing the negative values we again have three outputs. Then we passed those three outputs through the pooling layer, where we are basically trying to shrink the image: we took a window of size 2 × 2, moved it through all three outputs we got from the ReLU layer, took only the maximum pixel value in each window, and put it in a different matrix so that we get a shrunken image. After passing through the pooling layer we got a 4 × 4 matrix, and since we took three features in the beginning, we have three outputs after the pooling layer.

Next up, we stack up all the layers. So let's do that. After passing through convolution, ReLU and pooling, we have this 4 × 4 matrix; this was our input image. Now, when we add one more round of convolution, ReLU and pooling, we shrink our image from 4 × 4 to 2 × 2, as you can notice here. Now we use the fully connected layer. What happens in the fully connected layer? The actual classification happens here, guys. What we do is take the shrunken images and put them into a single list. So this is what we got after passing through two rounds of convolution, ReLU and pooling, and we convert it into a single list, or a vector. How do we do that? We read the values one by one: we take the first value, 1, then 0.55, then 1 again, and so on, until all twelve values sit in one list. This is nothing but a vector, or you can say a list. If you notice, there are certain values in my list which are high for X, and similarly, if I repeat the entire process we discussed for an O, certain different values will be high. For X, the 1st, 4th, 5th, 10th and 11th elements of the vector are high; for O, the 2nd, 3rd, 9th and 12th elements are high. So we now know that if we have an input image whose 1st, 4th, 5th, 10th and 11th element values are high, we can classify it as X; similarly, if our input image gives a list in which the 2nd, 3rd, 9th and 12th element values are high, we can classify it as O.

Now let me explain with an example. After the training is done, after doing the entire process for both X and O, our model is trained. Okay, so we give it a new input image, and that input image passes through all the layers; once it has passed, we get this 12-element vector, with values like 0.9, 0.65 and so on. Now how do we classify it, whether it is an X or an O? We compare this with the lists for X and O; we got those lists in the previous slide. If you notice, we have two different lists for X and O, and we compare the list of this new input image with each of them. So first let us compare it with X.
Now, as I've told you earlier, for X there are certain values which will be high, namely the 1st, 4th, 5th, 10th and 11th values. So I sum the 1st, 4th, 5th, 10th and 11th values of the X list, and I've got 5: 1 + 1 + 1 + 1 + 1. And now I sum the corresponding values of my input image vector as well: the first value is 0.9, the fourth is 0.87, the fifth is 0.96, the tenth is 0.89, and the eleventh is 0.94. After doing the sum of these values, I've got 4.56; when I divide this by 5, I get 0.91. Now, this is for X. When I do the same process for O: in O, the 2nd, 3rd, 9th and 12th element values are high, so when I sum those values I get 4, and when I sum the corresponding values in my input image, I get 2.07. When I divide that by 4, I get about 0.51. So now we notice that 0.91 is a higher value compared to 0.51: when we compared our input image with the values of X, we got a higher score than when we compared it with the values of O. So the input image is classified as X. All right.
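The same voting arithmetic in a few lines of Python; the five X-entries and the two sums are from the example, while the remaining vector entries are hypothetical filler chosen only so the sums match:

```python
import numpy as np

# 12-element vector for the new input image. Entries at the X and O
# positions come from the example; the rest are hypothetical filler.
v = np.array([0.9, 0.65, 0.45, 0.87, 0.96, 0.04,
              0.03, 0.75, 0.23, 0.89, 0.94, 0.74])

x_votes = [0, 3, 4, 9, 10]    # 1st, 4th, 5th, 10th, 11th entries vote for X
o_votes = [1, 2, 8, 11]       # 2nd, 3rd, 9th, 12th entries vote for O

x_score = v[x_votes].sum() / len(x_votes)   # 4.56 / 5 = 0.912
o_score = v[o_votes].sum() / len(o_votes)   # 2.07 / 4 ~ 0.51
print("X" if x_score > o_score else "O")    # -> X
```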
So now let us move towards our use case. This is our use case, guys: we are going to train our model on different images of dogs and cats, and once the training is done, we'll provide it an input and it will classify whether the input is of a dog or a cat. Let me tell you the steps involved. First, obviously, we need to download the dataset. After that, we write a function to encode the labels; labels are nothing but the dependent variable that we are trying to predict. In our training and testing data we obviously know the labels, right? Only on that basis can we train our model, so we encode those labels. After that we resize the images to 50 × 50 pixels and read them as grayscale images. Then we split the data: roughly 24,500 images for training and 500 for testing. Once this is done, we reshape the data appropriately for TensorFlow. I think everyone knows about TensorFlow by now: TensorFlow is a Python library for implementing deep learning models. Then we build the model and calculate the loss, which here is categorical cross-entropy. We reduce the loss using the Adam optimizer with the learning rate set to 0.001, train the deep neural network for 10 epochs, and finally make predictions.

All right, so I'll just quickly open my PyCharm and show you how the code looks. This is the code that I've written in order to implement the use case. In the beginning I import the libraries that I require. Once that is done, I define my training data and testing data: train and test contain my training data and testing data respectively. Then I've taken my image size as 50, defined the learning rate here, and given a name to my model; you can give it whatever name you want. The first thing we saw is that we need to encode the dependent variable, and that's what we are doing here: whenever the label is cat, it is converted to the array [1, 0], and when it is dog, it is converted to the array [0, 1]. Why are we actually encoding the label? Because our code cannot understand a categorical variable, so we need to encode it. Next, I resize my image to 50 × 50 and convert it to grayscale, and once this is done I split my dataset into training and testing parts. And here we are defining the model; I can just go ahead and throw in a comment here: building the model.

So basically, what we have done here is reshape our input to a 50 × 50 × 1 matrix, and that is the size of the input we are using, this input I'm talking about. Then we define a convolution layer with 32 filters of size five and a ReLU activation function, and after that we add a max-pooling layer. Then we repeat the same process, but over here we take 64 filters of size five, pass them through a ReLU activation, and again add a max-pooling layer. Then we repeat the same process with 128 filters, then with 64 filters, then with 32 filters, and after that we use a fully connected layer with 1,024 neurons. Finally, we use a dropout layer with a keep probability of 0.8 to finish our model; this is where the model is actually complete. Then we use the Adam optimizer to optimize the model, so whatever loss we have, we try to reduce it. This next part is for TensorBoard: we create some log files, and with those log files TensorBoard will create a pretty fancy graph for us that helps us visualize the entire model. Then we fit the model, with the number of epochs (iterations) defined as 10; we pass the model name, the input x_test to check the accuracy, and similarly the target y_test, the encoded labels associated with the test data. This is how we calculate the accuracy, and we try to reduce the loss as much as possible in TensorFlow. So our model is now complete; we are done with it. Next, I feed in some random inputs from the test data and validate whether my model predicts them correctly. I've already trained the model, because training takes a lot of time and I cannot do it here. You can see that the loss after the 10th epoch is 0.2973 and the accuracy is somewhere around 88%, which is pretty good, guys. I've done the prediction on the test data as well, so let me just show you that. This is the prediction it has made on a few of the images in the test data: a cat predicted as cat, predicted as cat, and there are certain dogs as well.
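For reference, here is a condensed tflearn-style sketch of the model just walked through; the exact hyperparameters in the video may differ slightly from these assumed values:

```python
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression

IMG_SIZE, LR = 50, 1e-3                          # 50x50 grayscale, lr 0.001

net = input_data(shape=[None, IMG_SIZE, IMG_SIZE, 1], name='input')
for n_filters in (32, 64, 128, 64, 32):          # five conv + pool blocks
    net = conv_2d(net, n_filters, 5, activation='relu')
    net = max_pool_2d(net, 5)
net = fully_connected(net, 1024, activation='relu')
net = dropout(net, 0.8)                          # keep probability 0.8
net = fully_connected(net, 2, activation='softmax')   # cat vs. dog
net = regression(net, optimizer='adam', learning_rate=LR,
                 loss='categorical_crossentropy', name='targets')

model = tflearn.DNN(net, tensorboard_dir='log')  # log dir feeds TensorBoard
# model.fit({'input': X_train}, {'targets': y_train}, n_epoch=10,
#           validation_set=({'input': X_test}, {'targets': y_test}))
```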
Why can't we use feed-forward networks here? Let us take the example of a feed-forward network used for image classification; we have trained this particular network to classify various images of animals. If you feed in an image of a dog, it will identify that image and provide a relevant label to it. Similarly, if you feed in an image of an elephant, it will provide a relevant label to that image as well. Now, if you notice, the new output that we have got, classifying the elephant, has no relation to the previous output for the dog; or you can say that the output at time t is independent of the output at time t − 1. As we can see, there is no relation between the new output and the previous output, so we can say that in feed-forward networks the outputs are independent of each other.

But there are scenarios where we actually need the previous output in order to get the new output. Let us discuss one such scenario. When you read a book, you understand each sentence only on the basis of your understanding of the previous words. So if I use a feed-forward network and try to predict the next word in a sentence, I can't do that. Why not? Because my output should actually depend on the previous outputs, but in a feed-forward network my new output is independent of the previous outputs; that is, the output at t + 1 has no relation to the outputs at t − 2, t − 1 and t. So we cannot use feed-forward networks for predicting the next word in a sentence. Similarly, you can think of many other examples where we need some information from the previous output in order to infer the new output; this is just one small example. So let's move forward and understand how we can solve this particular problem. Over here, what have we done? We have the input at t − 1; we feed it to our network and get the output at t − 1. At the next time stamp, time t, we have the input at time t, which is given to our network along with the information from the previous time stamp, t − 1, and that helps us get the output at t. Similarly, for the output at t + 1 we have two inputs: one is the new input that we give, and the other is the information coming from the previous time stamp, t, in order to get the output at time t + 1. And it can go on like this. Over here I have just drawn a generalized way to represent it: there's a loop through which the information from the previous time stamp flows, and this is how we can solve this particular challenge.

Now let us understand what exactly recurrent neural networks are. To understand recurrent neural networks, I'll take an analogy. Suppose your gym trainer has made a schedule for you, and the exercises are repeated every third day. This is the order of your exercises: the first day you'll be doing shoulders, the second day biceps, the third day cardio, and all these exercises are repeated in a proper order. Now, what happens when we use a feed-forward network to predict today's exercise? We provide inputs such as the day of the week, the month of the year and your health status, and we train our network on the exercises we have done in the past. Then there'll be a complex voting procedure involved that will predict the exercise for us, and that procedure won't be very accurate; whatever output we get won't be as accurate as we want. Now, what if I change my inputs and make my input the exercise I did yesterday? If I did shoulders yesterday, then definitely today I'll be doing biceps. Similarly, if I did biceps yesterday, today I'll be doing cardio, and if I did cardio yesterday, today I'll be doing shoulders.
Now there can be a scenario where you are unable to go to the gym for one day due to some personal reasons. What happens then? We go one time stamp back and feed in the exercise that happened the day before yesterday. If the day before yesterday was shoulders, then yesterday there would have been biceps exercises. Similarly, if biceps happened the day before yesterday, yesterday would have been cardio, and if cardio happened the day before yesterday, yesterday would have been shoulders. And this prediction, the prediction for the exercise that happened yesterday, is fed back to our network, and these predictions are used as inputs in order to predict today's exercise. Similarly, if you have missed the gym for, say, two days, three days or a week, you need to roll back: you go to the last day you went to the gym, figure out what exercise you did on that day, feed that in as input, and only then will you get the relevant output as to what exercise happens today.

Now, what I'll do is convert these things into a vector. What is a vector? A vector is nothing but a list of numbers. So this is the new information, guys, along with the information from the prediction at the previous time step; we need both of these in order to get the prediction at time t. Imagine I did shoulder exercises yesterday: then this entry will be one, this will be zero and this will be zero. The prediction will be the biceps exercise, because if I did shoulders yesterday, today it will be biceps, so my output will be 0, 1 and 0. And this is how vectors work, guys. So I hope you have understood this. Now, this is how a recurrent neural network looks: we have the new information along with the information from the previous time stamp. From the output we got at the previous time stamp we take certain information and feed it into our network as input, and that helps us get the new output. Similarly, from this new output we again take some information and feed it in as input to our network, along with the new information, to get the next prediction, and this process keeps repeating.

Now let me show you the math behind recurrent neural networks. This is the structure of a recurrent neural network, guys; let me explain what happens here. The general equation is h_t = g_h(W_i · x_t + W_R · h_(t−1) + b_h), where W_i and W_R are weight matrices and b_h is a bias. Consider that at time t = 0 we have the input x_0 and we need to figure out h_0. At t = 0 the equation would need h_(−1), and time can never be negative, so that term cannot be applied here; therefore h_0 = g_h(W_i · x_0 + b_h): we multiply W_i by x_0, add the bias b_h, and pass it through the function g_h to get h_0. After that, I want to calculate y_0. For y_0 I multiply h_0 by the weight matrix W_y, add a bias, and pass it through a function g_y: y_0 = g_y(W_y · h_0 + b_y). Now, at the next time stamp, t = 1, things become a bit tricky. Let me explain what happens here. At time t = 1 I have the input x_1 and I need to figure out h_1. For that I use the same equation: I multiply the weight matrix W_i by the input x_1, add W_R · h_(1−1), which is W_R · h_0 (and we know h_0 from the previous step), add the bias, and pass it through g_h: h_1 = g_h(W_i · x_1 + W_R · h_0 + b_h). Now this h_1 we'll use to get y_1.
We multiply h_1 by W_y, add a bias, and pass it through the function g_y to get y_1 = g_y(W_y · h_1 + b_y). Similarly, at the next time stamp, t = 2, we have the input x_2 and need to figure out h_2: we multiply the weight matrix W_i by x_2, add W_R · h_1 (which we got here) plus b_h, and pass it through g_h to get h_2 = g_h(W_i · x_2 + W_R · h_1 + b_h). From h_2 we calculate y_2 = g_y(W_y · h_2 + b_y): W_y times h_2 plus the bias, passed through g_y. And this is how a recurrent neural network works, guys.
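Here is a minimal NumPy sketch of that recurrence; the sizes and the tanh/identity choices for g_h and g_y are assumptions for illustration:

```python
import numpy as np

# Minimal sketch of the recurrence above, with made-up sizes.
n_in, n_hidden, n_out = 3, 4, 3
rng = np.random.default_rng(0)
W_i = rng.normal(size=(n_hidden, n_in))      # input  -> hidden
W_R = rng.normal(size=(n_hidden, n_hidden))  # hidden -> hidden (the loop)
W_y = rng.normal(size=(n_out, n_hidden))     # hidden -> output
b_h, b_y = np.zeros(n_hidden), np.zeros(n_out)

def step(x_t, h_prev):
    h_t = np.tanh(W_i @ x_t + W_R @ h_prev + b_h)  # h_t = g_h(W_i x_t + W_R h_{t-1} + b_h)
    y_t = W_y @ h_t + b_y                          # y_t = g_y(W_y h_t + b_y)
    return h_t, y_t

h = np.zeros(n_hidden)          # no history exists before t = 0
for x in np.eye(3):             # shoulders, biceps, cardio as one-hot inputs
    h, y = step(x, h)
    print(y.round(2))
```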

### [4:38:19](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=16699s) Introduction to TensorFlow

Now you must be thinking how to train a recurrent neural network. A recurrent neural network uses the backpropagation algorithm for training, but backpropagation happens at every time stamp, which is why it is commonly called backpropagation through time. I've discussed backpropagation in detail in the artificial neural network part, so over here I won't discuss it in depth; I'll just give you a brief recap. Now, with backpropagation there are certain issues, namely vanishing and exploding gradients. Let us see them one by one.

What happens in backpropagation? You calculate the error, which is nothing but the actual output that you already know minus the model output that you got through your model, squared. With that error, you find the change in error with respect to the change in a weight or any variable; we'll call it the weight here. The change of error with respect to the weight, dE/dw (the gradient), multiplied by the learning rate, gives you the change in weight, and you add that change in weight to the old weight to get the new weight. Obviously, what we are trying to do is reduce the error; for that we need to figure out what the change in error would be if our variables changed. That way we get the change in the variable and add it to our old variable to get the new variable. Now, what can happen over here? If the value of dE/dw, the gradient, or you can say the rate of change of error with respect to our weight, becomes much smaller than one, say 0.00-something, and you multiply it by the learning rate, which is also definitely smaller than one, then you get a change in weight which is negligible. There are certain cases, say you are trying to predict the next word in a sentence and the sentence is pretty long. For example, if I say "I went to France", then there are certain words, then "few of them speak ...", and I need to predict the word that comes after "speak". For that I need to go back in time and check what the context was, which is very complex, and due to that there will be a lot of iterations. Because of that, this change in weight becomes very small, so the new weight we get will be almost equal to the old weight: there won't be any weight update happening, and that is nothing but your vanishing gradient.

All right, I'll repeat it once more. In backpropagation you first calculate the error, the squared difference between the actual output and the model output. With that error, you figure out the change in error when you change a particular variable, say the weight: dE/dw. You multiply it by the learning rate to get the change in the weight, and you add that change to your old weight to get the new weight. This is what backpropagation is, guys; I've just given you a small introduction. Now consider a scenario where you need to predict the next word in a sentence, and your sentence is something like: "I have been to France", then there are a lot of words, then "few people speak", and you need to predict what comes after "speak". To do that, I need to go back and understand the context, what the sentence is talking about, and that is nothing but a long-term dependency. What happens with long-term dependencies? If dE/dw becomes very small, then when you multiply it by the learning rate η, which is again smaller than one, you get a Δw that is very small, negligible, and the new weight you get will be almost equal to the old weight. So I hope you're getting my point: there is no update of the weights, guys; the new weight will always be almost equal to the old weight, so there won't be any learning. That is nothing but your vanishing gradient problem. Similarly, when I talk about the exploding gradient, it is just the opposite of the vanishing gradient. What happens when your gradient dE/dw becomes very large, greater than one, and you have some long-term dependencies? At that time dE/dw keeps on increasing, Δw becomes large, and because of that the new weight that comes out will be very different from your old weight.
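A toy illustration of why this happens: backpropagating through many time stamps multiplies many per-step factors together, so gradients shrink or blow up exponentially.

```python
# Multiplying many per-step gradient factors together.
for factor, name in [(0.5, "vanishing"), (1.5, "exploding")]:
    grad = 1.0
    for _ in range(30):        # 30 time stamps of backprop through time
        grad *= factor         # each step scales the gradient by `factor`
    print(f"{name}: {grad:.3g}")
# vanishing: ~9.31e-10  -> weight updates become negligible
# exploding: ~1.92e+05  -> weight updates blow up
```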
So these two are the problems with backpropagation through time. Now let us see how to solve them. Exploding gradients can be solved with the help of truncated BPTT (backpropagation through time): instead of starting backpropagation at the last time stamp, we can choose a smaller number of time stamps, like 10. Or we can clip the gradients at a threshold, so there is a threshold value at which we clip the gradients, and we can adjust the learning rate as well. For the vanishing gradient, we can use the ReLU activation function, and we can also use LSTMs and GRUs.

Now, let us understand what exactly LSTMs are. Guys, we saw the two limitations of recurrent neural networks; now we'll understand how we can solve them with the help of LSTMs. What are LSTMs? Long short-term memory networks, usually just called LSTMs, are nothing but a special kind of recurrent neural network, and these networks are capable of learning long-term dependencies. Now, what are long-term dependencies? I've discussed that in the previous slide, but I'll explain it here as well. Sometimes you only need to look at recent information to perform the present task. Let me give you an example: consider a language model trying to predict the next word based on the previous ones. If we are trying to predict the last word in the sentence "the clouds are in the sky", we don't need any further context; it's pretty obvious that the next word is going to be "sky". In such cases, where the gap between the relevant information and the place where it's needed is small, an RNN can learn to use the past information, and there won't be problems like vanishing and exploding gradients. But there are cases where we need more context. Consider trying to predict the last word in the text "I grew up in France", then some words, then "I speak fluent French". The recent information suggests that the next word is probably the name of a language, but if we want to narrow down which language, we need the context of France from further back. It's entirely possible for the gap between the relevant information and the point where it is needed to become very large, and this is nothing but a long-term dependency; LSTMs are capable of handling such long-term dependencies.

Now, LSTMs also have a chain-like structure, like recurrent neural networks. All recurrent neural networks have the form of a chain of repeating modules. In a standard RNN, the repeating module has a very simple structure, such as the single tanh layer that you can see. This tanh layer is nothing but a squashing function; what I mean by a squashing function is that it converts my values to lie between −1 and 1. That's why we use tanh, and this is an example of an RNN. Now we'll understand what exactly LSTMs are. This is the structure of an LSTM. If you notice, an LSTM also has a chain-like structure, but the repeating module has a different structure: instead of a single neural network layer, there are four, interacting in a very special way. Now, the key to LSTMs is the cell state. This particular line that I'm highlighting, the horizontal line running through the top of the diagram, is what is called the cell state. You can think of the cell state as a kind of conveyor belt: it runs straight down the entire chain, with only some minor linear interactions.
Now I'll give you a walkthrough of the LSTM, step by step. All right guys, so the first step in our LSTM is to decide what information we are going to throw away from the cell state; and you know what the cell state is, right? I discussed it on the previous slide. This decision is made by a sigmoid layer; the layer that I'm highlighting with my cursor is the sigmoid layer called the forget gate layer. It looks at h_(t−1), the information from the previous time stamp, and x_t, the new input, and outputs a number between zero and one for each number in the cell state C_(t−1) coming from the previous time stamp. A one represents "completely keep this", while a zero represents "completely get rid of this". If you go back to our example of a language model trying to predict the next word based on all the previous ones: in such a problem the cell state might include the gender of the present subject, so that the correct pronouns can be used. When we see a new subject, we want to forget the gender of the old subject; we want to use the gender of the new subject, so we forget the gender of the previous subject here. This is just an example to explain what is happening. Now let me explain the equation I've written here. f_t will combine with the cell state later on, as I'll show you; for now, f_t = σ(W_f · [h_(t−1), x_t] + b_f): the weight matrix multiplied by the concatenation of h_(t−1) and x_t, plus a bias, passed through a sigmoid layer. We get an output between zero and one, where zero means completely get rid of this and one means completely keep this. So that's what happens in the first step.

Now let us see what happens in the next step. The next step is to decide what new information we are going to store in the cell state; in the previous step we decided what information we are going to forget, but here we decide what new information we are going to store. This has two parts. First, a sigmoid layer, also known as the input gate layer, decides which values we'll update: i_t = σ(W_i · [h_(t−1), x_t] + b_i). Then there's also a tanh layer that creates a vector of candidate values, C̃_t = tanh(W_C · [h_(t−1), x_t] + b_C), that will be added to the state later on. Let me put it in simpler terms: the output from the previous time stamp and the new input are passed through a sigmoid function, which gives us i_t, and the same inputs passed through a tanh squashing function give us the candidate C̃_t, which will later be added to our cell state. In the next step, we combine these two to update the state.
In the previous steps we already decided what to do; now we just need to actually do it. It is time to update the old cell state C_(t−1) to the new cell state C_t. What do we do? We multiply the old cell state C_(t−1) by f_t, which we got in the first step, forgetting the things we decided to forget. Then we add i_t ∗ C̃_t, the new candidate values scaled by how much we decided to update each state value: C_t = f_t ∗ C_(t−1) + i_t ∗ C̃_t. In the case of the language model we were discussing, this is where we would actually drop the information about the old subject's gender and add the new information, as we decided in the previous steps. I hope you're able to follow me, guys.

All right, let us move forward and see the next step. Our last step is to decide what we are going to output, and this output will depend on the cell state, but it will be a filtered version. First we pass h_(t−1) and x_t through a sigmoid activation function to get the output gate: o_t = σ(W_o · [h_(t−1), x_t] + b_o). This o_t is then multiplied by the cell state after passing it through a tanh squashing (activation) function. Why do we do that? Just to push the values between −1 and 1. So after multiplying o_t by tanh(C_t) we get the new output, h_t = o_t ∗ tanh(C_t), and it outputs only the parts we decided on in the previous steps. Now, I'll take the example of the language model again: since it just saw a subject, it might want to output information relevant to a verb, in case that's what is coming next. For example, it might output whether the subject is singular or plural, so that we know what form a verb should be conjugated into. And you can see the equations as well: again we have a sigmoid function, and whatever output we get from there, we multiply it by tanh(C_t) to get the new output.

All right guys, so this is basically the LSTM in a nutshell. In the first step we decided what we need to forget; in the next step we decided what new information we are going to add to our cell state (and we were using the gender example throughout this whole process); in the third step we combined them to get the new cell state; and in the fourth step we finally got the output we want, by passing h_(t−1) and x_t through a sigmoid function and multiplying the result by tanh of the new cell state. Fine guys, so this is basically what an LSTM is.
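Here is a single LSTM step written out in NumPy, mirroring the four equations above; the sizes and random weights are just for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_x, n_h = 3, 4
rng = np.random.default_rng(1)
W_f, W_i, W_C, W_o = (rng.normal(size=(n_h, n_h + n_x)) for _ in range(4))
b_f = b_i = b_C = b_o = np.zeros(n_h)

def lstm_step(x_t, h_prev, C_prev):
    z = np.concatenate([h_prev, x_t])      # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)           # forget gate
    i_t = sigmoid(W_i @ z + b_i)           # input gate
    C_bar = np.tanh(W_C @ z + b_C)         # candidate values
    C_t = f_t * C_prev + i_t * C_bar       # new cell state
    o_t = sigmoid(W_o @ z + b_o)           # output gate
    h_t = o_t * np.tanh(C_t)               # new output
    return h_t, C_t

h, C = np.zeros(n_h), np.zeros(n_h)
h, C = lstm_step(np.array([1.0, 0.0, 0.0]), h, C)
print(h.round(3))
```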
Now we'll look at a use case where we use an LSTM to predict the next word in a sentence. Let me show you how we are going to do that. We'll feed the LSTM with correct sequences of three symbols from a text, for example "had a general", together with a label, which is "council" in this particular example. Eventually, our network will learn to predict the next symbol correctly. Obviously, we need to train it on something, so we'll be training the LSTM to predict the next word using a sample short story that you can see over here.

The story has 112 unique symbols; even the comma and the full stop are counted as symbols. This is what we are going to train on. Now, technically an LSTM can only understand real numbers, so we need to convert these unique symbols into unique integer values based on frequency of occurrence, and build a dictionary from that. For example, "had" might have the value 20, "a" the value 6, and "general" the value 33. Then our LSTM will output a 112-element vector containing the probability of each of these unique integer values. It picks the index with the highest probability and looks up which symbol is attached to that integer value; here that integer maps to "council", so "council" becomes our prediction, which is absolutely correct, since the label is also "council" according to our training data. That is what we are going to do in our use case.

Now I'll quickly open PyCharm and show you how to implement it in Python. We'll be using TensorFlow, a popular Python library for implementing deep neural networks (or neural networks in general).

So this is my PyCharm, and over here I've already written the code to execute our use case. First we import the libraries: numpy for arrays; tensorflow, and from tensorflow.contrib we import rnn; plus random, collections, and time. This block of code is used to measure the time taken for training. After that we have a log path, which tells TensorFlow where the graph will be stored: a graph will be created and then launched, and only then will our RNN model execute; that's how TensorFlow works. We also use a summary writer, which creates the log file used to display the graph in TensorBoard.

Then we define a training file that holds the story we'll train our model on, and we need to read that file. How are we going to do that? We read the content line by line, strip it (that is, remove leading and trailing whitespace), and split it to remove the remaining whitespace. After that we create an array and reshape it. Notice the -1 value in the reshape: it keeps the shape compatible. When you reshape, you need to make sure you provide compatible parameters; you can convert a 3×2 matrix to a 2×3 matrix, for example. Passing -1 lets that dimension be worked out automatically. Then we return the content. After that, we feed the training file containing our story into the read_data function. Then we create a dictionary; a dictionary, as we all know, is key-value pairs, and here it is based on the frequency of occurrence of each symbol.
From collections.Counter(words).most_common() we get the most common words with their frequencies of occurrence, and from that a dictionary is created. We then call the dict function, feeding in each word with a value equal to the current length of the dictionary, so every symbol gets its own integer, and we build the reverse mapping as well; that gives us our key-value pairs. We call this build_dataset and feed our training data into it. The vocabulary size is simply the length of the dictionary.

Then we define various parameters: the learning rate, the training iterations (epochs), the display step, and n_input. The learning rate, as we all know, controls the step size with which our variables are updated. The training iterations are the total number of iterations; we have set 50,000 here. The display step is 1,000, which means the progress is printed after every 1,000 iterations, so the training is reported in batches of a thousand iterations. Then n_input is 3, and the number of units in the RNN cell is kept at 512.

Next we define x and y. x is a placeholder that holds the input values, and y is another placeholder that holds the labels, with shape (None, vocab_size); vocab_size, as defined earlier, is just the length of the dictionary. Then we define the weights and biases.

After that we define our model. We create a function RNN that takes x, the weights, and the biases, and inside it we call the RNN cell function to create a two-layer LSTM, each layer with n_hidden units. Then we generate the predictions; there are n_input outputs, but we only want the last one, which is what this particular line takes care of. Finally we make the prediction by calling the RNN function with x, the weights, and the biases.

After that we calculate the loss and optimize it. For the loss we take the reduce-mean of the softmax cross-entropy, which gives us the probability of each symbol, and we optimize with the RMSProp optimizer; it actually gives better accuracy here than the Adam optimizer, which is why we use it. Then we calculate the accuracy, and after that we initialize the variables we have used; as we have seen with TensorFlow, variables must be initialized, unlike constants and placeholders. Once that is done, we feed in our values and calculate how accurate the model is, and when optimization finishes we also compute the elapsed time, which tells us how long training took. This line simply runs TensorBoard on localhost:6006, and this particular block of code handles the exceptions.
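As a reference, here is a small sketch of the dictionary-building step described above, mirroring the collections.Counter call from the narration (the example integer values depend on your training text):

```python
import collections

def build_dataset(words):
    """Map each unique symbol to an integer, ordered by frequency of
    occurrence, plus the reverse mapping used to decode predictions."""
    count = collections.Counter(words).most_common()
    dictionary = {}
    for word, _ in count:
        dictionary[word] = len(dictionary)
    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return dictionary, reverse_dictionary

# For the short story, the dictionary might map symbols such as
# 'had' -> 20, 'a' -> 6, 'general' -> 33 (exact values vary with the text).
```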
An exception here would be a word we type in that is not in our dictionary or training data; those exceptions are handled here, and if the word is not in the dictionary it prints "word not in dictionary". Fine, let's input some values and have some fun with this model.

The first thing I'm going to feed in is "had a general". Whenever I feed in these three values, a story is generated by feeding the predicted output back as the next symbol in the input. So when I feed in "had a general", it predicts the correct output, "council", and this "council" is fed back as part of the new input, making the new input "a general council". These three words then become the input to predict the next output, which is "to", and so on. Surprisingly, the LSTM actually creates a story that somehow makes sense. Let's just read it: "had a general council to consider what measures they could take to outwit their common enemy, the cat. By this means we should always know when she was about, and could easily…" So it really does make sense. To recap what happens: from the three inputs it predicts the next word, "council"; then it takes "council" and feeds it back as input along with "a general", so "a general council" becomes the next input, predicting "to"; similarly, in the next iteration it takes "general council to" and predicts "consider", and this keeps on repeating.

What is TensorFlow? TensorFlow is a popular open-source framework developed by Google for building and training machine learning models. It's like a toolkit for creating artificial intelligence systems. TensorFlow can be used for various tasks, including neural networks, computer vision (where computers learn to see and understand images), and natural language processing (where computers understand and use human language).

Now that we have an understanding of TensorFlow, let's get deeper into it. TensorFlow is a versatile machine learning framework that uses tensors (multi-dimensional arrays) and computational graphs to perform operations. This architecture makes it adaptable and scalable for various machine learning tasks. TensorFlow caters to users with different levels of expertise by offering both high-level APIs like Keras for simplified model building and low-level APIs for greater customization. Furthermore, its compatibility with CPUs, GPUs, and TPUs makes it suitable for both small-scale research and large-scale production environments.

Now that you are familiar with TensorFlow, let us see its significance in AI and machine learning. TensorFlow is a powerful platform that empowers developers to transform AI and ML ideas into scalable solutions, seamlessly transitioning from research prototypes to real-world applications. It adapts to diverse needs, with features like visualizations and debugging tools that enhance model understanding and troubleshooting. TensorFlow streamlines the entire machine learning pipeline, enabling efficient handling of large data sets and complex tasks while ensuring scalability and performance. Its versatility allows it to be customized for various applications, from simple experiments to advanced AI systems. Ultimately, TensorFlow has real-world impact by enabling developers to create innovative solutions that address global challenges and drive technological progress.
Now that you know the significance of TensorFlow, let's discover why to use it. TensorFlow is one of the leading deep learning frameworks, widely used for machine learning and AI research and production. Its scalability allows it to handle massive data sets and complex models efficiently, making it ideal for large-scale AI systems in applications like image recognition and natural language processing. Next, flexibility: flexibility is another key strength, as TensorFlow offers APIs ranging from high-level Keras for simplicity to low-level APIs for advanced customization, catering to diverse developer needs. Next, its rich ecosystem includes an active community, extensive documentation, pre-trained models, and numerous resources that simplify its adoption and usage. TensorFlow's cross-platform support enables seamless deployment across different operating systems and hardware platforms. Finally, there is its optimized performance, as TensorFlow runs efficiently on CPUs, GPUs, and TPUs,

### [5:01:57](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=18117s) Prompt Engineering

ensuring faster training and inference times. These strengths (scalability, flexibility, a rich ecosystem, cross-platform compatibility, and optimized performance) make TensorFlow a preferred choice for AI and ML projects.

Now that we know what TensorFlow is, let's compare it with other frameworks. Here is a comparison between TensorFlow and another deep learning framework, PyTorch, highlighting their respective strengths. TensorFlow is known for its flexibility and versatility, enabling developers to build a wide range of models with customizable implementations, while PyTorch is recognized as intuitive and Pythonic, offering a user-friendly approach that appeals to many developers. When it comes to production readiness, TensorFlow excels with robust tools for deploying models in real-world environments, whereas PyTorch is more research-focused, favored for its dynamic computational graphs and ease of experimentation. TensorFlow boasts a large and established ecosystem with an active community, extensive documentation, and a proven track record, while PyTorch is growing rapidly in academic circles, gaining traction as a popular choice for research projects. In terms of performance, TensorFlow is optimized and scalable, capable of handling large-scale models and data sets efficiently, while PyTorch is a competitive alternative with its own strengths, appealing to a different subset of developers. Ultimately, the choice between TensorFlow and PyTorch depends on the specific use case and developer preferences, as both frameworks offer compelling features suited to different needs.

Next, let's explore the real-world applications of TensorFlow, starting with computer vision, which showcases its versatility and impact across various fields. TensorFlow is widely used for identifying objects in images, with algorithms like YOLO (You Only Look Once) and SSD (Single Shot Detector) enabling tasks such as detecting pedestrians and obstacles in self-driving cars or identifying suspicious objects in security systems. It plays a crucial role in medical imaging and satellite imagery, where it aids in analyzing X-rays or MRIs to detect anomalies and assists in monitoring deforestation, identifying land-use patterns, and predicting natural disasters. Additionally, TensorFlow powers security systems and user authentication, enabling facial recognition for tasks like face detection and identification. These applications highlight TensorFlow's ability to transform industries like healthcare and security, making it an indispensable tool in computer vision.

Next, TensorFlow in natural language processing. TensorFlow is instrumental in tasks like spam detection and sentiment analysis, where it helps identify spam emails and determine the emotional tone or polarity (positive, negative, or neutral) of text such as customer reviews or social media posts. It powers services like Google Translate, enabling accurate translation between numerous languages and facilitating global communication. TensorFlow also enhances security systems and user authentication by analyzing text data to detect suspicious patterns and fraudulent activities, or by improving authentication processes through text-based inputs like passwords and security questions. These applications demonstrate TensorFlow's versatility in advancing communication, enhancing security, and driving innovation in NLP.
Next, TensorFlow in the field of generative AI, showcasing its role in driving innovation. TensorFlow powers GANs and similar models, enabling the creation of realistic images and art, and even the manipulation of existing visuals. It is instrumental in training large language models like GPT (which stands for generative pre-trained transformer) that translate languages, write creative content, and perform summarization tasks. Additionally, TensorFlow facilitates deepfake audio and speech generation, producing synthetic media where a person's likeness or voice can be convincingly replicated. These applications demonstrate TensorFlow's pivotal role in advancing image creation, text generation, and audio synthesis within generative AI.

Now, let us explore the industrial uses of TensorFlow; its diverse applications across industries showcase its transformative impact. In healthcare, TensorFlow is used for predictive analytics to forecast disease outbreaks, identify high-risk patients, and optimize treatment plans, as well as for medical image analysis to detect anomalies in X-rays, MRIs, and CT scans. In autonomous vehicles, TensorFlow powers object detection systems, enabling safe navigation, and it supports decision-making models in areas like supply chain management and risk assessment. In finance, TensorFlow is utilized for algorithmic trading, analyzing market trends, and detecting fraudulent transactions. Retail applications include inventory management, to predict demand and reduce stockouts, along with personalized recommendations to enhance customer experience and boost sales. In entertainment, TensorFlow facilitates content creation, such as generating music or art, and is used in video and audio processing tasks like noise reduction and video stabilization. Overall, TensorFlow's versatility and advanced capabilities are driving innovation across these industries.

Now, let us move ahead and install TensorFlow. To get started, you first need the necessary prerequisites, including Python 3.5 or a higher version, and you can use a package manager like pip or conda for the installation. Here's how to proceed, based on your operating system. First, open your terminal or command prompt and create a virtual environment by running `python -m venv my_environment`. Next, activate the environment: on Linux or macOS, run `source my_environment/bin/activate`; on Windows, run `my_environment\Scripts\activate`. Once the environment is activated, you can install TensorFlow by running `pip install tensorflow`. To ensure that TensorFlow has been installed successfully, verify the installation by running `python -c "import tensorflow as tf; print(tf.__version__)"`, which displays the installed version of TensorFlow, confirming that the installation was successful. Now let us open our VS Code terminal and install TensorFlow: type the command `pip install tensorflow`, and as you can see, TensorFlow is installed. To confirm the installation, run the verification command.
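For reference, that verification one-liner is equivalent to this short script:

```python
# Prints the installed TensorFlow version, confirming the install worked.
import tensorflow as tf
print(tf.__version__)
```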
So now let us run this command in our terminal. As you can see on the screen, the version is displayed: it shows the installed version of TensorFlow, confirming that the installation was successful.

Now let us discuss the TensorFlow ecosystem. The TensorFlow ecosystem provides a comprehensive set of tools for building, training, and deploying machine learning models. At its core is TensorFlow itself, the foundation of the ecosystem. TensorFlow Lite enables running models on mobile and embedded devices, while TensorFlow Extended supports building production-grade ML pipelines, including data validation and model serving. Next, the TensorFlow Model Garden offers pre-trained models and examples for tasks like image classification and NLP. TensorFlow.js allows running ML models in web browsers, and TensorFlow Hub provides a library of pre-trained models for easy integration into projects.

Now, let us take a look at the key capabilities of TensorFlow, showcasing its strengths as an open-source, community-driven framework that evolves through contributions. At its core, TensorFlow uses tensors (multi-dimensional arrays) for efficient data representation and manipulation. Its flexible architecture allows developers to choose between static graphs for optimized performance and eager execution for an interactive development experience. TensorFlow supports a wide range of applications, including natural language processing, generative AI, computer vision, and more, making it highly versatile for various machine learning tasks. Furthermore, it offers cross-platform compatibility, running efficiently on CPUs, GPUs, and TPUs, so developers can leverage the best hardware for their needs. Overall, TensorFlow stands out as a robust, adaptable, and versatile framework for machine learning.

Now, let us head towards the hands-on. There are three main steps in building a prediction model using TensorFlow. In step one, a model is created by defining its architecture, including layers and parameters tailored to the specific problem. In step two, the model is trained on historical data, where it learns the patterns and relationships needed for prediction. Finally, in step three, the trained model is used to make predictions on new input data. (The slide frames this as churn prediction, but the hands-on below uses the classic MNIST digit data set to demonstrate the same three steps.)

Now, without any further delay, let's code it. First, let us import the libraries: `import tensorflow as tf`, then `import matplotlib.pyplot as plt`, then `from tensorflow.keras import Sequential`, and finally `from tensorflow.keras.layers import Flatten, Dense`. Here we are importing the TensorFlow library, a popular framework for machine learning and deep learning; matplotlib's plotting module for data visualization; the Sequential class from TensorFlow's Keras API, which is used to build models layer by layer; and the Keras layers Flatten, which flattens multi-dimensional data into a 1D array for input into dense layers, and Dense, which creates fully connected layers for the neural network. These imports set up the tools needed to build and train a neural network and then visualize the data and results. Now, to get the data and split it for training and testing, let us write the command.
So let us type `(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()`. This code loads the MNIST data set of handwritten digits using TensorFlow. Here train_images and train_labels are the images and labels for training (60,000 samples), and test_images and test_labels are the images and labels for testing (10,000 samples). Each image is a 28x28 grayscale image, and the labels represent the digits 0 to 9.

Now let us scale the pixel values down from 0-255 to 0-1. For that, type `train_images = train_images / 255.0` and `test_images = test_images / 255.0`.

Next, let us inspect the data: `print(train_images.shape)`, then `print(test_images.shape)`, and finally `print(train_labels)`. Here `print(train_images.shape)` displays the shape of the training images data set. Let us run the code to check the output. As you can see, the output says (60000, 28, 28), meaning there are 60,000 images, each of size 28x28 pixels. Next, `print(test_images.shape)` displays the shape of the testing images data set; the output is (10000, 28, 28), which means there are 10,000 images, each of size 28x28 pixels. The last line, `print(train_labels)`, prints the labels (digits 0 to 9) for the training set.

Now let us display the first image. For that, type `plt.imshow(train_images[0], cmap='gray')` and, to see the plot, `plt.show()`. Here plt.imshow with train_images[0] displays the first image in the data set, and cmap='gray' ensures that the image is shown in grayscale. Let us run this code; as you can see on the screen, here is the output.

Now let us move ahead. The next step is to define the neural network model. Type `my_model = tf.keras.models.Sequential()`. Next, `my_model.add(Flatten(input_shape=(28, 28)))`. Then `my_model.add(Dense(128, activation='relu'))`, and finally `my_model.add(Dense(10, activation='softmax'))`. Here tf.keras.models.Sequential() creates a sequential model where layers are added one after another. The Flatten layer with input_shape=(28, 28) flattens each 28x28 input image into a 1D array of 784 values, making it suitable for the dense layers. The next line adds a dense (fully connected) layer with 128 neurons, where activation='relu' applies the ReLU activation function to introduce non-linearity. The final line adds the output layer with 10 neurons, corresponding to the 10 digit classes (0 to 9), and activation='softmax' outputs probabilities for each class. Now let us compile the model; this is where the network is actually created.
So let us type `my_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])`. Here optimizer='adam' uses the Adam optimizer, an efficient and widely used algorithm for optimizing neural networks; loss='sparse_categorical_crossentropy' specifies the loss function for multi-class classification where the labels are integers; and metrics=['accuracy'] tracks the model's accuracy during training and evaluation.

Next, let us train the model. For that, type `my_model.fit(train_images, train_labels, epochs=3)`. Here my_model.fit is the function that trains the model with the provided data; train_images and train_labels are the input data and corresponding labels; and epochs=3 means the model will go through the entire training data set three times to learn the patterns.

Next, let us check the model's accuracy on the test data. For that, type `val_loss, val_acc = my_model.evaluate(test_images, test_labels)`, followed by a print statement: `print('Test accuracy of my model:', val_acc)`. Here my_model.evaluate evaluates the model using the test images and their corresponding labels, and it returns two values: val_loss, the loss on the test set, and val_acc, the accuracy on the test set.
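Putting the dictated snippets together, here is a consolidated, runnable version of this hands-on (variable names follow the narration):

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

# Load MNIST: 60,000 training and 10,000 test images of 28x28 pixels.
(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.mnist.load_data()

# Scale pixel values from 0-255 down to 0-1.
train_images, test_images = train_images / 255.0, test_images / 255.0

# Flatten -> Dense(128, relu) -> Dense(10, softmax), as described above.
my_model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),
])

my_model.compile(optimizer='adam',
                 loss='sparse_categorical_crossentropy',
                 metrics=['accuracy'])

my_model.fit(train_images, train_labels, epochs=3)

val_loss, val_acc = my_model.evaluate(test_images, test_labels)
print('Test accuracy of my model:', val_acc)
```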

### [5:21:21](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=19281s) Prompt Engineering For Code Generation

The print statement then displays the test accuracy, showing how well the model performs on unseen data. Now let us check the output and run the code. As you can see on the screen, the model finds patterns over three iterations, because we set the epochs to three, and the accuracy is about 0.97, i.e., 97%. So let us change the epoch value to 50; initially we had set three, so now let us give 50 and run the code. You can see it making 50 iterations. However, achieving near-perfect accuracy on the training set could also indicate potential overfitting, where the model might not generalize well to new, unseen data. To confirm the model's effectiveness, it is essential to evaluate its performance on a separate validation or test data set; if the accuracy remains high and the loss is minimal on the test set, the model can be considered robust and effective.

Prompt engineering sits at the intersection of artificial intelligence and human language understanding. In this field, professionals and researchers work to create prompts, or instructions, that effectively guide AI systems to produce the expected outcome. Whether it's fine-tuning a language model, designing prompts for specific tasks, or optimizing human-machine communication, prompt engineering is crucial for leveraging the power of AI across a variety of applications.

Imagine you're developing a virtual assistant application using a large language model such as GPT-3. The goal is to provide users with an engaging and helpful experience by designing effective prompts that generate informative and relevant responses from the model. Let's consider a scenario in which the virtual assistant helps users with travel planning; here's how prompt engineering plays a major part. The scenario: you're planning a trip to Paris and want the virtual assistant to provide recommendations for activities, restaurants, and landmarks to visit during your stay. With a traditional prompt, you might ask "What should I do in Paris?", and the virtual assistant responds with something generic like "Here are some recommendations for activities in Paris." Now here's how an enhanced prompt, built through prompt engineering, responds to your query. If you input a query like "Hey there, I'm super excited about my upcoming trip to Paris. Could you please recommend some must-visit places and activities for me?", the virtual assistant will generate a response like "Of course! Paris is an amazing city with so much to offer. Here are some must-visit places and activities...", continuing with an explanation of each place. I hope you got an idea of how an enhanced prompt provides users with an engaging and helpful experience by generating informative and relevant responses from the model.

So now let us understand what exactly prompt engineering is. Prompt engineering is a method used in natural language processing (NLP) and machine learning. It's all about crafting clear and precise instructions to interact with large language models like GPT-3 or BERT. These models can generate human-like responses based on the prompts they receive. Think of prompt engineering as giving directions to these models: by crafting specific and concise prompts, we guide them to produce the responses we want. To do this effectively, we need to understand the capabilities of the model and the problem we are trying to solve.
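As a minimal sketch of the traditional-vs-engineered prompt comparison above (assuming the official `openai` Python client, an `OPENAI_API_KEY` set in the environment, and an illustrative model name; the prompt texts are hypothetical examples):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

generic = "What should I do in Paris?"
engineered = ("I'm visiting Paris for 3 days and love art and food. "
              "Recommend must-visit landmarks, two museums, and three "
              "affordable restaurants, with one line on why each is worth it.")

for prompt in (generic, engineered):
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    # Print the first part of each answer to compare specificity.
    print(reply.choices[0].message.content[:200], "\n---")
```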
Fine-tuning prompts allows researchers and developers to improve the performance and usability of LLMs for a variety of applications, including text generation, question answering, language translation, and others. Effective prompt engineering requires a thorough understanding of the underlying model's capabilities as well as the problem domain and the desired result.

Now let's find out why prompt engineering matters for AI. Prompt engineering is important in AI because it improves model performance, customization, and reliability. By creating clear and tailored prompts, developers can help AI models produce more accurate and relevant results, reduce biases, improve user experience, and address ethical concerns. In simple terms, prompt engineering ensures that AI systems produce useful and reliable results that meet the needs of users while adhering to ethical principles.

So now let's consider an example in the context of text generation: generating product descriptions. Assume you're using an AI model to create product descriptions for an online store. Without prompt engineering, you might issue a generic prompt such as "Generate a product description for a smartphone," and you would get something like: "This smartphone has a high-resolution display, a powerful processor, and long-lasting battery life." The given prompt is less effective because it lacks specificity; it simply says "generate a product description for a smartphone," which makes it difficult to produce something engaging and informative. A good prompt can make a significant difference: it gives a clear idea of what to write about and keeps the output focused and organized, making it easier to generate ideas and express them. On the other hand, by using prompt engineering techniques, you can provide more specific instructions or constraints that tailor the generated description to the target audience or brand style. With prompt engineering, if you input a query such as "Create a product description for a budget-friendly smartphone perfect for young professionals; highlight that it's affordable, sleek, and packed with top-notch camera features," the generated response would be something like: "Introducing our sleek and affordable smartphone, designed for young professionals. With its stylish design and advanced camera features, capturing life's moments has never been easier," and it goes on listing the key features. Through this example we understood that prompt engineering enables the creation of a product description that is useful to the target audience and highlights specific features based on the instructions provided. This shows how prompt engineering can improve the relevance and effectiveness of AI-generated content for specific applications.

To help AI models give accurate answers, it's important to create clear prompts. Here are some simple rules for generating effective prompts. First, make it clear: clearly explain what you want the AI to do, since unclear prompts might confuse the AI and lead to wrong answers. For example, an unclear prompt is something like "Write about cars," where we haven't specified which type of car or any details, whereas a clear prompt is "Write a description of a red convertible sports car." Next, give context: provide enough information so that the AI understands the task.
This helps it give accurate responses that make sense in the given situation. For example, a prompt without context is "Write a story," while a prompt with context is "Write a story about a girl who discovers a magic book in her attic." Next, show examples: use examples to show the AI what you are looking for, which helps it understand the type of response you want. For example, a prompt without an example is "Describe a beach scene," while a prompt with examples is "Describe a beach scene with palm trees, crashing waves, and people playing volleyball." Next, keep it short: don't overload the AI with too much information, since short prompts help it focus and give quicker, more accurate responses. For example, a long prompt looks like "Write a detailed essay discussing the impact of climate change on biodiversity and ecosystems in tropical rainforests," while a short prompt looks like "Write about climate change effects on rainforests." Next, avoid biases: make sure your prompts are fair and don't include unfair assumptions, since biased prompts can lead to biased answers, which isn't helpful. For example, a biased prompt is "Write about a woman who struggles with her weight," while an unbiased prompt is "Write about a person overcoming challenges." Finally, set limits: tell the AI any rules or restrictions it needs to follow; this helps guide its responses and ensures they meet your specific needs. For example, a prompt without limits is "Write a story," while a prompt with limits is "Write a story set in a haunted house with a maximum word count of 500 words." I hope that's clear. Next, let's move on to some examples of prompts for generating

### [5:30:40](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=19840s) Building a Chatbot with Prompt Engineering

text using ChatGPT. For text generation tasks, prompts usually consist of textual instructions or a starting point that directs the model to produce coherent and relevant text. Prompts can be story prompts, questions, or incomplete sentences. Text generation prompts provide context and direction to the model, allowing it to generate human-like text responses, and they influence the generated text's tone, style, and context. So let's say the prompt is "Write a short story about a character who discovers a hidden treasure." By providing a specific story line and theme in the prompt, the model is guided to generate a coherent and engaging narrative centered around the discovery of a hidden treasure. The picture illustrates how ChatGPT crafts stories with an engaging touch, making them more captivating and interesting for readers.

Next, question answering. The prompt is "Can you describe the common signs and symptoms of COVID-19, along with any precautions that can be taken to stay safe?", and just like that it can generate answers to all your questions in mere seconds. By framing the prompt as a question, the model is directed to provide a concise answer regarding the symptoms of COVID-19, ensuring relevant and informative responses.

Next, language translation: "Translate the given English sentence 'The quick brown fox jumps over the lazy dog' into Spanish while maintaining its original meaning." By specifying the source and target languages in the prompt, along with the input sentence, the model is instructed to perform a precise translation task, ensuring accurate language conversion.

Next, code autocompletion: using OpenAI Codex or ChatGPT, you can perform code autocompletion tasks. Code generation prompts are usually partial code snippets or descriptions of programming tasks; they specify the desired functionality or behavior that the model should produce. Such prompts allow the model to generate code that satisfies specific programming requirements, such as implementing algorithms, defining functions, or solving coding problems. The prompt here is "Complete the following Python function to calculate the factorial of a number," with the partial function included. By presenting an incomplete code snippet along with clear instructions, the model is directed to suggest an appropriate completion, helping developers write code more efficiently; one plausible completion is sketched below.

Now, moving on to text-to-image generation. Image generation prompts specify the visual scenes, objects, or concepts that the model should generate; they may include textual descriptions, keywords, or images. Image generation prompts tell the model what visual content to generate, and they influence the generated image's composition, style, and detail. For example, the prompt is "Imagine a tree where the branches are made of stacks of books. Can you paint me a picture of that?", and for that prompt we got an image like this: an imaginative portrayal of a tree with branches composed of stacked books, each book representing a leaf, with covers visible. The next prompt is "Picture a cloud in the sky that looks like a huge heart. Can you draw that for me?", and here we go. These AI tools leverage prompt engineering techniques for text generation, language translation, code autocompletion, and text-to-image generation, demonstrating the versatility and power of prompt-based interaction with AI models.
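Here is one plausible completion for the factorial prompt mentioned above (the function name is illustrative; a model could equally return a recursive version):

```python
def calculate_factorial(n):
    """Return n! for a non-negative integer n."""
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):  # multiply 2 * 3 * ... * n
        result *= i
    return result

print(calculate_factorial(5))  # 120
```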
Next, why is machine learning useful in prompt engineering? Machine learning is very helpful in prompt engineering, especially for linguistic and language models, because it helps create better prompts and interactions by analyzing lots of data and finding patterns. First, understanding language patterns: machine learning algorithms can analyze large amounts of text to understand linguistic patterns like grammar, syntax, semantics, and context; this understanding is critical for developing effective prompts that elicit the desired responses from language models. Next, generating relevant prompts: machine learning models can suggest or generate prompts based on input data and user preferences, tailored to specific tasks, domains, or user requirements, making them more useful and efficient for guiding language models. Next, optimizing prompt design: machine learning techniques can be used to optimize prompt design by comparing the performance of various prompts and selecting the ones that produce the best results; this iterative process improves prompt engineering practices and the overall performance of language models. Next, personalizing interactions: machine learning enables personalized interaction by tailoring prompts to individual users' preferences, history, and context, which increases user engagement and satisfaction with language model interactions. Next, improving model performance: machine learning algorithms can fine-tune language models based on prompt-response pairs, increasing their performance and accuracy over time; language models can be trained on a variety of data sets and prompts to produce more relevant and contextually appropriate responses. And next, mitigating bias and misinformation: machine learning techniques can help identify and mitigate biases in prompt engineering by examining prompt-response pairs for potential biases or inaccuracies, so language models can produce fairer, more inclusive, and more reliable results. I hope it is clear why machine learning is useful in prompt engineering.

Imagine you have a brilliant friend named Max who has read every book, article, and social media post on the internet. Max can recall entire conversations, understand the details of language, and generate responses that are both informative and engaging. One day you ask Max to help you write an email to a colleague: Max suggests different words, fixes any mistakes in spelling or grammar, and even makes the email more engaging to read. Next, you get help summarizing a lengthy report on a technical topic: Max distills the main points into a clear and concise summary, saving you hours of reading time. Later, you engage in a conversation with Max about a complex topic like artificial intelligence: Max responds thoughtfully, using its vast knowledge and understanding of language to provide insightful answers and ask follow-up questions that stimulate further discussion. This is what large language models, like Max, can do: process and generate human-like language, understand context and meaning, and assist with various tasks. So on that note, hello everyone, and welcome to this video on what a large language model is, by Edureka.
In this video we will discuss some examples of large language models and their applications, followed by the working of large language models. Examples of large language models include Google's BERT, Meta's LLaMA, OpenAI's GPT-3, and Microsoft's Turing NLG. These models have many applications, such as language translation, for accurately translating text between different languages; text summarization and understanding the meaning and context of text for tasks like sentiment analysis, question answering, and summarization; chatbots and virtual assistants; content generation, for creating human-like text for content creation, chatbots, and storytelling; and conversational AI. Along with this, they are also used for content recommendation, search engines, sentiment analysis, and data analysis.

Large language models work by using a combination of natural language processing (NLP) and machine learning algorithms to process and generate human-like language. So now, moving ahead, let's have a look at how large language models work. First, the model is trained on a massive data set of text, such as books, articles, and websites; this data set is used to learn patterns and relationships in language. The text is broken down into individual words or tokens, which are used as input for the model, and each token is converted into a numerical representation called an embedding, which captures its meaning and context. The embeddings are fed into an encoder, which uses a series of transformer layers to analyze the input text and generate a contextualized representation, and then the decoder generates output text one token at a time. Based on the contextualized representation and the model's understanding of language patterns, the model can generate text in various forms, such as a continuation of a prompt, text completion, or even entirely new text. The model can also be fine-tuned for specific tasks, such as language translation or text summarization, by adjusting the weights and biases of the neural network. I hope the working of large language models is clear to you.

But to help you understand better, let me explain with a simple example. Think of a large language model as a smart parrot named Polly. Polly lives in a busy town where she learns to talk by listening to people chatting; she picks up words and phrases from their conversations, just like we learn from hearing others speak. Polly's training begins when she listens to the people around her chatting, telling stories, and sharing news; she pays close attention to the words they use and how they are put together. Just as Polly listens for individual words and phrases, the text is broken down into small chunks called tokens, each representing a word or part of a word, making it easier to understand. Polly not only learns words but also understands their meanings and how they are used in different contexts; she associates words with their meanings just like we do with pictures. When Polly hears a conversation, she processes the words she hears and makes sense of them using her knowledge of the language; she can follow the flow of the conversation and respond appropriately. Polly's responses are like pieces of a puzzle that fit together to form a meaningful conversation: she uses her understanding of language patterns to generate responses that make sense in the context of the conversation.
Polly can generate responses based on what she has learned from listening to conversations; whether it's answering questions, sharing information, or telling stories, she can speak fluently, like a human. And just as Polly learns to mimic different voices and accents by listening to other people, a language model can be fine-tuned for specific tasks, like translating between languages or summarizing text.

What is prompt engineering for code generation? Prompt engineering is the process of creating specific prompts or instructions for AI language models to generate code snippets or scripts. It involves defining objectives, using relevant keywords, providing examples, and being specific and concise. This process enhances the accuracy, efficiency, and relevance of code generation tasks performed by AI models; understanding how to prompt LLMs can lead to more powerful and efficient applications.

Now let's understand the principles of prompt engineering. These principles provide basic guidelines that can be used consistently to increase prompt engineering effectiveness for code generation tasks. The first is to clarify the objective and understand the task or goal; the second is to utilize keywords and specificity; the third is to provide examples for context; the fourth is conciseness and relevance; and the last is to encourage creativity and adaptability. Let's understand them briefly.

First, clarify the objective: understanding what you want from the code output, including inputs, outputs, evaluation criteria, and any constraints or difficulties, is essential before creating a prompt. With that clarity, it is easier to develop prompts that accurately guide the model toward the desired outcome. Second, utilize keywords and specificity: including appropriate keywords in a prompt helps communicate the specifics of the task to the model. By avoiding inconsistency and using clear language and instructions, you can make sure the model produces precise and focused code; rather than simply requesting "a function to process data," for example, be explicit about the kind of data and the expected actions. Third, provide examples for context: the model can better understand the expected output format and functionality by referring to examples. Prompts are made more understandable by providing specific examples of the desired code, which helps the model understand the task and increases the probability of producing code that is aligned with expectations. Fourth, conciseness and relevance: prompts should be brief, concentrating on information that is crucial to the task and excluding everything unnecessary. Clear and concise prompts simplify the model's decision-making and reduce confusion, and removing irrelevant information lowers noise and improves efficiency. Finally, encourage creativity and adaptability: flexible prompt formulation allows experimentation with various strategies, such as linguistic structures, constraints, or templates, and prompts can be continuously improved by tracking model outputs and iterating on them. This creative approach maximizes code generation results by adapting to various scenarios.

Now, how is prompt engineering employed in various tools for code generation? The first is GitHub Copilot: based on given prompts, GitHub Copilot suggests completions, creates documentation, and suggests new features to help developers write code. The second is Google's AI code assistant, Codey.
It helps developers write code in a variety of programming languages, using prompt engineering to produce code snippets in line with predefined prompts and facilitating a variety of tasks, from web development to natural language processing and machine learning. The third is OpenAI Codex: Codex assists developers with coding tasks in a variety of domains, such as data science, web development, and game development, by utilizing prompt engineering; it creates code in a number of programming languages based on the precise instructions that users provide.

Now, prompt engineering is crucial for guiding AI models to generate code accurately, so let's explore practical examples across different complexity levels. First we will try the universal starter code, a hello world program in Java. For that we have to give a prompt like "Generate

### [5:46:44](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=20804s) OpenAI o3-mini Model

a hello world program in Java." Hit the enter button, and you get the code of a hello world program in Java. Next we will look at a basic one: the sum of two numbers. For this we give a prompt like "Generate a function in Python that takes two numbers as input and returns their sum." Now you can see the Python code is generated, with a function named add_numbers. Now that we have seen how easy-level tasks work, let's move on to medium-level examples, where we turn commands into code. First we'll create a list of countries, then generate a list of their respective capitals, and then merge the lists into a dictionary mapping each country to its capital. To get this, we give the prompt "Generate Python code to create a list of countries, generate their corresponding capitals, and combine them into a dictionary mapping countries to capitals." Here is the generated Python code for the given prompt: you can see the country names and the corresponding capitals, and the generated function returns the dictionary mapping countries to capitals. Next: completing a function. If we want a function that calculates the area of a rectangle, we give the prompt "Write a Python function named calculate_rectangle_area that takes two parameters, length and width, and returns the area of the rectangle," and to include comments we simply add "Include comments to explain each step of the function." In this code you can see the function name we gave in the prompt, calculate_rectangle_area, with the parameters length and width, along with the comments; using this code, you get the area of the rectangle. Next, we will try MySQL code generation. To get the names of employees, we give a prompt like "Generate a MySQL query to retrieve the names of all employees from the employee table," and it generates a MySQL query with a SELECT statement, as you can see here. The last example is about getting an explanation of generated code. For that, you give a prompt like "Provide a line-by-line explanation of the Python function named calculate_factorial, which takes a parameter n and returns the factorial of n." In the response you can see the function named calculate_factorial, which we mentioned in our prompt, followed by a line-by-line explanation of the code. This is how you can get an explanation of a piece of code. Illustrative sketches of the Python examples above follow.
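As a quick reference, here are illustrative sketches of the kind of Python these prompts typically produce (list contents and example values are hypothetical):

```python
# Sum of two numbers, as requested by the first Python prompt.
def add_numbers(a, b):
    return a + b

# Countries and capitals merged into a dictionary (medium-level example).
countries = ["France", "Japan", "India"]
capitals = ["Paris", "Tokyo", "New Delhi"]
country_to_capital = dict(zip(countries, capitals))

def calculate_rectangle_area(length, width):
    # Multiply length by width to get the rectangle's area.
    return length * width

print(add_numbers(2, 3))               # 5
print(country_to_capital["France"])    # Paris
print(calculate_rectangle_area(4, 5))  # 20
```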
Embarking on the creation of AI persona chatbots opens up a thrilling and financially rewarding chapter in the realm of artificial intelligence. This innovative venture stands out because it democratizes the field of AI, removing the barrier of technical expertise: in other words, you don't need to be a programmer or have any coding background to dive into this creative process. Platforms like Flowise are at the forefront of this revolution, offering intuitive tools that simplify the creation of these sophisticated chatbots. With such platforms, the process becomes accessible to everyone, empowering users to bring to life digital avatars of well-known personalities, celebrities, or influencers. These virtual entities can serve various roles, acting as personal assistants ready to manage tasks or offering guidance and advice tailored to users' needs. This time, let's build a chatbot using prompt engineering.

In this tutorial, we'll craft an AI persona chatbot inspired by Steve Harvey, offering tailored advice on life, motivation, and personal growth. We'll start from the ground up with Flowise, guiding you through each step to build your own AI persona chatbot. But before we begin, kindly consider subscribing to the YouTube channel and hit the bell icon to stay updated on the latest tech content from Edureka. Also visit the Edureka website for training and certification courses; the link is in the description box below.

Coming back to our video: first we need to equip our systems with some tools, so let's proceed with installing Flowise. To install Flowise we need to visit the official Flowise website, so let's check that out. I'm going to type "Flowise", and here's the official website. I'm also going to add this link to the description, so you can access it from there. On the official website you can see the GitHub link, so we'll go to GitHub, scroll down, and here we see the quick start option, which lists all the steps to get started with Flowise. First it says to install Node.js, so we'll do that: you can download Node.js for your system from here. I have already downloaded Node.js on my system, so I'm not going to repeat that step, but you can follow along. Then, over here, these are the commands we have to type into our command prompt.

### [5:53:55](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=21235s) What is Agentic AI?

The first command you see on your screen installs Flowise. I have already installed Flowise on my system, but if you are new to Flowise you have to run this command first. Since it's already installed for me, I will go directly to starting Flowise: I copy that command and run it in my command prompt. You might have to wait for some time. Over here we can see that Flowise is listening on port 3000 and the data source is being initialized; after getting these two messages, you can go directly to port 3000, and here is the Flowise user interface in front of us.

First we have to start gathering the data to feed into our chatbot. You can choose any medium: blogs, YouTube videos, or anything else. For us, we are going to take some Steve Harvey motivational videos, so we'll go to YouTube and type "Steve Harvey motivation". We'll try to look for longer videos, because the larger the transcript, the more accurate your data is going to be. This one is pretty long. To download the transcript, just copy the link of the video (I'm going to mute it), then go to a YouTube transcript site, paste the URL, and you have access to the complete transcript. So we copy the transcript from there and save it to our system. I've already prepared a folder called "transcripts", and you can see two transcripts I saved earlier; I have put them in text documents, but you could use PDFs or docs or anything else. Those were for a previous chatbot I prepared, so let's delete them, because I don't need them, and prepare a new one: create a new text document, save it as "Steve Harvey part one", paste the entire transcript, and hit Ctrl+S. It's saved, and we're ready to go. For this particular chatbot, I'm going to take one more Steve Harvey video, so let's search for another one. Yeah, this one looks good; pause it for now, copy the link, go to the same transcript site, paste the URL, copy the entire transcript, and back on our system create a new text file called "Steve Harvey part two", paste with Ctrl+V, and save with Ctrl+S.

After downloading these transcripts, let's go back to Flowise. I'm going to click on "Add New", and here is the canvas where we are going to build our chatbot. To start, go to Add Nodes, then Chains, and here we have the Conversational Retrieval QA Chain; just drag and drop it onto the canvas. This Conversational Retrieval QA Chain is a tool used in chatbots that helps the chatbot find and provide answers to questions in a conversation. You can think of it like a library system inside the chatbot: it helps the bot understand what you're asking and then searches through its knowledge to give you the best possible answer. The chain is a sequence of steps the chatbot follows to make sure it understands the question correctly and retrieves the right information to help you.
In the Conversational Retrieval QA Chain node you can see a chat model, a vector store retriever, and memory as the inputs, and the final output is the chain itself. So let's wire up those inputs. Our first input is the chat model: go to "Add Nodes", look under "Chat Models", and choose ChatOpenAI. Drag and drop it onto the canvas, then connect its output to the chat model input of the Conversational Retrieval QA Chain. Now the second input, the vector store retriever: go to "Add Nodes" again, look under "Vector Stores", and pick the In-Memory Vector Store. This in-memory vector store is the database that our chatbot will search through, and its document section is where our transcripts will go. Connect its output to the vector store retriever input of the Conversational Retrieval QA Chain. As I said, the document section is where we store the transcripts we downloaded. To do that, go to "Add Nodes", open "Document Loaders", and select "Folder with Files". Drag and drop it and connect its output to the document input. In the folder path field, put the path of your transcripts folder: copy it with Ctrl + C and paste it into Flowise with Ctrl + V. The loader also has a text splitter input, so let's take care of that: under "Text Splitters" we'll take the Recursive Character Text Splitter. The chunk size is already 1,000, which is good enough, and let's set the chunk overlap to 200. Connect its output to the text splitter input of the Folder with Files node. We have one more thing to add: embeddings. Under "Embeddings" we have OpenAI Embeddings; drop that onto the canvas too. Our chatbot is almost ready. You can see a "Connect Credential" slot on the OpenAI nodes, so we have to add a credential, and for that we need an OpenAI API key. To get one, go to the OpenAI platform site (platform.openai.com; I'll put the link in the description) and open the API keys page. Click on "Create new secret key", give it any name like "test key", click "Create secret key", and it will generate a secret key for you. Copy that and come back to Flowise. First let's save the chat flow; I'll name it "AI Steve Harvey". Then go to Credentials, click "Add Credential", choose "OpenAI API", paste your API key, name it something like "open AI", and click Add. Now back in our chat flow, in the Connect Credential slot of the chat model we select that OpenAI credential, and we do the same in the embeddings node. So our chat flow is fully ready.
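For readers who prefer code to canvases, here is a rough sketch of the same pipeline using the LangChain library that Flowise builds on. This is a minimal illustration under stated assumptions, not the code Flowise generates: the import paths match the legacy `langchain` package (newer releases moved these modules), FAISS stands in for the in-memory vector store, and you would need `pip install langchain openai faiss-cpu` plus your own API key.

```python
# A minimal sketch of what the Flowise canvas wires together
# (legacy langchain API; module paths differ in newer releases).
import os
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import DirectoryLoader, TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import ConversationBufferMemory
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

os.environ["OPENAI_API_KEY"] = "sk-..."  # your own key, created as above

# "Folder with Files": load every transcript text file in the folder
docs = DirectoryLoader("transcripts/", glob="*.txt", loader_cls=TextLoader).load()

# "Recursive Character Text Splitter" with the same settings as the demo
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(docs)

# "In-Memory Vector Store" + "OpenAI Embeddings" (FAISS as a stand-in here)
store = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Memory lets the chain handle follow-up questions in a conversation
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# The Conversational Retrieval QA Chain itself; a persona prompt (like the
# Steve Harvey one shown next) can be injected via combine_docs_chain_kwargs.
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(), retriever=store.as_retriever(), memory=memory
)
print(chain({"question": "Introduce yourself"})["answer"])
```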
So again we're going to save it, and then go to the chat section. It already shows a greeting: "Hi there, how can I help you?" First I'm going to say hello. It takes some time, and then it responds with "Hello! How can I assist you?" Next I'll tell it to introduce itself: "I am an AI assistant, I'm here to provide you with information and assistance." So our chatbot is working, but we want it to have the personality of Steve Harvey. To give it that personality, open the additional parameters of the Conversational Retrieval QA Chain, and in the prompt field put a prompt similar to the one shown on screen.

### [6:04:59](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=21899s) Introduction to Midjourney

The prompt starts with "You are Steve Harvey, an American television host..." All of this I've gathered from different sources, like Wikipedia or blogs about Steve Harvey's personality. I'm going to select it all, copy it, and paste it into the prompt field. You can pause the screen and read the full prompt, but what I really want you to read is the last line, because it's important. It says: if you have no relevant information within the context, just say "Hmm, I'm not sure"; don't try to make up an answer, and never break character. The same idea is repeated: answer the user's question as if you are Steve Harvey, only answer questions whose answers are found in the context, and if you don't have an answer, respond with something like "I have no idea", "Could you please rephrase that?", or "Hmm, I'm not sure". This prompt helps the model stick to the relevant information and answer as Steve Harvey. So first we save it, then chat again. I'll put "hello" — "Hi there." Then I'll say "introduce yourself", and now you can see: "I'm Steve Harvey, television host, producer, actor, comedian, known for hosting shows like Family Feud," and everything else we gave in the prompt. Now I'll ask it to tell me a joke: "Sure thing, here's a joke for you. Why did the scarecrow win an award? Because he was outstanding in his field." Let's ask for some motivation; I'll say "I need some motivation, I'm feeling lazy." And here we have our answer: "Don't worry, I've got you covered. Remember, success requires a tremendous work ethic and faith. Imagination is the key to achieving your dreams. So dream big, believe in yourself, and take action to make those dreams a reality. You have the potential within you to achieve great things. Keep pushing forward." So it is working properly; our chatbot really got the personality of Steve Harvey.

Agentic AI is transforming industries by allowing machines to learn, adapt, and evolve independently, similar to living organisms. Unlike traditional AI, these intelligent agents investigate, optimize, and develop solutions over time without requiring direct human participation. Recent advancements include OpenAI's Deep Research, which automatically analyzes massive amounts of data to produce detailed reports, and Google's Gemini 2.0, which improves AI's capacity to plan and reason across different data types. ServiceNow's AI Agent Orchestrator is transforming enterprise automation by coordinating many AI agents to address difficult business concerns. As these systems become more powerful, they have the potential to unlock ideas beyond human imagination, ranging from wind turbine blade design to AI-driven company management. With that said, hello everyone. In today's video, we will explore agentic AI in depth, from how it works to its potential applications and everything in between. So let's break down what agentic AI is and why it's becoming a game-changer in the world of artificial intelligence. Agentic AI denotes artificial intelligence systems capable of autonomously executing actions to attain designated objectives. Unlike reactive AI, which only responds to inputs, agentic AI is proactive, capable of planning, adapting, and making decisions autonomously. So let's dig deeper into agentic AI and see its capabilities.
Agentic AI is a type of artificial intelligence that exhibits autonomous behavior, enabling it to take actions and operate without continuous human guidance. It is goal-driven, actively working towards achieving specific objectives rather than passively responding to inputs like reactive AI. With advanced decision-making capabilities, it can evaluate multiple options, select the optimal course of action based on current conditions and acquired knowledge, and adapt its strategies dynamically in response to unforeseen changes in its environment. Moreover, agentic AI demonstrates proactiveness by taking the initiative to act rather than waiting for external triggers, making it highly effective in dynamic and complex scenarios. Now let us see its relevance in the current AI market. When AI systems can act autonomously to accomplish predefined objectives, we call that agentic AI, and that capability makes it highly relevant in the current AI market. Its autonomy allows it to operate without continuous human guidance, making decisions and adapting dynamically to achieve objectives. This is complemented by advanced problem-solving skills, enabling it to evaluate complex situations, strategize, and respond effectively to challenges. However, the growing adoption of agentic AI also raises important ethical considerations, such as ensuring responsible behavior, minimizing unintended consequences, and maintaining transparency in its decision-making processes. Now that you know about agentic AI, let us discuss how it differs from other AI systems. Agentic AI differs significantly from other AI systems in its autonomy, decision-making, and adaptability in pursuit of long-term goals. Unlike reactive AI, which performs predefined tasks only when prompted (such as spam filters or image classifiers), agentic AI takes the initiative and operates independently. It also contrasts with generative AI, which focuses on creating content (like ChatGPT generating text) but is not goal-driven. By combining autonomous behavior, strategic decision-making, and the ability to adapt dynamically, agentic AI stands out as a system designed to achieve specific objectives in evolving environments. Now that we know the basic differences, let us compare generative AI and agentic AI directly. The two differ in several key aspects that define their functionality and applications. Generative AI is primarily focused on creation, excelling in output-focused tasks such as generating text, images, or other forms of content. Its adaptability is limited, as it relies heavily on prompts for guidance and lacks the ability to operate independently. In contrast, agentic AI emphasizes autonomy, making it goal-driven and capable of dynamically adapting to changing environments. Unlike the prompt-dependent nature of generative AI, agentic AI is self-directed, enabling it to take the initiative and execute strategic tasks effectively. These differences highlight the complementary roles of both AI types in addressing distinct challenges. Now let us see the impact of agentic AI on various industries. Agentic AI has had a profound impact across industries, transforming operations and solving long-standing challenges. Autonomous logistics systems, such as those in Amazon warehouses, have significantly improved operational efficiency, by 30 to 40%. In healthcare, AI-enabled surgical robots like the da Vinci system have performed over 10 million minimally invasive procedures worldwide, enhancing precision and patient outcomes.
Scientific advancement has also been transformed by systems like DeepMind's AlphaFold, which solved the decades-old protein folding problem. On a global scale, the World Economic Forum predicts that by 2025, AI will displace 85 million jobs while creating 97 million new ones, reshaping the labor market. In the energy sector, AI-powered smart grids can reduce electricity waste by up to 10%, promoting greener energy solutions. Additionally, over 90 countries are investing in AI-enabled military technology to modernize their defense systems, showcasing the strategic importance of agentic AI in global security. Now, let us see the applications of agentic AI. Agentic AI is transforming various industries by enabling systems to make autonomous decisions, adapt to changing environments, and achieve specific goals. In autonomous vehicles, it powers self-driving cars and drones that navigate roads, avoid obstacles, and make real-time decisions, as seen with Tesla's Autopilot and autonomous delivery drones. In robotics, agentic AI allows industrial, healthcare, and exploration robots to perform complex tasks independently, as demonstrated by Boston Dynamics robots used in logistics and rescue operations. Personalized virtual assistants like Google Assistant and Amazon Alexa leverage agentic AI to predict user needs, manage schedules, and execute tasks without direct commands. In gaming, adaptive AI agents enhance the experience by creating challenging, human-like opponents, such as AlphaGo and AI bots in real-time strategy games. In healthcare, agentic AI supports personalized treatments, accurate diagnostics, and surgical assistance, with examples including AI-driven surgical robots and systems for remote patient monitoring. These applications demonstrate the transformative potential of agentic AI across diverse domains. Agentic AI is also making a significant impact by enabling autonomy, adaptability, and efficiency in further applications. In finance, it powers algorithmic trading systems and fraud detection tools, optimizing financial operations such as managing investment portfolios and identifying fraudulent activities. In smart cities, AI systems manage energy consumption, optimize traffic flow, and enhance public safety, with examples like smart traffic lights adapting in real time and autonomous energy grid optimization. In space exploration, autonomous spacecraft and planetary rovers, such as NASA's Mars rovers, perform exploration tasks independently. In education, AI-powered tutors like Carnegie Learning provide personalized instruction by adapting to individual learning styles. In military and defense, autonomous drones and surveillance systems improve situational awareness and decision-making, such as AI-driven surveillance drones in defense applications. Now let us see the challenges and risks associated with agentic AI. While agentic AI offers tremendous potential, it also faces several challenges and risks that must be addressed to ensure its safe and ethical deployment. One key concern is misalignment with human goals, where an AI system may pursue objectives that conflict with human intentions due to poorly defined parameters or unintended consequences, such as an autonomous robot prioritizing efficiency over safety. Ethical questions arise regarding accountability in decision-making, demonstrated by the challenge of determining who is responsible when an autonomous vehicle causes an accident.
The complexity of decision-making in agentic AI can also lead to a lack of transparency, making it difficult to understand or explain its actions, particularly in sensitive fields like healthcare or finance. Ensuring safety and reliability is another challenge, as AI systems must operate effectively in unpredictable environments, such as autonomous drones encountering extreme weather or medical systems handling failures. Additionally, agentic AI systems often require substantial computational resources, making their deployment costly, as seen in advanced robotics and self-driving cars. Security vulnerabilities pose further risks, as autonomous systems could be targeted by cyber attacks, potentially leading to harmful consequences like the manipulation of autonomous vehicles. Lastly, overdependence on AI may reduce human oversight or lead to skill degradation in critical areas, such as relying too heavily on autonomous systems for medical diagnosis without human validation. These challenges highlight the need for robust design, rigorous testing, and ethical frameworks to mitigate risks and maximize the benefits of agentic AI. Now let's see the future of agentic AI. Agentic AI is set to be transformative, with advancements across various domains influencing its deployment. Future systems will exhibit increased autonomy and adaptability, enabling them to make complex decisions in real time and operate effectively in dynamic environments without human intervention. The integration of agentic AI with advanced technologies like quantum computing, IoT, and edge computing will further enhance its capabilities, allowing for faster decision-making and real-time processing at the edge. These systems will have widespread applications in sectors such as healthcare, where they will enable autonomous medical diagnostics, personalized treatment plans, and robotic surgery; climate action, with advanced systems for environmental monitoring and response; and space exploration, where smart rovers and spacecraft will carry out missions on their own. As these technologies evolve, ethical concerns and accountability will need to be addressed, prompting the development of regulatory frameworks to ensure responsible AI usage. Additionally, agentic AI will foster human-AI collaboration, enhancing productivity and creativity in fields such as education, engineering, and research.

So, software is changing fast. Just a few years ago, building an app meant writing every single line of code by yourself. But today we don't even need to look at the code to create an app. This is what we call vibe coding. The term "vibe coding" was introduced by Andrej Karpathy, a co-founder of OpenAI, in 2025. According to him, it lets anyone, not just developers, build software; it speeds up the whole process and makes iterating on ideas easier than ever. But is vibe coding the future, or just another passing trend? Comment your thoughts in the comment box below. So now that we have introduced the concept of vibe coding, let's dive into how this innovative approach actually works. Vibe coding transforms the traditional programming process by leveraging artificial intelligence to generate code from natural language descriptions. Here's a breakdown of the workflow. The first step is conceptualization: begin by clearly defining the functionality or the application you want to create. This involves articulating your ideas in natural language, detailing the desired features and behaviors. The next step is AI interpretation and code generation.
You input your description into an AI-powered coding tool. The AI processes your input and generates the corresponding code, effectively translating your natural language instructions into a functional program. The last step, after AI interpretation, is review and refinement. This step involves examining the AI-generated code to ensure it aligns with your expectations; you may need to provide additional instructions or adjustments to refine it. This iterative process continues until the final product meets the requirements. So vibe coding enables individuals, even those without extensive programming experience, to develop software by focusing on high-level concepts while the AI handles the detailed coding. However, it's crucial to review and understand the AI-generated code to ensure its accuracy and functionality. Now that we have explored how vibe coding operates, let's examine some of the leading tools that facilitate it. Several AI-powered tools have emerged to support vibe coding, enabling developers to translate natural language prompts into functional code. Here are some notable examples. The first one is GitHub Copilot. Developed by GitHub in collaboration with OpenAI, Copilot integrates with popular code editors to provide real-time code suggestions based on natural language descriptions. The second is Replit, an online coding platform that offers AI-assisted development features, allowing users to build and deploy applications directly in their browsers. The next tool is Cursor, a code editor designed for AI-assisted development, featuring real-time code suggestions and natural language processing capabilities. These tools showcase the potential of vibe coding to make software development more accessible and efficient by letting developers focus on high-level concepts while the AI handles the detailed coding. Now that we have explored the tools facilitating vibe coding, let's dive into the key benefits this AI-assisted programming approach offers. The first benefit is an accelerated development cycle: it automates repetitive tasks, enabling faster prototyping and deployment. The next benefit is enhanced coding consistency and quality: AI tools enforce coding standards, resulting in more reliable code bases. The next is a lower barrier to entry: it allows individuals without extensive programming experience to create applications using natural language prompts. The next benefit is increased productivity: it frees developers from mundane tasks, allowing them to focus on innovation and strategic aspects. And the last benefit is that it facilitates rapid prototyping, enabling quick experimentation and iteration and fostering innovation and agility in development. Now that we have explored the benefits of vibe coding, it's also important to consider the challenges and considerations associated with it. So let's have a look at the challenges. The first challenge we might face is conceptual understanding: AI tools may generate code that functions correctly but lacks a deep understanding of specific project requirements, potentially leading to misaligned outcomes. The next challenge we might face is security vulnerabilities.
AI-generated code can introduce security flaws if not thoroughly reviewed, as these tools might not adhere to security best practices. The next challenge we might face is over-reliance on automation: dependence on AI for coding tasks might diminish developers' critical thinking and problem-solving

### [6:23:31](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=23011s) How to use Midjourney?

skills, leading to a decline in code quality and innovation. The next challenge we might face is ethical and legal concerns: the use of AI in code generation raises issues related to intellectual property rights and potential biases embedded in the training data. The next challenge we might face is quality assurance: AI tools might produce code that appears correct but doesn't effectively achieve the intended goals, underscoring the need for rigorous testing and validation. Addressing these challenges requires a balanced approach, combining AI capabilities with human oversight to ensure code quality, security, and alignment with project objectives. So now that we have discussed the challenges and considerations of vibe coding, let's move on to the practical demonstration. To effectively showcase vibe coding within Visual Studio Code, we will integrate an AI-powered extension that assists with code generation and completion. We are going to integrate GitHub Copilot into our VS Code and then generate a small program with it. Now let's have a look at the demonstration of vibe coding. We are going to use GitHub Copilot as our AI assistant tool here. To integrate GitHub Copilot into your VS Code, open the Extensions view, type "GitHub Copilot", and here comes the extension; just click on the Install button. I have already installed GitHub Copilot in my VS Code, so I am not getting the option to install, but this is where you will see the Install button. After clicking Install, you have to connect GitHub Copilot with your GitHub account: it will redirect you to your GitHub account, where you just need to authorize it, and after that you will be able to use GitHub Copilot in your VS Code. Once authorized, you will get this GitHub Copilot option here; click it and you get this chat interface. Here you can ask Copilot anything: you can ask it to generate code for anything, and it will generate the code. Let's see an example. I'll give a prompt here: "Generate a code for factorial in Python." I'll just click, and it starts generating the code; here I can see it. Now let me ask one more thing. I have written: "give me code for a form using HTML and inline CSS." I'll click, and it starts generating the code for the form. We'll copy this code and try to run it in our VS Code. Here we can see the code: it has created the form along with the styling. Next we are going to open a folder: we'll create a new folder, name it "demo", and select it. When I open the folder, the chat resets and the code is gone, so I'll ask Copilot to generate the code again, and it has generated it. I'll make a file here named demo.html, copy the code, paste it into the file, and save it. After saving, we'll click on Go Live, and you can see it has rendered the form built with HTML and CSS. We know it used CSS because we can see the box styling, the color of the submit button, and everything. So this is an example of vibe coding.
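For reference, Copilot's suggestion for the factorial prompt varies between sessions, but it typically lands on something along these lines (an illustrative sketch, not Copilot's exact output):

```python
# A representative answer to "Generate a code for factorial in Python".
def factorial(n: int) -> int:
    """Return n! for a non-negative integer n."""
    if n < 0:
        raise ValueError("factorial is undefined for negative numbers")
    result = 1
    for i in range(2, n + 1):  # multiply 2 * 3 * ... * n
        result *= i
    return result

print(factorial(5))  # 120
```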
We just gave the prompt to GitHub Copilot in natural language, and Copilot generated the code according to that prompt. Imagine how it would be if you had a virtual pair programmer alongside you while you're coding, helping you automatically generate code and improve it, making your work faster and more efficient. You can chat with it, ask questions, ask for suggestions, and much more. Sounds good, right? Well, GitHub Copilot is that AI pair programmer of yours that makes it all possible. It is an AI coding assistant that helps you write code faster and with less effort, allowing you to focus more energy on problem solving and collaboration. GitHub Copilot is reported to enable up to 55% faster coding, and to increase developer productivity and accelerate the pace of software development. Looking at industry adoption, around 50,000+ businesses have adopted GitHub Copilot, one in three Fortune 500 companies use it, and 55% of developers say they prefer coding with Copilot. Now, if you want to learn more about this and how to use this AI tool, then you just clicked on the right video. In this GitHub Copilot tutorial, you will learn how to use GitHub Copilot and how it can be your trusty navigator, guiding you on a flight path to a coding career that really takes off. First, let's check how to install GitHub Copilot on your system. We open Visual Studio Code, go to the Extensions view, and type "GitHub Copilot". I have already installed GitHub Copilot on my system, but you can just click on the Install button and it will start installing. After installing, you need a GitHub account: you can see the GitHub link over here; click on it and it redirects you to a GitHub page (I'm already signed in). One more thing: GitHub Copilot needs a paid subscription. To check the subscription plans, visit the official GitHub Copilot website; I'll put the link in the description. Scroll down and you can see the plans: starting at $10 you can begin with a free trial, and then pick whichever plan matches your usage. Once we have installed and subscribed to GitHub Copilot, it's time to use it for our coding. After installation you can see the GitHub Copilot icon over here; this icon indicates that Copilot is active and ready to run. If you don't see the icon, just restart Visual Studio Code and it should appear. Now, I have already created a folder called "GitHub Copilot" in which we are going to store our files and check the capabilities of Copilot. Before we jump into the coding part, let me show you something interesting. Let me create a Python file; I'll name it edureka.py. If I put a comment over here with a question like "what is inheritance in Python", you can see that it is

### [6:31:15](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=23475s) GitHub Copilot

already predicting my question. I press Enter and it generates an answer for me; you can see the answer over here. To accept the suggestion, I just press the Tab key, and for convenience I'll press Alt + Z to wrap the text so the answer is readable. Here is the answer to the question: inheritance is a mechanism in which one class acquires the properties of another class, and it has also given us an example. Now watch: it's also predicting my next question, already suggesting "example of inheritance in Python". If I accept that and press Enter, it gives me the example of inheritance, including a piece of code. For now I won't get into that code; the point is that GitHub Copilot is good at predicting our next questions and next moves, so you can even hold a Q&A with it. Now let's move to the coding part. Let me make an index.html file. I'll put in the basic tags, a title of, say, "edureka", and a container in the body. Over here you can also see the h1 and p tags that GitHub Copilot is suggesting for the edureka title. The suggestion text appearing here is called ghost text: to accept it you press Tab, and to reject it you press Escape. It has given an h1 and a p tag, and I'll press Alt + Z to put it in a better format. Now let's add a stylesheet. I create style.css, and in it I want the h1 tag to be, say, blue. Then I have to link the stylesheet in our HTML file, and Copilot is already suggesting the code for that, so I press Tab and it links the stylesheet. If we run the whole file, you can see it has applied the stylesheet: the heading "Welcome to Edureka" in blue, along with the paragraph text. So this is how GitHub Copilot helps us with HTML and CSS. Now let's check some other capabilities and features of GitHub Copilot. We'll go back to the .py file we created at the start of the tutorial, and you can see the Ctrl + I option: pressing Ctrl + I opens Copilot's inline chat feature. Let's give it a prompt: "generate a code for calculating the number of days between two dates." Let's keep it simple. Once you press Enter, it starts generating the code snippet; if you are satisfied with it, just accept it, and the snippet appears in your code editor. Now suppose there is an error in this code: say I remove the end date argument from the call and I don't know what the error is. How to find it? GitHub Copilot helps with its "Fix This" feature: select the whole code, right-click, go to Copilot, and click "Fix This", and it will fix the code by giving you a suggestion and a solution for the problem you are facing. It reports that the function expects two arguments but only one is provided, and if I accept the suggestion, you can see it put the last argument, the end date, back in.
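The inline-chat snippet in this demo would look roughly like the following; this is an illustrative reconstruction, not Copilot's verbatim output, and dropping the end-date argument from the call is exactly the kind of bug the Fix This feature catches:

```python
# Illustrative version of "calculate the number of days between two dates".
from datetime import date

def days_between(start_date: date, end_date: date) -> int:
    """Return the absolute number of days between two dates."""
    return abs((end_date - start_date).days)

# Calling days_between(date(2024, 1, 1)) without the second argument is the
# bug from the demo; Fix This suggests restoring the missing end date.
print(days_between(date(2024, 1, 1), date(2024, 3, 1)))  # 60
```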
Now, GitHub Copilot sometimes gives exactly the code snippet you want, and sometimes it differs from what you actually want. In that case, select the whole code and press Ctrl + Enter: Copilot comes up with a list of alternative suggestions for that piece of code, and if you're satisfied with any of them you can accept it and Copilot will apply that suggestion to your code. Now let's check one more feature: you can ask Copilot to explain a whole piece of code. Again, select the code, go to Copilot, and choose "Explain This"; in the chat box it comes up with a full explanation of the code. This chat box is very similar to the other AI chatbots you use, like ChatGPT, Google Gemini, or Google Bard, and you can ask it further questions. Say I ask it to write code for the bubble sort algorithm: you can see it comes up with a bubble sort code snippet. This is much like what we do in ChatGPT or other AI chatbots, but the key advantage is that this chat appears inside your code editor, so you can directly copy and paste the code without switching to a separate browser. In short, it makes your job easier. For convenience, you can even drag this chat box anywhere in the code editor; for example, I can dock it over here, or place it wherever suits you. After checking these basic features, let's see how they actually help us build something. For now, let's try to make a simple chatbot in Python with the OpenAI API and see how quickly we can finish building it. We start by importing the OpenAI library; to install it, go to your terminal and type pip install openai. After that, let's put in our API key, and you can see that GitHub Copilot is already ready with its suggestion in the form of ghost text. In case you don't know how to generate OpenAI keys, simply visit the OpenAI website, and in the platform section you can generate your API key; I'll put the link in the description. I'm going to copy my API key and paste it here. Then let's make a function, def chat_with_gpt, and once we write the signature Copilot is already ready with the body, which is exactly the code we want: response = openai.Completion.create(engine="gpt-3.5-turbo-instruct", prompt=prompt, max_tokens=150). Decent enough. Now, what is this code about? This function generates text from the input prompt using OpenAI's language model. The parameters of the completion call are the engine, the prompt, and max_tokens. The engine specifies which version of the language model to use; in this case it's the GPT-3.5 Turbo Instruct engine. Then comes the prompt.
The prompt is the text provided to the model to generate a response; it's what the user wants to chat about or get information on. Then we have max_tokens: this parameter limits the number of tokens in the generated response. Here it is set to 150, which means the response won't be too long. Once the response is received from the OpenAI API, the function returns the generated text: the .choices[0].text.strip() part extracts the generated text from the response object and removes any extra whitespace from the beginning and the end. Now let's move forward and create the main block. We put if __name__ == "__main__":, and again Copilot comes up with ghost text that is exactly what I want. What does this main block do? The code first checks whether the file is being run directly, then enters a loop where it continuously prompts the user for input. If the user types an exit command, the loop breaks; otherwise it prints the user's input and the bot's response. Now let's run our chatbot in the terminal. I'll ask "who are you?", and you can see the generated response, something like "I'm OpenAI's GPT-3 language model, here to assist you with a variety of tasks." Great, our chatbot is working. Let's try one more question: "what is the difference between a class and an object?" Again it comes back with an answer: a class is a template or blueprint that defines the characteristics and behaviors of a type of object, whereas an object is an instance of a class. So it answers the questions you ask. We have just witnessed how easily we built this chatbot with a few clicks, simply by accepting the recommendations from GitHub Copilot. After the chatbot, let's also make a linear regression model. We'll start by importing the required libraries. First import numpy, which it's already suggesting, so I press Tab. I don't need torch, but from scikit-learn I need to import the datasets module and the linear model, and from scikit-learn's metrics I need mean squared error; yes, I do need mean squared error, so I accept that. Then I start by loading the diabetes dataset, which Copilot is already suggesting, so I press Tab and it gives me the code. What I want is to use only one feature of the dataset, so again I press Tab and Enter; it gives me the code for that, having already done the slicing. Then I print diabetes_X to check the dataset: this is exactly the single feature we are taking to train our model, so I'll delete the print and move on to building the model. Next we have to split the data into training and test sets; Copilot already knows what we are planning to do, so I press Enter. It takes the first 80 elements for the training set, and I'm okay with that; for testing it takes the last 20, and I'm okay with that too. Then for the target I need training and testing splits as well: diabetes_y_train and diabetes_y_test, exactly what I want. And finally I create the linear regression object, the model itself. As you can see, I'm just writing the comments and Copilot is already showing me the code for each step.
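Stepping back for a moment, the chatbot we assembled above comes together as roughly the following script. Note this sketch targets the legacy pre-1.0 openai SDK, which is what the Completion.create call in the walkthrough belongs to; newer SDK versions expose a different client interface.

```python
# The Copilot-assisted chatbot, consolidated (legacy openai<1.0 SDK).
import openai

openai.api_key = "sk-..."  # replace with your own API key

def chat_with_gpt(prompt):
    # gpt-3.5-turbo-instruct is a completion-style engine: it takes a raw
    # prompt and returns generated text, capped here at 150 tokens.
    response = openai.Completion.create(
        engine="gpt-3.5-turbo-instruct",
        prompt=prompt,
        max_tokens=150,
    )
    # choices[0].text holds the reply; strip() trims surrounding whitespace.
    return response.choices[0].text.strip()

if __name__ == "__main__":
    while True:
        user_input = input("You: ")
        if user_input.lower() in ("exit", "quit", "bye"):
            break
        print("Bot:", chat_with_gpt(user_input))
```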
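Likewise, the linear regression being assembled here and in the next paragraph boils down to roughly the script below. It mirrors the classic scikit-learn diabetes example (which holds out the last 20 samples for testing), so the exact split Copilot suggests in the demo may differ slightly.

```python
# The Copilot-assisted linear regression walkthrough, consolidated.
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error

# Load the diabetes dataset and keep only one feature
diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True)
diabetes_X = diabetes_X[:, np.newaxis, 2]

# Split into training and test sets (last 20 samples held out for testing)
diabetes_X_train, diabetes_X_test = diabetes_X[:-20], diabetes_X[-20:]
diabetes_y_train, diabetes_y_test = diabetes_y[:-20], diabetes_y[-20:]

# Create and train the linear regression model
regr = linear_model.LinearRegression()
regr.fit(diabetes_X_train, diabetes_y_train)

# Predict on the test set, then report coefficients and error
diabetes_y_pred = regr.predict(diabetes_X_test)
print("Coefficients:", regr.coef_)
print("Mean squared error:",
      mean_squared_error(diabetes_y_test, diabetes_y_pred))

# Plot the test points and the fitted line
plt.scatter(diabetes_X_test, diabetes_y_test, color="black")
plt.plot(diabetes_X_test, diabetes_y_pred, color="blue", linewidth=3)
plt.show()
```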
So for this particular model I barely had to code a single line myself; maybe a line or two, but everything else GitHub Copilot suggested. I press Tab, then I have to train my model, exactly what I wanted: it fits on diabetes_X_train and diabetes_y_train. After that I make predictions: diabetes_y_pred, predicting on the X test set, exactly what I want. Then I also want the coefficients, and it gives me the code to print them; then the mean squared error, and it gives the code for that too. Tab, Enter. Then the plot output: for that we need plt.scatter for the points, plt.plot for the fitted straight line, and plt.show() to display it. So the linear regression model is ready, and you can see how little coding I did manually; GitHub Copilot made the job as easy as it can be. I'll run the Python file in the terminal. So you saw how conveniently and quickly we made these projects using GitHub Copilot. However, these were just basic projects, built to give you an idea of what GitHub Copilot can do.

Have you ever dreamed of creating stunning visuals without being a professional artist? This part will help you with everything you need to master Midjourney, from generating basic photos to utilizing its advanced features. Midjourney is a generative AI tool that turns your text descriptions into stunning images. Let's see how the industry can use Midjourney: users can generate mock-ups with a simple text prompt and visualize ideas before development; marketers can create eye-catching visuals for campaigns, test multiple concepts, and quickly personalize content for their target audience. Midjourney even helps create illustrations, build fictional worlds, and visualize data for content creators, and lets them go beyond that. Midjourney is a game-changer, streamlining design, marketing, and content creation within the IT industry. Before we go and explore it, let's see some of what it produces: a realistic portrait of a woman holding a white cat, an anime-like character with powerful effects, a drawing of an idyllic scenery, and a cinematic view of a soldier in the middle of a war. Why use Midjourney? I would give just three points: it's easy to use, it boosts creativity, and it lets you explore different styles and find the one you like. To get started, let's see how to sign up for Midjourney. Open your favorite browser, go to the search bar, type "midjourney", and hit Enter. You'll see Midjourney's official website; click on it, and you'll land on a page that looks something like this. To join Midjourney, click on "Join the Beta". This takes us to their Discord invite, so you need a Discord account; since we have already created one, we'll just sign in. As soon as you join, you will see Midjourney's icon on the left-hand side of your screen; it looks something like a sailing boat. To continue, scroll down and join any of the "newbies" channels. There you can see the creations of many people: it's an open channel where anyone can write a prompt and get their image generated right there. But your image gets buried almost as soon as you create it, because there's so much content in the channel.
To add the Midjourney bot to your own server, or to message it directly, go to the left-hand side and click the symbol that looks like this. Here you can see the Midjourney Bot; right-click on it, open its profile, and a menu like this appears. If you click on "Send Message", it takes you directly to the Midjourney bot, where you can message it and get your results. Let's check our account information: type /info and hit Enter. Here we can see our user ID, our subscription, the fast time remaining, and how many images we have created with Midjourney. To add the Midjourney bot to a personal server, we first have to create a server. On the left-hand side of your screen you can see a button like this; click on it, select "Create My Own", choose "For me and my friends", and name it. For now, we'll name it "Edureka server 3" and click Create. Now we've landed on our server page. To add the bot to the server, we'll just click over here, where we can see the Midjourney Bot. When you click on it, you

### [6:48:34](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=24514s) What is Vibe Coding

can see an option to add the app. Click on it and choose the server you want to add it to; we'll choose Edureka server 3 and click Continue. After that, select the permissions you want to give the Midjourney bot, then authorize it. Here we got success. After adding the Midjourney bot to your server, you'll see a message like this. Let's go into Edureka server 3; we can see the Midjourney bot has landed. Let's create a first image. Type /imagine and hit Enter; you'll see a prompt option, and after the word "prompt" you give the description of the image you want. For now we'll type: "a bustling city street in modern era." Let's see what Midjourney generates. The image has been generated; click on it to get a better view. Images are generated in a grid of four, numbered 1, 2, 3, and 4. Below the generated grid you can see multiple buttons labeled U and V, where the number corresponds to the image, plus a re-roll button that looks something like this. U stands for upscale and V stands for vary. If you like an image, for instance image two, click U2. As you can see, the upscaled image has been generated; this is exactly the image I selected, number two. If you like it, you can download it: click "Open in Browser", zoom in and out to get a better view, then right-click on it and save it as an image; it will be downloaded to your disk. Back in Discord, below the upscaled image we again see multiple options, but this time they are different. Here we have Upscale (Subtle) and Upscale (Creative): both upscale the current image, but in different ways. Subtle only slightly changes your image as it upscales, whereas Creative will change something in the image. Vary has the options Subtle and Strong: Strong generates a noticeably different image with a similar reference, whereas Vary (Region) lets you select a region of the image you want to change and regenerate just that part; we'll dive into that later. Then there's zoom out, with options Zoom Out 2x, Zoom Out 1.5x, and Custom Zoom. If you want to zoom out, click any of these, and to customize your zoom, click Custom Zoom; we'll dive into that later too. Moving on, we can see multiple arrows pointing in different directions. What do they mean? If you click an arrow, it pans the image in the direction the arrow points: left arrow pans out to the left, right arrow pans out to the right, and similarly for the top and bottom arrows. Now let's test the zoom: we'll click Zoom Out 2x and see what it generates. The image has been generated; we'll click into it and upscale all of these to get a better view. This is our original image; let's examine the zoomed-out versions. In the fourth image, parts have been added that were not originally there. In the third image, we see a different scene with a zebra crossing, whereas in the fourth there is none. In every variant we can see a different vehicle or a different building; the original image itself is unchanged.
But to zoom out, it had to invent new objects, as we can see. Now let's look at panning. To return to our original image, we can click on this, and it takes us back. Say I want to see what's up in the sky in that image: click the pan button with the top arrow. It gives us a prompt window where you can describe what you want; since we are talking about the sky, we could have a moon or a sun there, or a plane flying. Let's go with a sun in cloudy weather and click Submit. As we can see in our newly generated images, there is a sun, there are clouds, and in some, both. Now, to craft a good prompt you need to use proper keywords and describe as much as you can. To show this, we'll regenerate the image in a painting style: type /imagine, use the same prompt as before, and just add "oil painting". Hit Enter. Here Midjourney has used vibrant colors in an oil painting style, and all the images look stunning. Now we'll look at some of its advanced features. One that Midjourney released recently is Describe. The Describe feature lets you submit an image and get prompts for it, which you can then use to generate new images similar to that one. Type /describe; there are two options, image and link. Click "image" to upload an image from your system, or click "link" to provide a URL from anywhere that Midjourney can use to generate a description for that image. For now we'll click on image; you can drag and drop your picture or select it. I have selected the image, so I hit Enter, and as a reply to the image the Midjourney bot generates several prompts that can be used to create similar images. Scrolling down, you can generate from any one prompt by its number, or imagine them all; let's imagine all. Here we can see the images generated from all four prompts, and they are similar to the one we provided. Now let's talk about settings in Midjourney: type /settings. Here we can select the model version we want, along with several other options. Stylize (low, medium, or high) controls the amount of Midjourney's creative influence on the pictures. Next is public mode, which makes your pictures visible in Midjourney's public collection. Then comes remix mode, which lets you give an extra prompt when creating a variation of an image; if it's turned off, it won't ask for a prompt again and will just vary your current image based on its existing prompt. Then there is variation mode: when we choose a variation, it checks our preference, low or high, and gives us a varied image accordingly. Then we have the different speed modes: relax mode, fast mode, and turbo mode, which define the speed at which Midjourney generates your images. Relax mode generates slowly; fast mode uses your fast hours but generates faster; and turbo mode gives near-instant results but consumes even more fast hours. You can click "Reset Settings" to restore the defaults. Lastly, raw mode applies a more literal interpretation of your prompt. Now let's design some logos. I have chosen a logo prompt for a bakery, and Midjourney has given us some amazing logos.
We can select any one of those and use it as our own. Now let's explore the different parameters that Midjourney allows. For starters, we use a new prompt. To add a parameter, you append two dashes (--) followed by the parameter name at the end of the prompt. For the first parameter we'll use aspect ratio, the ratio between the width and height of an image: we type --ar 16:9 and hit Enter. As we can see, the aspect ratio of the image has changed from the previous ones: those images

### [6:57:44](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=25064s) Top 5 AI Frameworks in 2025

were square, and this is a widescreen image. Now let's discuss a different kind of prompt: the negative prompt. We'll type the same prompt again; note that you can add multiple parameters in a single prompt. For negative prompting you use --no followed by the object you want to remove. We want to remove clouds, so we type --no clouds and hit Enter. In the original images there were clouds here and here, but in our newly generated images there are none; Midjourney has even taken the unique approach of putting our dolphin underwater. Next let's check the parameter that controls Midjourney's imagination: stylize. We type the same prompt again. For stylize, the range runs from 0 to 1,000, and you use --s followed by the value you want, 0 being the lowest and 1,000 the highest. We'll type --s 1000 and see the result: it's a fine result, and it is stunning; I am amazed. Moving to another parameter, chaos: higher values of this parameter lead to more unexpected and unique outcomes. Let's try it with a chaos value of 1; we get results something like this. Now let's move on to using reference images to create our own images. For that, we need to upload one of our photos: upload the file, open it in the browser, and copy its link. For our next prompt, type /imagine, paste the copied link, then type the rest of the prompt: "near a lake with rocky mountains and cloudy sky." Here are our generated images. Now let's see another parameter, image weight, which controls the influence of the referenced image on the newly generated image, with values ranging from 0 to 2, 0 being the lowest and 2 the highest. The image weight parameter is given by --iw; with the value set to 0, the reference is essentially not taken into consideration. Now let's try it with the value 2: the reference image becomes the first priority and the text prompt second. With that, let's see how to create stock images. To create stock images, use words like "realistic" and describe the image in as much detail as you can: "photography of adult Caucasian female wearing a white shirt and black tie, smiling, arms crossed, studio shot, cinematic look, clean background, white background." Adding all of this, we hit Enter; you can see there are multiple keywords that signal a stock image, and here are the generated images. Let's upscale the first one: it's a convincing, real-looking stock image. Let's create a second stock image, a group this time: "photo of a diverse team celebrating a successful project in a co-working space" (remember the keyword "photo"). Hit Enter, and we'll upscale the fourth image; definitely a good one. With that, we'll conclude this part, and now I'll show you how you can use the Midjourney website. When you enter the Midjourney website, you can see a collection of photos like this, generated by many Midjourney users. You can find all your own generated images in the archive and see all of these images right over here. You can join rooms based on themes, or rate the images as you like. Midjourney has recently introduced its web creation alpha, which allows you to create images on the web; this is currently available only to users who have already created more than 100 images.
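Before moving on to the website, here is a consolidated view of the parameter syntax from the walkthrough above. These are illustrative prompts: the flags --ar, --no, --s, --chaos, and --iw are real Midjourney parameters, while the prompt text and the reference URL are just placeholders standing in for the demo's own.

```
/imagine prompt: a dolphin jumping over ocean waves at sunset --ar 16:9
/imagine prompt: a dolphin jumping over ocean waves at sunset --no clouds
/imagine prompt: a dolphin jumping over ocean waves at sunset --s 1000 --chaos 1
/imagine prompt: https://example.com/reference.jpg near a lake with rocky mountains and cloudy sky --iw 2
```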
You can type your prompt directly there without any command like /imagine. Let's type "shark in mechanical suit". Our image is being generated in the Create tab; let's check it out. These are the images of a shark in a mechanical suit. You can vary any of them, subtle or strong, or use the rerun button to re-roll the images. You can even see the prompts you gave for your previously generated images.

In today's modern digital era, we can witness the impact that AI has created on our world. From healthcare to finance to transport, AI has played a huge role in developing these sectors. But do you know what is more interesting? It's when AI contributes to the field of art and generates some really interesting art pieces, just like this. And what's even cooler is that you can generate these kinds of images with just a single prompt. The thing which makes it all possible is called Midjourney. Midjourney is an AI model that combines art with technology, blended with human creativity, to produce stunning and unique art pieces. This tool became a game-changer for a lot of content creators: many of them are using it to grow their Instagram pages or to generate YouTube thumbnails, and many top companies are getting in on the action, using it for all sorts of design work. Now, if you are interested in exploring this intersection of art and technology, then you just clicked on the right video. Stay tuned till the end to get a solid overview of this AI tool called Midjourney. On July 12, 2022, the Midjourney image generation platform entered beta. Midjourney, Inc. is the company that owns the platform; it was founded by David Holz, who was previously the co-founder of Leap Motion. The company has been improving its algorithms, releasing new model versions every few months; on December 21, 2023, the alpha of version 6 was released, which can be considered to generate the most advanced and detailed images of all the versions up to that point. But now the question is: how does it work to generate such realistic and detailed images? This AI model uses machine learning algorithms to analyze and interpret data, which can be in the form of text, images, audio, or video; using this data, the algorithms generate new images by analyzing the patterns in the input. One of the most common deep learning architectures discussed here is the generative adversarial network, also known as a GAN. This architecture contains two main entities: the generator and the discriminator. From random noise, the generator generates an image, which the discriminator evaluates as real or fake. After evaluating, the discriminator sends feedback to the generator, and using this feedback the generator gets better at producing realistic images. The cycle goes on until the generator produces an image the discriminator fails to identify as fake, at which point the generator has learned to produce convincingly new images. Another deep learning architecture mentioned is the recurrent neural network, or RNN: a class of models designed to process sequential data by leveraging an inherent memory. It uses a feedback loop through which it remembers previous inputs and then uses that information when generating new pieces of data.
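To make the generator/discriminator feedback loop concrete, here is a toy GAN training loop in PyTorch. This is a minimal sketch on made-up 2-D data rather than images, purely for illustration; it is not Midjourney's actual architecture or code.

```python
# A toy GAN loop illustrating the generator/discriminator feedback cycle
# described above (PyTorch; 2-D points instead of images, for brevity).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))  # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(),
                  nn.Linear(16, 1), nn.Sigmoid())                 # sample -> P(real)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, 2) + 3.0   # stand-in "real" data distribution
    fake = G(torch.randn(32, 8))      # generator draws from random noise

    # Discriminator: learn to score real samples as 1 and fakes as 0
    d_loss = (bce(D(real), torch.ones(32, 1)) +
              bce(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: use the discriminator's feedback to make fakes look real
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```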
These algorithms work more efficiently when combined with human creativity. Now that we know how it works, let's put our knowledge into practice. So turn on your computer screens and let's start the Midjourney tutorial. First of all, open your browser and type midjourney. The first link here is the official link for Midjourney, and once you click on it, this interface will appear in front of you. Then click on Join the Beta. In order to access Midjourney, you need a Discord account, and once you verify the Discord account, you'll get access to Midjourney. So I accept the invite. Now, if I go down and click any of the newbies channels, you can check out the community that is using Midjourney right now. Going down, there is a specific way of providing a prompt to Midjourney: we use /imagine, and over here you can see the prompt field, where I can write any prompt. Let's give a simple one: lion with a crown in a sunset. You can see that it's waiting to start, and in the meantime you can watch the other images being generated by people in this community. Ours is 15% done; we'll keep waiting while the image is in progress. It's 45% done. We'll wait a little more, watching the other images being generated continuously, and there it is: our lion with a crown in a sunset. To be honest, I like the third image, so I'm going to click on U3, and over here you can check your image. Click on the image, open it in the browser, and from there I can download my image. Yes, that's downloaded. Now, Midjourney comes with a subscription. To check it, type /subscribe and hit enter, go to manage account, and over here you can see the subscription plans. We are currently on the standard plan, which is active; you can subscribe to any of the plans and use Midjourney. So that was the tutorial part for Midjourney. Now let's move forward. After all this, a question comes up: will Midjourney eventually replace human artists? Let's find out. In 2022, a Midjourney image called Théâtre D'opéra Spatial won first prize in the digital art category at the 2022 Colorado State Fair. Jason Allen wrote the prompt that led Midjourney to generate the image, printed it onto a canvas, and entered it into the competition under the name "Jason M. Allen via Midjourney." The two category judges were unaware that the image had been generated with AI, although they later said that even if they had known, they would have awarded Allen the top prize anyway. Another incident took place in Italy: the leading newspaper Corriere della Sera published a comic created with Midjourney, written by Vanni Santoni, in August 2022. In December 2022, the creator Ammaar Reshi used Midjourney to generate a large number of images, from which he chose 13 for a book titled Alice and Sparkle. The book features a young girl who builds a robot that becomes conscious. All these incidents tell us that humans and AIs are not enemies or competitors; they work together to bring about a new revolution. Yes, in some areas we have to agree that AI can work more efficiently than humans, but we also have to remember that there is always a human brain behind the AI. These AI tools, when blended with human creativity, amplify the quality of the output.
And for that, all we need to do is learn these AI tools and then put our creativity into them. Very soon we will be witnessing the progress we make together. Studio Ghibli: a name synonymous with breathtaking animation, deep storytelling, and unmatched artistic craftsmanship. But in today's era of artificial intelligence and cutting-edge technology, how is Ghibli evolving? And what role does AI play in enhancing its legacy? In this video, we will dive deep into the world of Ghibli's animation and explore how AI is shaping the future of anime. Now, let us understand the difference between traditional and AI-enhanced Ghibli animation. Studio Ghibli has always been known for its hand-drawn animation style. From My Neighbor Totoro to Spirited Away, every frame is a work of art. But with the advent of AI, the way we restore, enhance, and even create animation is changing. Many classic Ghibli films were created in lower resolutions, and thanks to AI-powered upscaling using deep learning algorithms, the detail in these films is enhanced, edges are sharpened, and the original artistic intent is preserved, all while making them suitable for 4K and even 8K displays. AI can help

### [7:12:29](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=25949s) How AI Is Transforming Studio Ghibli-Style Animation?

improve frame rates, making animation smoother without losing that signature hand-drawn aesthetic. This opens the door to remastering iconic films with a modern touch. Now let us look at an example: a comparison of an AI-enhanced Spirited Away in 4K versus the original version. First, the original version. "I'll leave the door open for you." "Courage." "It's Haku. He's hurt." "Haku." "Haku. This way." All right. Now let us discuss AI-assisted animation techniques. AI-driven animation tools like Adobe Sensei, Deep Dream, and Runway ML are transforming the creative process. Even text-based models like ChatGPT, when integrated with image generation tools, can now create stunning visuals inspired by the Ghibli style just from a scene described in words. This opens up exciting possibilities for artists to prototype ideas and bring imaginative worlds to life more quickly than ever before. Ghibli's animation process has always been meticulous, often requiring thousands of hours of frame-by-frame drawing. But today, AI-assisted tools are making animation more efficient while still preserving the signature Ghibli style. One such innovation transforming the animation process is AI line smoothing: advanced tools can now analyze hand-drawn sketches and refine the lines to make them cleaner and more consistent, maintaining the artistic charm without compromising the flow. The next innovation is colorization and shading: AI-based coloring tools can help artists by automatically applying accurate colors and shadows, which speeds up production without sacrificing the depth and richness that define Ghibli's iconic visuals. Next is auto in-betweening: by generating in-between frames, AI helps maintain the smoothness of the animation while reducing manual work for animators, giving them more creative freedom. The next innovation is machine learning and Ghibli's artistic style: neural networks are now capable of recreating Ghibli's aesthetic. And if you're wondering whether AI can actually learn Ghibli's animation style: yes, it can, and it's happening right now. The next innovation is AI-generated concept art: deep learning models can now generate new backgrounds and character designs that fit seamlessly into Ghibli's visual universe. Next are GANs, generative adversarial networks: these AI models can replicate hand-drawn brush strokes and textures, bringing us closer to a world where AI-generated anime can evoke the same emotional impact as traditional animation. Next is AI in audio and voice acting. Ghibli films are renowned for their iconic voice acting and rich sound design, but now AI is making its mark in this area as well. First, AI voice cloning: with AI, voice models can now replicate the performances of classic voice actors even after they have retired, allowing the original performances to live on. The next innovation is sound restoration and enhancement: old audio tracks from Ghibli films can be restored and enhanced using machine learning, bringing new life to classic soundtracks and soundscapes. All right, let us discuss the future of Ghibli and AI. So, is AI the future of Ghibli? While AI can never replace the human touch that defines Ghibli's magic, it's clear that technology is playing a bigger role in animation than ever before. AI can now improve image quality, fill in animation frames, help write stories, and might even create full anime on its own. There is no limit to what it could do. So, what do you think? Should AI be embraced in anime production, or should Ghibli stick to its traditional roots? Let me know in the comments below.
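Before we move on, here is what the upscaling idea from earlier looks like in practice: a minimal sketch using OpenCV's deep-learning super-resolution module. This assumes the opencv-contrib-python package and a pre-trained EDSR model file downloaded locally; it is an illustration, not Ghibli's actual remastering pipeline.

```python
# Minimal AI-upscaling sketch using OpenCV's dnn_superres module.
# Assumes: pip install opencv-contrib-python, plus the pre-trained
# EDSR_x4.pb weights file saved locally (an assumption for this demo).
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")   # load the pre-trained super-resolution weights
sr.setModel("edsr", 4)       # EDSR architecture, 4x upscaling factor

frame = cv2.imread("frame.png")   # a low-resolution animation frame
upscaled = sr.upsample(frame)     # deep-learning upscale: sharper, cleaner edges
cv2.imwrite("frame_4k.png", upscaled)
```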
Let's explore six key sectors where web developers are making a significant impact. The first one is the software industry. This sector thrives on cutting-edge tools and applications, and web developers play a vital role in building and optimizing them. The next one is healthcare. From patient portals to telemedicine platforms, web development is transforming healthcare by enhancing accessibility and efficiency. The third one is e-commerce. Online shopping platforms rely on fast, interactive, and user-friendly websites, which web developers are essential in creating. The next one is media and entertainment. Streaming services, gaming sites, and online content platforms depend on dynamic web applications to provide a seamless experience. Then we have finance and fintech. Secure online banking, payment gateways, and investment tools require robust development, making this industry a hotspot for developers. The last one is education and e-learning. The rapid growth of online education platforms highlights the demand for engaging, scalable, and user-friendly web

### [7:17:56](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=26276s) AI in Web Development

solutions. So each of these sectors is not only growing but also offering incredible opportunities for web developers. Let's explore each role one by one. Web development offers diverse career paths, each tailored to different skills and interests. Here are five major job profiles for web developers. The first is the front-end developer. Front-end developers are responsible for building the visual and interactive elements of a website. They ensure users have an engaging and seamless experience by working with technologies like HTML, CSS, and JavaScript. The next is the back-end developer. Back-end developers manage the server side of web applications. They focus on databases, APIs, and server logic to ensure websites and applications run smoothly and securely. The next is the full-stack developer. Full-stack developers are skilled in both front-end and back-end development. They can handle the entire development process, making them versatile and highly valuable in the industry. The next is the AI-integrated web developer. These developers specialize in building websites that incorporate AI-powered features such as chatbots, recommendation systems, or predictive analytics to enhance user engagement and functionality. The next is the web application security specialist. Security specialists focus on safeguarding websites and applications from cyber threats. They identify weaknesses, implement security protocols, and ensure data protection for both users and businesses. Each of these roles plays a critical part in creating and maintaining powerful, user-friendly, and secure web applications. Depending on your interests and skills, you can choose a path that aligns with your career goals. So, as you can see, there are roles for everyone in web development, whether you are into design, coding, AI, or security. Start with the area that excites you the most and keep learning. Now let's talk about the earning potential of web developers. The average salary expectations in India vary depending on your specialization. Front-end developers typically earn around 5 lakhs per year. Back-end developers can expect to earn around 7 lakhs per year. Full-stack developers can earn around 8 lakhs per year. And if you have skills in AI and web security, you can earn even more: AI-integrated web developers can earn around 12 lakhs per year, and web application security specialists can earn around 15 lakhs per year. So if you're looking for a career with a bright future and high earning potential in India, web development is a great option. Now let's take a look at the salary expectations of web developers in the United States. Front-end developers in the US can expect to earn around $84,000 per year. Back-end developers can earn around $94,000 per year. Full-stack developers can earn around $105,000 per year. AI-integrated web developers can earn around $90,000 per year, and web application security specialists can earn up to $88,000 per year. So while web development offers a lucrative career in India, the earning potential in the US is even higher. After discussing salaries in India and the USA, let's explore some emerging trends in web development that are shaping the future of this field. Starting with AI integration: artificial intelligence is revolutionizing web development. AI-driven features like chatbots, personalized recommendations, and smarter user experiences are becoming essential for modern websites. Next, we have Web 3.0,
the evolution of the internet focusing on decentralization, blockchain, and user data ownership. It's all about creating a more secure and transparent web. Then we have progressive web apps, which combine the best of the web and mobile apps. They are fast, reliable, and can work offline, providing a seamless user experience. After that, we have voice search optimization, another big trend now that devices like Alexa and Siri are part of everyday life; websites need to adapt to handle voice-based search efficiently. And finally, we have low-code and no-code tools. These tools let developers, and even non-developers, build websites and apps quickly with minimal coding effort. These trends are redefining how we approach web development, and staying updated on them is crucial for anyone looking to thrive in this field. Now let's talk about AI integration in web development. AI integration means using artificial intelligence technologies and tools to make web development smarter and more efficient. It helps improve user experience, automates repetitive and complex tasks, and even enhances the overall performance of websites. Now let's talk about how AI is transforming the way we build websites and apps. The first one is AI-powered code suggestions. AI tools like GitHub Copilot can analyze your coding context, suggest relevant code snippets, and even write entire functions, significantly boosting your development speed and accuracy. The next one is AI testing tools. AI-powered testing tools can automate repetitive tasks, identify potential bugs early in the development process, and generate comprehensive test reports. This leads to higher-quality software and faster time to market. The next one is intelligent chatbots. AI-powered chatbots can provide 24/7 customer support, answer frequently asked questions, and assist users with complex tasks. This improves customer satisfaction and reduces the burden on human support agents. The next one is personalized user experiences. AI can analyze user behavior, preferences, and demographics to deliver tailored experiences. For instance, e-commerce websites can recommend products based on a user's history, and streaming services can suggest content tailored to individual taste. The next one is AI-driven search engine optimization. AI algorithms can analyze search engine trends, competitor strategies, and user behavior to optimize website content and improve search engine rankings. This helps businesses attract more organic traffic and increase visibility. The next one is predictive analytics. By analyzing historical data, AI can predict future trends and patterns. This enables businesses to make informed decisions about product development, marketing strategies, and resource allocation. So, as we can see, AI is not just the future of web development; it's already here and changing the game. Now, let's have a look at some AI tools that can be used in web development. AI tools are transforming web development by simplifying workflows and boosting efficiency. The first AI tool we have is ChatGPT. It is a conversational AI that acts as a versatile assistant, offering coding advice, debugging solutions, and even help generating documentation. It's perfect for brainstorming creative ideas or quickly understanding complex coding problems. The next one is GitHub Copilot. It serves as an AI pair programmer, predicting and suggesting code as you type.
It can help you write entire functions or classes in multiple programming languages, making it a must-have for developers working on diverse stacks. The next one is 10Web AI. It is designed specifically for WordPress, automating tasks like website building, content creation, and SEO optimization. This tool is ideal for beginners who want professional websites without deep coding expertise. The next one is Wix ADI. It stands out with its ability to create websites tailored to user needs through a simple questionnaire, which is especially useful for small businesses or individuals who need beautiful, functional websites without much effort. The next one is Galileo AI. It focuses on UI design, generating wireframes and design components instantly from text prompts. It's a great tool for accelerating the prototyping phase, making it a favorite for designers and developers alike. The next one is Applitools. It offers AI-powered visual testing to detect inconsistencies in layouts and designs across devices and browsers. Each of these tools brings unique strengths, enabling developers to work smarter, save time, and deliver exceptional user experiences. So now we know AI has completely transformed the web development process, offering several key advantages that enhance efficiency and innovation. Let's have a look at some of them. The first is content generation. AI tools can help create relevant, high-quality content for websites; from blog posts to product descriptions, AI enables faster and more engaging content creation. The next is predictive insights. With the power of data analysis, AI can predict user behavior and trends, allowing developers to create websites that align with user preferences and future demands. The next is stronger security. AI-based solutions can detect and mitigate vulnerabilities in real time, providing enhanced protection against cyber attacks and keeping websites secure. The next is better design. AI tools offer smart design processes, helping to create visually appealing websites without extensive manual effort. The next is automated testing. AI-driven testing tools make it easier to identify bugs and ensure a seamless user experience by automating repetitive testing processes. The next is improved SEO. AI helps optimize websites for search engines by analyzing data and suggesting improvements for better rankings and increased visibility. The next is better user experience. With AI, developers can create personalized, dynamic user experiences tailored to individual preferences, making websites more interactive and engaging. The last is faster development. AI accelerates the entire development process by automating repetitive tasks, allowing developers to focus on innovation and creativity. So that's it for the advantages of using AI in web development. Now let's dive into another exciting topic: Web 3.0, the next evolution of the internet. Web 3.0 begins with the concept of a decentralized internet, where control is no longer centralized in the hands of large corporations and servers. This decentralized model is powered by blockchain technology, which ensures secure management of data and promotes transparency. AI, as a backbone of Web 3.0, enhances functionality and decision-making processes.
It also introduces the concept of user-controlled data, empowering users to take charge of their personal information and decide how and where it is used. This new structure of the internet enables three major outcomes. First, it fosters trust by ensuring transparency and fairness in interactions. Second, it prioritizes privacy, giving users confidence that their data is safe and secure. Lastly, it creates immersive experiences, enhancing how we interact with digital spaces and making them more engaging and interactive. So, as we can see, Web 3.0 is more than just a technology leap; it is a revolutionary shift toward an internet that is more democratic, secure, and user-focused. Now that we have explored what Web 3.0 is, let's look at how it will impact web development. Web 3.0 is set to bring revolutionary changes to the digital world. First, decentralization and trustless operations are key: with Web 3.0, data won't be controlled by a central authority; instead, blockchain technology enables transparent and trustless interactions. Next, Web 3.0 will also integrate emerging technologies such as AI, IoT, and AR/VR, creating smarter and more connected systems. Then there is enhanced data security: with encryption and distributed systems, user data will be safer, reducing the risk of breaches and unauthorized access. Another major impact is on user experience: Web 3.0 aims to deliver highly personalized, intuitive, and seamless interactions, making the web more user-centric. Next, it introduces new economic models, enabling direct transactions through cryptocurrencies, smart contracts, and tokenized assets, cutting out intermediaries. Finally, it promotes broader adoption and sustainability, aligning with environment-friendly practices and creating a more inclusive digital space for everyone. In summary, Web 3.0 isn't just an upgrade; it's a paradigm shift that will redefine how we interact with the web. So, that's it about Web 3.0. Now, let's shift our focus to progressive web apps. These are a modern approach to web applications that combines the best of both worlds, offering the rich features of native apps while leveraging standard web technologies like HTML, CSS, and JavaScript. Progressive web apps stand out because they are accessible through a web browser but can also be installed on devices like native apps. This means users don't have to visit an app store to download them, making them more convenient and lightweight. Some examples of progressive web apps are Spotify, Telegram, and Pinterest. Now that we understand what progressive web apps are and their unique capabilities, let's explore why they are considered the future of web development. As we move forward, businesses and developers are increasingly adopting progressive web apps, and here's why. The first reason is cross-platform compatibility. Progressive web apps are designed to work on any platform or device, whether it's a smartphone, tablet, or desktop. This compatibility means a single codebase can cater to multiple operating systems, saving developers significant time and effort. The next one is native-like features. Progressive web apps deliver a user experience akin to native apps: they can send push notifications, operate offline, and even be installed on a device's home screen, all without the need for an app store. This creates a seamless and engaging experience for users. The next one is offline functionality.
Thanks to service workers, progressive web apps can load content and maintain basic functionality even when there is no internet connection. This makes them incredibly reliable, especially in regions with unstable internet connectivity. The next one is lower development and maintenance cost. Traditional native apps often require separate development for platforms like Android and iOS. Progressive web apps eliminate this need by functioning across devices with a single development process, leading to significant cost savings. The next one is enhanced security with HTTPS. Progressive web apps operate over HTTPS, ensuring secure connections and protecting users from man-in-the-middle attacks. This builds trust with users and aligns with modern web security standards. And the next reason is improved performance. Progressive web apps leverage efficient caching and load optimization techniques, resulting in faster page loads and smoother performance. This improves user retention and minimizes the frustration caused by delays. With all these benefits, it's no surprise that progressive web apps are revolutionizing web development. They combine the accessibility of web applications with the immersive experience of native apps, making them a powerful solution for the modern digital world. Now let's move on to another emerging technology: voice search optimization. Voice search optimization is the process of improving your website or digital content to ensure it ranks well for voice-based search queries. These queries are performed through devices like smartphones, smart speakers, and voice assistants, including Google Assistant, Amazon Alexa, and Siri. As voice searches become more popular, optimizing your content for these platforms is critical for staying ahead in the digital space. Now let's talk about how voice search optimization is shaping the future of web development. The first point is integration with IoT devices. With smartphones and connected devices becoming more common, optimizing voice interactions across these devices is essential. Next we have enhanced page load speed. Fast-loading pages improve the user experience and are especially important for voice searches, as users expect quick answers. The next one is conversational user interfaces. Voice searches rely on natural, conversational language, so designing user interfaces that cater to this is key. Then there is localized SEO optimization. Voice searches often have a local focus, like "cafes near me," so tailoring content to local search results is important. The next one is mobile-first design priority. Since most voice searches happen on mobile devices, ensuring your site is mobile-friendly is critical. The next point is a focus on long-tail keywords. These are more specific and match the way people naturally ask questions in voice search. Then we have structured data techniques. Using structured data helps search engines better understand your content and provide accurate answers. And finally, we have advanced integration with natural language processing, or NLP. This ensures websites can interact with and respond to the way people naturally speak during voice searches. These eight factors are transforming how we build and optimize websites for the future. Voice search optimization is no longer an option; it's a necessity.
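To make the structured data point concrete, here is a small, hedged Python sketch that emits a schema.org FAQPage snippet, the kind of markup that helps search engines and voice assistants pull exact answers from a page. The question and answer text are illustrative placeholders.

```python
# Hedged sketch: emitting schema.org FAQPage structured data for voice search.
# Embed the output inside <script type="application/ld+json"> on the page.
import json

faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Which cafes near me are open now?",  # illustrative text
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Our cafe is open 8am to 9pm daily.",  # illustrative text
        },
    }],
}

print(json.dumps(faq_jsonld, indent=2))
```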
So now that we have explored how AI is revolutionizing the way we build websites and apps, let's shift our focus to another powerful approach: low-code and no-code platforms. These platforms are game-changers in the world of software development. They allow users to create applications without writing extensive code, often using visual interfaces and drag-and-drop components. This democratizes app development, enabling businesses to build solutions faster and more effectively. Some popular low-code/no-code platforms include Zoho Creator, a versatile platform for building custom business apps; Mendix, a comprehensive low-code platform for enterprise applications; and Appian, a platform for building process-driven applications. Now that we have covered the basics of low-code and no-code platforms, let's dive into why they are the future of web development. The first reason is accelerated development. These platforms significantly reduce development time by automating many complex coding tasks. AI-powered tools can generate code snippets, design layouts, and test applications, allowing developers to focus on core functionality. The next reason is increased accessibility. Low-code/no-code platforms empower people with limited coding experience to create applications. This democratizes app development and fosters innovation. The next one is enhanced cost efficiency: by accelerating development and reducing the need for specialized developers, low-code/no-code platforms can significantly lower development costs. The next one is integration capabilities. These platforms offer integration with a variety of other tools and services, including APIs, databases, and cloud platforms, and AI-powered integration tools can streamline the process of connecting different systems. The next one is scalability and adaptability. These platforms often come with built-in scalability features, making it easier to scale an application as the user base grows; AI-powered tools can help identify performance bottlenecks and suggest optimizations. The next reason is a focus on innovation: by automating routine tasks, low-code/no-code platforms free up developers to focus on innovation and creativity, which leads to more innovative and impactful applications. So, in conclusion, low-code and no-code platforms are reshaping the future of web development by leveraging the power of AI, making it easier for anyone to build powerful applications regardless of their technical expertise. So now let's put our theory into practice. We will build a simple game using HTML, CSS, and JavaScript with a little help from artificial intelligence. What we are going to do is use AI to build an app. Even though we are going to use OpenAI to create the app, you still need basic knowledge of HTML, CSS, and JavaScript to make further changes to the application. Right here, we can see the interface of ChatGPT from OpenAI. We are going to take ChatGPT's help to build a Flappy Bird application, so we just write a prompt here. That's it: you just need to give ChatGPT a prompt, and it will start generating the code. Now we just wait while ChatGPT generates the code.
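While we wait, it's worth noting that the same generation can be scripted instead of done through the chat UI. Here is a minimal, hedged sketch using the official openai Python package; the model name is an assumption, so substitute whichever model your account can access.

```python
# Hedged sketch: driving code generation from a script rather than the chat UI.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use any model you have access to
    messages=[{
        "role": "user",
        "content": "Write a single-file Flappy Bird clone with inline CSS "
                   "and inline JavaScript in one HTML file.",
    }],
)

# Save the generated markup so it can be opened with VS Code's Live Server
with open("flappy.html", "w") as f:
    f.write(response.choices[0].message.content)
```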
Right here we can see the HTML code has been generated, the CSS code has been generated, and then it generates the JavaScript code. So, to use AI in the web development process, all you need to do is write a prompt to OpenAI or whichever AI you are using. But you still need basic knowledge of the underlying technologies so that it is easy for you to make alterations to the web application. Right here I can see that ChatGPT has provided me the HTML code, the CSS code, and the JavaScript code. Next, I'm going to ask ChatGPT to give me the HTML, CSS, and JavaScript in the same file; that means I'm asking for inline scripting and inline CSS. That's it: now it is giving me the inline CSS and inline scripting. Right here we can see it has generated the code. All we need to do is click Copy code, then move over to VS Code and paste the code there. That's it. If you want to make any changes to this file, you can. Here you can see the head section, the inline scripting, and the inline CSS. After saving the code, we just need to click on Go Live. And that's it, here is the output. Now we press the space bar, and right here I can see that my game has started. All of this functionality was created by ChatGPT itself. So this is how you can use AI to develop anything; this is just a game, but you can build almost anything using ChatGPT. Along with that, you also need knowledge of the technologies you are using, which will help you make changes to your application. So that's it for this part. We have seen how AI is transforming web development by enhancing user experiences, streamlining processes, and opening doors to innovative solutions across industries. From smarter interfaces to personalized content delivery, the integration of AI is shaping the future of how we build and interact with websites. Imagine you're scrolling through your favorite social media platform, catching up on the latest updates from friends and family, when suddenly you come across an ad for the product you have been thinking about buying for weeks. How did they know? That's the magic of AI in marketing. Take a clothing brand, for example. They use AI algorithms to analyze your activity on the platform: what posts you like, what pages you follow, and even what comments you leave. With this information, they can target ads specifically at the people who are most likely to be interested in their products. But AI doesn't stop there. It also helps businesses optimize their ad campaigns in real time. If an ad isn't performing well, AI can automatically adjust its targeting, messaging, or creative elements to improve its effectiveness. This means you're more likely to see ads that are relevant and interesting to you. So the next time you see a sponsored post that feels like it was made just for you, remember: it's not luck, it's AI in action, making social media marketing

### [7:43:07](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=27787s) AI in Healthcare

smarter, more personalized, and more effective than ever before. Now let's take a look at how Netflix uses marketing strategies. Do you know how Netflix suggests shows or movies you might like when you open it? That's all because of AI. It looks at what you have watched before, what you have liked, and what other people with similar taste enjoyed, and then suggests things it thinks you will enjoy. But it's not just about suggesting shows. Netflix also uses AI to decide how to promote its new releases. It looks at things like who is watching what, when they're watching it, and what's getting talked about on social media to figure out how to get the word out about new shows and movies. Plus, Netflix also uses AI to send out emails and notifications. That kind of marketing is all about using smart technology to understand what people like and then giving them exactly that. AI in marketing refers to the use of artificial intelligence techniques and technologies to improve various aspects of marketing strategies and operations. This includes using algorithms to analyze data, automate repetitive tasks, and more. By looking at data on things like website visits, social media clicks, and online purchases, AI can give companies insights into what their customers like and how they behave. And it's not just big companies that can benefit from AI in marketing. Even small businesses can use AI to reach new customers and keep them coming back. Whether it's sending personalized emails or targeting ads to specific groups of people, AI levels the playing field and helps businesses of all sizes succeed in the digital world. So when you see ads that seem to know exactly what you're interested in, or get emails that feel like they were made just for you, chances are AI is behind it all, working its magic to make marketing more personal, more effective, and more engaging. Now that we have an understanding of AI in marketing, let's explore some of the ways it can be utilized. Companies like Amazon and Netflix use AI algorithms to analyze user behavior and preferences, and then recommend products or content personalized to each individual's interests. This increases engagement and drives sales by showing customers exactly what they are interested in. Next, many businesses use AI-powered chatbots on their websites or social media channels to provide instant customer support. These chatbots can answer common questions, assist with purchases, and even book appointments, improving customer satisfaction and freeing up human agents for more complex tasks. AI also enables marketers to predict future trends and customer behavior by analyzing large datasets. Retailers can use predictive analytics to forecast demand for certain products, allowing them to optimize inventory levels and plan marketing campaigns accordingly. Then, AI tools can analyze user behavior and email engagement metrics to optimize email marketing campaigns: by segmenting audiences, personalizing content, and determining the best send times, AI helps marketers improve open rates, click-through rates, and conversions. AI-powered visual search technology allows users to search for products using images rather than text; companies like Pinterest and Google use AI algorithms to analyze images and return visually similar results, helping users discover products they are interested in. And AI can also generate content at scale, reducing the time and resources required for content creation. For example, AI-powered tools can generate blog post outlines, product descriptions, or social media captions based on user input.
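To give a feel for the "people with similar taste" recommendations described above, here is a minimal item-similarity sketch in Python using NumPy. The tiny hand-made ratings matrix is purely illustrative; production systems use far richer behavioral data.

```python
# Minimal item-based recommendation sketch: score unrated items by their
# cosine similarity to the items a user has already rated highly.
import numpy as np

# Rows = users, columns = items (0 = not rated); toy data for illustration
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between item columns
norms = np.linalg.norm(ratings, axis=0)
item_sim = (ratings.T @ ratings) / (np.outer(norms, norms) + 1e-9)

def recommend(user, k=1):
    """Rank unrated items by similarity to the items this user liked."""
    scores = item_sim @ ratings[user]
    scores[ratings[user] > 0] = -np.inf  # don't re-recommend rated items
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # items most similar to what user 0 already enjoys
```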
Now that we have covered how AI can be utilized in marketing, let's consider the advantages it brings to marketing strategies. AI for marketing makes things easier and better for businesses. It helps them understand customers better and create ads and messages that fit each person perfectly, which means more people are likely to buy. AI also saves time by doing tedious tasks like combing through data and figuring out what works best, and it helps target the right people with ads so businesses don't waste money on the wrong audience. Plus, it gives quick feedback on how ads are doing, so marketers can change things up fast if needed. In short, AI makes marketing smarter, faster, and more effective, giving businesses an edge over their competition. Next up, let's have a look at the best AI tools for marketing. ChatGPT, developed by OpenAI, is a generative pre-trained transformer capable of understanding and generating human-like text, and it is one of the best tools for various applications in natural language processing. ChatGPT is everywhere these days, right? People from different industries and backgrounds use it for all sorts of things. For example, businesses use it to help with customer support, find new leads, and do market research. Writers, bloggers, and content creators use it to come up with interesting ideas and make their content more engaging. And developers and technologists even use ChatGPT to build AI-powered applications, chatbots, and tools. So ChatGPT is used across many domains for different purposes, and it is one of the most commonly used tools. One of the best things about ChatGPT is that it gives free access to AI content development: simply open the interface and input a prompt or a question, and it can be about anything. The next AI tool for marketing is Grammarly. For marketers, Grammarly is a great tool for improving product descriptions. We know how important it is to write clear, SEO-friendly descriptions that stand out from the competition, right? Grammarly makes sure your writing is error-free and refined and leaves a positive impression every time. It simplifies complex, hard-to-read sentences, enables your team to maintain a consistent writing style, and employs generative AI to aid you in writing, rewriting, brainstorming, and responding with easy prompts or just a click. With a Grammarly plan, you receive a monthly allowance of prompts to start writing with generative AI assistance and unlock your potential. Grammarly provides tools such as AI writing tools, a grammar checker, a plagiarism checker, and a paraphrasing tool. The next tool is Surfer SEO. Surfer provides you with an SEO strategy, which means it can help you plan how to get more people to visit your website from search engines, make sure more people can see your site, and improve where your site shows up in search results. It provides tools such as Surfer AI, to generate optimized articles that rank with just a click; keyword research, to create an SEO-ready content strategy in minutes; and the content editor, to rank smarter with guidelines and handy content suggestions. This tool is useful for agencies, teams, freelancers, and even affiliates. The next AI tool for marketing is DALL·E 2.
DALL·E 2 is an AI system from OpenAI that uses deep learning methodologies to produce realistic images and art just from descriptions in natural language, and it can be used for various marketing purposes. DALL·E 2 learns from examples, so you can describe what you want in natural language and get different images and art. You can also make changes to existing pictures using the inpainting tool and replace part of an image with AI-generated imagery. DALL·E 2 is used in creative content generation; it is often used by artists, designers, marketers, educators, and developers who need assistance with generating visual content, creating art, designing graphics, and exploring creative possibilities using AI technology. Next up we have Jasper.ai. Jasper.ai helps in marketing by creating personalized content. It analyzes customer data to understand preferences and suggests subject lines, body text, and calls to action based on past performance. Jasper.ai offers essential features such as campaign management, brand voice development, chat functionality, art generation, security measures, and multilingual support. The use cases for Jasper.ai include blog writing, for better blogs with AI writing, editing, and optimization; copywriting, for convincing copy crafted by AI; and then

### [7:51:49](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=28309s) AI in Retail

SEO, for more traffic and conversions with AI content; content strategy, improving your strategy with AI content tools; and then social media marketing and email marketing, to boost engagement with AI-written emails. The next AI tool on our list is Midjourney. Midjourney is a cool AI tool that blends art and tech. It allows you to describe what you want in words, and it turns those descriptions into stunning pictures. Using advanced algorithms, it turns words into unique artworks. Midjourney uses machine learning algorithms to interpret and analyze data, which can be in the form of text, audio, or images; the algorithm then uses this data to generate new images, sounds, or other media types based on the patterns found in the data. The next tool for marketing is Bard. Bard is a generative AI chatbot developed by Google, now called Gemini. It is designed for creative writing tasks and is built on LaMDA, a transformer-based model. Bard can help you with tasks such as coding and solving math problems, as well as writing, planning, and learning. You can also generate images through Gemini, recognize speech in over 100 languages, and get help with audio translation. Without the need for any extra tools, Gemini can understand tricky visuals like charts and diagrams all by itself; it is also a great tool for describing images and answering questions about them. To get access to Gemini, you need to sign in to your Google account and start chatting with it. You can provide prompts with various inputs such as text, images, videos, or even code, and you also have the option to integrate with Google apps such as Gmail, Docs, Sheets, and more. Hey there, my friends, are you ready to embark on a journey into the future of retail? If that's a yes, get ready, because we are about to explore the fascinating world of artificial intelligence and how it's transforming the way we shop. In this video, we'll dive deep into the world of AI in retail, where personalized recommendations, intelligent chatbots, and virtual try-ons help you choose the best. Get ready to be amazed as we unveil the cutting-edge technologies that are revolutionizing the shopping experience as you know it. Imagine you have a personal shopper who knows your style, preferences, and budget like the back of their hand. Well, that's exactly what AI is bringing to the table. With the help of machine learning algorithms, retailers can offer personalized recommendations tailored to your unique taste. Say goodbye to aimlessly browsing endless racks of clothes or sifting through countless online product pages. An AI-powered personal shopper can analyze your past purchases, browsing history, and even your social media activity to curate a selection of items that are sure to catch your eye. But it doesn't stop there. These AI assistants can now offer styling advice, suggesting outfits and accessories that complement your existing wardrobe. It's like having the fashion guru you always wanted. First things first, let's define what we mean by AI in retail. AI in retail refers to the use of data, automation, and algorithms to deliver customized shopping experiences to customers. It's like having a personal shopper who knows your preferences inside and out. AI in retail caters to both online and physical stores, leveraging data from various sources like mobile devices and even in-store sensors. This data is the secret sauce that helps retailers provide a tailored experience.
Now let's dive into the key areas where AI is changing the retail game: personalization, inventory management, price optimization, chatbots and virtual assistants, and even visual search. Imagine walking into a store where an AI system knows your preferences and offers personalized recommendations just for you. It's like having your own personal stylist, but without the hefty price tag that comes along. Say goodbye to empty shelves and overstocked items: AI can help maintain the right inventory levels and reduce waste by analyzing sales patterns and customer behavior. AI algorithms can set optimal prices for products in real time, taking into account market trends, competitors, and other factors. No more guessing games, just smart pricing strategies. Need help finding the perfect outfit, or have a question about a product? No problem. AI-powered chatbots and virtual assistants are available 24/7 to provide quick and efficient assistance. Forget about those long product descriptions you don't even want to deal with. With AI-based visual search technology, you can simply upload an image, and the system will find similar products based on color, shape, and pattern. It's like having a personal detective who can find anything you want. But wait, there's a lot more to come. Let's take a look at some real-world examples of AI at multiple companies. Amazon uses technology, sensors, and AI to create a cashierless shopping experience: just grab your items and go, with no more waiting in lines. McDonald's acquired Dynamic Yield, an AI-based personalization company, to automate drive-thru menus and provide real-time menu recommendations based on consumer trends and weather conditions. H&M employs over 200 data scientists to predict and analyze fashion trends using AI algorithms; these insights help them make decisions on purchasing, inventory management, and product placement across their stores. AI isn't just transforming the customer experience; it's also revolutionizing the supply chain by improving retail value chain planning, reducing costs for third-party partners, and ensuring faster deliveries. By leveraging AI, retailers can optimize inventory management, reduce unsold and out-of-stock scenarios, and even adapt to unforeseen events with agility. It's like having a crystal ball that predicts and solves supply chain challenges before they even happen. Now, let's take a look at some mind-blowing statistics. Global spending

### [7:57:57](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=28677s) AI in Automotive

on AI in the retail supply chain is expected to reach a whopping $10 billion by 2025, growing annually by 45%. According to a study by Capgemini, the application of AI in retail could save up to $340 billion per year, with 80% of those savings coming from improved back-end operations and supply chain management. In the next few years, it is estimated that a staggering 325,000 retailers will be using AI and machine learning in some form. As you can see, the future of retail is undoubtedly AI-powered. The retailers who embrace this technology will gain a competitive edge by providing exceptional customer experiences, optimizing operations, and driving growth. There's one more crucial area we need to address: upskilling for the AI revolution. As AI continues to transform the retail landscape, it's essential for professionals to stay ahead of the curve, and there are multiple ways you can upskill. First, online course platforms offer various courses on machine learning and data analysis, often with career-specific tracks for retail. Consider an intensive boot camp focused on AI in retail to gain practical skills quickly. Earning certifications in AI or retail analytics can demonstrate your expertise to potential employers. And engaging with online communities and forums focused on AI in retail lets you learn from others and share your knowledge. Friends, we have reached the end of this adventure, but the journey is just beginning. The future of shopping is here, and it's powered by artificial intelligence. By embracing AI and upskilling their workforce, retailers can create exceptional customer experiences, optimize operations, and drive growth. As consumers, we can look forward to more personalized, efficient, and delightful shopping, both online and in stores. Let's swipe right on the future of shopping and embark on this exciting journey together. Traditionally, banks mainly looked at your credit score when deciding whether you're eligible for a loan and what interest rate you will get. But here's where AI steps in to shake things up. With AI, banks can now consider a whole bunch of other factors beyond just your credit score: for example, they can look at your job history, how regularly you pay your bills, or even your social media activity. This means they can get a much clearer picture of your financial situation and whether you're likely to pay the loan back. And here's where it gets interesting: because AI can process all this data, banks can make decisions much more quickly than before. Instead of waiting days or even weeks to find out if you're approved for a loan, you might get an answer in just a few minutes. But is it only about speed? No. By using AI, banks can also be more flexible and even-handed about whom they lend money to. For example, someone who doesn't have a perfect credit score but has a steady job and pays their bills on time might now have a better chance of getting a loan. With AI, borrowing money has become faster and more accessible for everyone, making finance simpler and more inclusive. So what exactly is AI in finance? AI in finance refers to the application of artificial intelligence technologies, such as machine learning, natural language processing, and data analytics, to various aspects of the financial industry. This includes tasks like risk assessment, fraud detection, trading, customer service, and investment management. By doing these jobs faster and smarter, AI helps banks and other financial companies make better decisions and work more efficiently.
In short, AI in finance aims to improve decisions, automate tasks, and give customers personalized help, all of which lead to a smoother financial system. Now let's get a deeper understanding of how AI is used in finance. AI is really shaking things up in finance: it's making banking tasks smoother and finding valuable insights in all the data, which is changing how and where investment happens. For example, in algorithmic trading, investment firms use machine learning to analyze historical market data and inform trading decisions in their algorithmic trading strategies. Imagine AI-powered systems analyzing huge amounts of stock market information really quickly: they look for patterns and trends, like when to buy or sell stocks, and this helps investment firms make fast decisions and more money. Then there's fraud detection. Machine learning models learn from past fraud cases to detect similar patterns and predict potential fraud in real time. Here, AI acts like a smart security guard for banks: it watches all the transactions and looks for anything strange, like someone suddenly spending a lot of money in a different country, and if it sees something fishy, it alerts the bank to check it out and stop the fraud. Then customer service: banks use chatbots for customer service, utilizing natural language processing technology. Think of AI chatbots as helpful assistants on a website or app. They are virtual helpers that can answer questions, solve problems, and even help with things like paying bills, using special technology to understand and talk to customers just like a real person would. Then credit scoring: neural networks using deep learning techniques analyze complex relationships between different factors to create more accurate credit scores. When you apply for a loan, AI looks at your financial history to see how responsible you have been with money in the past. It checks things like whether you have paid your bills on time, how much debt you have, and whether you have had trouble with loans before. Based on this information, AI gives you a score that helps lenders decide whether they should give you a loan and what interest rate to offer. If you have a good credit score, you are more likely to get approved for a loan with better terms, like a lower interest rate. Next, application processing: machine learning models study old application data to make decisions automatically and make the process of applying for things like loans faster and smoother. When you apply for a loan, AI helps speed up the process by handling lots of paperwork. It reads through documents like bank statements, payslips, and identification cards using natural language processing, which allows it to quickly extract important information, like your income or address, without needing a person to go through each document manually. By automating tasks like data entry and verification, AI makes the loan application process faster and more efficient for both applicants and lenders. And then cost reduction: with predictive analytics, AI algorithms analyze historical data to forecast future trends and optimize resource allocation, helping financial institutions minimize costs and improve efficiency. AI also helps save money by using automation to do tasks that would normally require people, including handling data, helping customers, and managing risk. By doing these tasks more efficiently, banks save money in the long term.
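As a small illustration of the fraud detection idea, here is a hedged Python sketch using scikit-learn's IsolationForest to flag transactions that deviate from a customer's usual pattern. The data is synthetic, and the two features are stand-ins for the many signals real systems use.

```python
# Hedged fraud-detection sketch: an IsolationForest learns a customer's
# normal spending pattern, then flags transactions that look anomalous
# (e.g., a sudden large spend far from home).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features: [amount in dollars, distance from home in km] -- synthetic history
normal = np.column_stack([rng.normal(60, 20, 500), rng.normal(5, 3, 500)])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_txns = np.array([
    [55.0, 4.0],       # a typical purchase
    [4800.0, 7200.0],  # large spend far from home -> suspicious
])
print(model.predict(new_txns))  # 1 = looks normal, -1 = flagged for review
```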
And I hope it's clear now how AI is used in finance. So now let's explore how some financial IT companies are leveraging AI in finance to innovate and improve their services. Three AI-powered platforms used by fintech companies in India are: Digit AI, an app that helps people who are new to credit or have lower credit scores by using smart technology to assess whether they can get loans; Synaptic, which makes financial work easier and faster by using smart programs to give helpful advice and predict what might happen in the future; and Zaggle, which helps businesses keep track of their money and prevent cheating by using clever programs to spot problems and give good advice. Some of the larger IT companies are leveraging AI in finance as well. One among them is IBM. IBM uses AI to help financial organizations manage risk better by analyzing complicated patterns in transaction data. This includes keeping things secure, preventing fraud, following anti-money-laundering rules, knowing who their customers are, and making sure they are following all the regulations. Next, Oracle. Oracle, using AI technology, enables ERP systems to scan paper invoices automatically; the system then enters important details like the supplier name, items purchased, and cost. This helps detect fraud, manage accounts, and speed up the approval process. And then we have Microsoft. Microsoft has developed a new AI assistant called Copilot, designed specifically for finance workers using Excel and Outlook. This Copilot for Finance can perform common tasks tailored to their roles right within these tools. Even leading financial institutions such as Paytm, HDFC Bank, and Bajaj Finance are leveraging AI, so let's have a look. Paytm, a top digital payment platform in India, uses AI to stop fraud and keep transactions safe; it also uses AI to improve customer service and give personalized advice. Next we have HDFC Bank. HDFC Bank uses AI for lots of things, like helping customers with chatbots, predicting risk, and making smart investment decisions. They also use AI to make getting loans easier and

### [8:07:52](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=29272s) AI for Marketing

faster. And then we have Bajaj Finance. Bajaj Finance uses AI algorithms to quickly decide whether someone can get a loan and how risky it might be. It also uses AI chatbots to help customers and gives special offers to keep them happy. Many other companies are using AI in finance to come up with new ways of doing things, make customers happier, and make their businesses run smoother. Some of the tools used in AI for finance are ChatGPT, ClickUp, and Booke.AI. Now let's start with understanding who HR professionals are. They are the backbone of any organization, responsible for attracting, retaining, and developing its most valuable asset. Exactly, you guessed it correctly: human capital. HR plays an important role in ensuring an organization's success. The roles of HR professionals are vast and varied. They're responsible for recruiting and onboarding. They oversee employee relations. HR also identifies and helps nurture high-potential employees for future leadership roles. HR professionals play a crucial role in performance management. They ensure a positive and inclusive work environment, make sure the organization complies with regulations, and see that we get the benefits we deserve. There are four key impact areas: employee experience, recruitment, learning and development, and people analytics. We'll start with employee experience. AI can analyze data and behavior to personalize experiences, virtual assistants can provide quick support and answers to common queries, and robotic process automation can streamline routine tasks like documentation and approvals. In recruitment, AI can help source candidates by analyzing résumés and social media profiles, AI-powered chatbots can engage candidates and answer their queries all day long, and assessments leveraging AI can evaluate soft skills and job fit. In learning and development, AI can create learning paths based on skills, role, and career goals; virtual reality and augmented reality can provide immersive training experiences; and AI tutors can adapt teaching methods to individual learning styles. In people analytics, AI can analyze data from multiple sources to identify trends and patterns, predictive analytics can forecast attrition risk and recommend retention strategies, and AI can help optimize workforce planning and model different scenarios. Now, let's explore how AI can be practically implemented in various HR processes. First, take recruitment. Say goodbye to stacks of résumés: AI can screen and rank candidates based on requirements, saving countless hours. In onboarding, AI can personalize onboarding programs based on a new hire's role, background, and learning preferences. In performance management, AI can analyze performance data and provide personalized feedback and development recommendations. AI-powered chatbots can answer common HR queries, freeing up HR professionals' time and improving employee management. Now let's see how AI is helping HR. No more stacks of résumés: AI can search through all the applications and find suitable candidates for the job profile. Data-driven decisions: one of the benefits of AI in HR is its ability to provide data-driven insight. By analyzing vast amounts of data from various sources, AI can help HR professionals make informed decisions about workforce planning, talent management, and employee management strategies. Ultimately, AI in HR is about creating the ultimate employee experience. From personalized recruitment and onboarding to tailored learning and development opportunities, AI can revolutionize the way we attract, retain, and engage top talent. Strategizing HR with AI: we can generate strategic ideas with the help of AI.
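As a hedged illustration of scripting this kind of request, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and role details are assumptions to adapt to your own account and the role you are hiring for.

```python
# Minimal sketch: asking a chat model to draft an ideal applicant profile.
# Requires OPENAI_API_KEY in the environment; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

prompt = (
    "You are an HR strategist. Based on this role description, draft an ideal "
    "applicant profile covering qualifications, experience, and soft skills:\n"
    "Role: Technical Consultant supporting enterprise cloud migrations."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whichever your plan offers
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

With a prompt like this, the kind of conversation described next can be reproduced programmatically and fed into proposals or planning documents.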
Lay the foundation for the proposal and make data-driven decisions. Let's take a look at this conversation with ChatGPT, an AI that can assist in generating ideas and making data-driven decisions. As you can see in the image, the recruiter is looking to create an ideal applicant profile for a technical consultant role, and ChatGPT is ready to help analyze the provided details about the position, business, and team to critically assess and generate the profile. It's just like the example of using ChatGPT to understand the reasons for employee turnover: the recruiter can leverage the tool's language-understanding capabilities to gain insight from the provided information. ChatGPT can analyze the job requirements, business context, and team dynamics to suggest the ideal qualifications, experience, and attributes needed for a technical consultant role. This demonstrates how AI tools like ChatGPT can be leveraged for strategic HR tasks like building job profiles, screening candidates, or even identifying skill gaps within a team. By providing relevant data prompts, HR professionals can harness the power of AI to generate valuable insights and make informed, data-driven decisions throughout the employee life cycle. The future of HR is with AI. Now you might be thinking this all sounds great, but are HR professionals ready for the AI revolution? Well, according to a recent survey, only 46% of HR professionals felt comfortable with the widespread use of AI in the next five years, and many lack the necessary knowledge about AI. But fear not, we are here to help you navigate this exciting new frontier. To prepare for the future of work, HR professionals need to upskill themselves in six key areas: understanding how to use data to drive decisions; creating agile organizations that adapt to change; implementing the right technology; designing user-centric HR solutions; consulting and influencing, to gain buy-in from senior leaders; and stakeholder management, building strong relationships across the organization. By mastering these skills, you'll be equipped to embrace the AI revolution and position yourself as a strategic partner within your organization. Friends, we stand at the frontier of a transformative era in HR. AI is no longer a futuristic concept; it's a reality that is shaping the way we work, learn, and grow. By embracing AI and upskilling the workforce, HR professionals can become change agents, driving positive impact across the organization. Together, we can harness the power of technology to create a more efficient, personalized, and data-driven HR experience. The arrival of artificial intelligence in healthcare has been powerful because it has changed how diseases are identified, treated, and monitored. AI is significantly improving health research and outcomes, enabling more accurate diagnoses and treatments tailored to each patient in less time. AI can review and analyze huge numbers of clinical records, helping doctors recognize warning signs and patterns that would otherwise go unnoticed. The potential applications of AI in healthcare are broad and far-reaching.
From scanning radiological images for early detection to predicting outcomes from electronic health records, by leveraging artificial intelligence in hospital settings and clinics, healthcare systems can become faster, smarter, and more efficient in providing care to millions of people worldwide. Artificial intelligence in healthcare is truly turning out to be the future, transforming how patients receive quality care while reducing costs and improving health outcomes. Let's take a look at a few different types of artificial intelligence in the healthcare industry and the benefits that can be derived from their use. First, let's learn about machine learning. Machine learning is transforming healthcare: imagine AI analyzing scans for disease and predicting health risks. ML can do this by sifting through vast amounts of data, spotting patterns that even humans miss. This translates to better diagnoses, personalized treatment plans, and faster drug discovery. Beyond that, ML automates tasks, frees up doctors, and improves the patient experience with AI assistants. While ethical concerns exist, the potential for reduced workloads, fewer errors, and groundbreaking research makes ML revolutionary for healthcare. Next, let's learn about natural language processing. Valuable information is stuck in doctors' notes, and natural language processing unlocks its secrets. This technology translates medical jargon into machine-readable data, squeezing insights out of progress reports and patient records. Imagine doctors automatically seeing a patient's full health picture, fewer errors in medical coding, and chatbots answering your questions. NLP is revolutionizing healthcare, paving the way for personalized medicine, better research, and better communication between patients and doctors. The future of healthcare speaks a new language, and NLP is translating it. Next, let's learn about rule-based expert systems. Forget fancy AI for a moment: these systems are the workhorses of healthcare AI. Imagine a digital doctor applying rules like "if fever, cough, and shortness of breath, consider disease X." These systems analyze patient data and suggest diagnoses or treatments, helping standardize care and freeing up doctors' time. While not as flexible as newer AI, they are simpler to use and work well in resource-limited settings. The downside is that they struggle with complex cases and cannot learn on their own. Still, rule-based systems are the building blocks of future healthcare, paving the way for even more powerful tools to assist medical professionals. Have you ever wondered when AI became popular in healthcare? The surge in the popularity of healthcare AI marks a transformative era in the medical field. This phenomenon, gaining momentum over the past decade, has seen AI emerge as a cornerstone of innovation and efficiency in medical practice worldwide. Understanding when and how AI became integral requires exploring its applications, benefits, and groundbreaking examples of healthcare AI. AI in the medical field began to gain sustained attention in the early 21st century, with significant advancements in technology and data analysis. This period saw a convergence of increased computational power and the availability of large data sets.
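Before moving on, here is a minimal sketch of the rule-based expert-system idea described above: hand-written symptom rules mapped to suggested follow-ups. The rules and suggestions are illustrative placeholders, not medical guidance.

```python
# Minimal sketch of a rule-based expert system: each rule is a required symptom
# set plus a suggested action. All conditions and advice here are made up.
RULES = [
    ({"fever", "cough", "shortness_of_breath"}, "possible respiratory infection: order chest X-ray"),
    ({"fever", "rash"}, "possible viral exanthem: refer to dermatology"),
    ({"chest_pain", "shortness_of_breath"}, "possible cardiac event: escalate immediately"),
]

def suggest(symptoms: set[str]) -> list[str]:
    """Return the advice from every rule whose symptoms are all present."""
    return [advice for required, advice in RULES if required <= symptoms]

print(suggest({"fever", "cough", "shortness_of_breath", "headache"}))
# ['possible respiratory infection: order chest X-ray']
```

The appeal and the limit are both visible here: the logic is transparent and easy to audit, but the system knows nothing outside the rules someone wrote down.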
These advances enabled significant improvements in AI-powered medical algorithms. The real turning point, however, came with the realization of how AI could address some of the most pressing challenges in healthcare, ranging from diagnostic accuracy to personalized treatment and operational efficiency. Have you ever wondered how AI has impacted the healthcare industry? AI in healthcare offers the ability to process and analyze vast amounts of medical data beyond human capacity. This capability is instrumental in diagnosing diseases, predicting outcomes, and recommending treatments. For instance, AI algorithms can analyze medical images such as X-rays and MRIs with accuracy and speed exceeding human radiologists, often detecting diseases such as cancer at an earlier stage. Examples of artificial intelligence in healthcare are diverse and impactful. A significant development besides IBM Watson Health was Google's DeepMind Health project, which demonstrated the ability to diagnose eye disease from retinal scans with a level of accuracy comparable to human experts. These pioneering projects showcase AI's potential to revolutionize diagnosis and personalized medicine. The question of how AI is used in healthcare extends beyond diagnosis. AI applications are also reshaping patient care management, drug discovery, and healthcare administration. In patient care, AI-driven chatbots and virtual health assistants provide 24/7 support, monitoring, and help with adherence to treatment plans. In drug discovery, AI accelerates the development process by predicting how different drugs will react in the body, significantly reducing the time and cost of clinical trials. Another area where AI in healthcare has made a

### [8:20:28](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=30028s) AI in Finance

significant impact is predictive analytics. Healthcare AI systems can analyze patterns in a patient's medical history and current health to predict potential health risks. This predictive capability enables healthcare providers to offer proactive and preventative care, ultimately leading to better patient outcomes and reduced healthcare costs. AI also streamlines various processes within healthcare facilities, from scheduling appointments to processing insurance claims. AI automation reduces administrative burdens, allowing healthcare providers to focus more on patient care. This not only improves operational efficiency but also enhances the overall patient experience. The rise of AI in healthcare has been a gradual and steady journey, catalyzed by technological advancements and increasing demand for improved healthcare delivery. The integration of AI into the medical field has brought about a major paradigm shift, making healthcare more efficient, accurate, and personalized. As AI technology continues to evolve, its role in healthcare is set to become even more significant, further solidifying its status as an indispensable tool in modern medicine. This journey of AI from a novel concept to a fundamental aspect of healthcare exemplifies a technological revolution with the promise of better healthcare outcomes for all. Now let's see the challenges faced in the healthcare industry. We recognize that there are significant challenges related to the wider adoption and deployment of AI in healthcare systems. These challenges include, but are not limited to, data quality and access, technical infrastructure, organizational capacity, and ethical and responsible practices, in addition to aspects related to safety and regulation. Some of these issues have been covered here, but others go beyond the scope of this discussion. AI is transforming diagnosis and care delivery across multiple stages of the healthcare value chain. AI algorithms can precisely analyze medical images to aid healthcare professionals in identifying conditions, and AI-powered tools aid histopathology analysis to enhance diagnostic accuracy. Additionally, AI models leverage individual data and historical records to predict disease outbreaks and create personalized treatment plans based on genetic information, clinical history, and lifestyle factors. Telemedicine platforms powered by AI offer remote consultations, while wearable devices and sensors facilitate real-time patient health monitoring and early intervention. AI-driven solutions also optimize administrative functions and help boost patient engagement through personalized recommendations and educational content. AI additionally enhances surgical procedures through robots designed to assist surgeons by stabilizing movements and providing real-time feedback. Furthermore, AI analyzes extensive data sets to uncover valuable insights for drug discovery, healthcare resource allocation, and policy development. Now, AI and manufacturing are a perfect match for the future: manufacturing generates enormous amounts of product and process data that is costly to manage manually, but we can feed all that data into AI algorithms and teach them to identify problems and optimize processes. But still, everyone has a question: will AI replace manpower? Don't worry, we will discuss this question in this video as well.
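As a taste of what "feeding production data into an AI algorithm" can look like, here is a minimal, hedged sketch previewing the predictive maintenance segment coming up next: a classifier trained on past sensor readings to predict failures. The sensor feature names and data are synthetic assumptions, not real factory telemetry.

```python
# Minimal sketch: predicting machine failure from synthetic sensor features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Toy features per machine-hour: [vibration_mm_s, temperature_c, runtime_hours]
X = np.column_stack([
    rng.normal(3, 1, n),
    rng.normal(70, 8, n),
    rng.uniform(0, 10000, n),
])
# Synthetic label: failures assumed more likely with high vibration and heat
y = ((X[:, 0] > 4.5) & (X[:, 1] > 78)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"holdout accuracy: {clf.score(X_test, y_test):.2f}")

# Score a new reading; a high failure probability would open a maintenance ticket
print("failure probability:", clf.predict_proba([[5.2, 85.0, 9200.0]])[0][1])
```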
With that being said, hello everyone, and welcome back to another interesting video on AI in manufacturing by Edureka. AI in the manufacturing sector is transforming production processes, particularly in the automotive sector. Traditionally, products were made using linear production systems, but this method needs updating for modern needs. Let me give one of the best examples of AI in manufacturing: robots. Today, you need highly paid engineers spending weeks programming and monitoring robots so they do what they are assigned to do. But in the future we will see robots doing things by themselves. They will learn how to work with products and materials; they will compare their work, improve it, and learn from it. Moreover, far fewer engineers will be needed, because the robots will learn through AI. Now we will move on to the key AI segments in manufacturing, and the first one is predictive maintenance. Predictive maintenance involves using AI and machine learning to identify and predict which equipment or machine is likely to fail. One of the key concepts of predictive maintenance is the digital twin, a virtual replica of a real physical asset, such as a machine. This allows the AI in the manufacturing industry to analyze the data and predict when something may go wrong. Next, we have quality control and inspection: AI-powered computer vision is widely used for real-time quality control and inspection of products on the manufacturing line. Next we have supply chain management, where AI plays a crucial role in optimizing the supply chain by analyzing data related to demand forecasting, inventory management, and logistics. For example, an automotive parts manufacturer can use machine learning models to predict how many spare parts will be required in the future. This allows them to keep an appropriate amount of inventory on hand, lowering costs while ensuring parts are available when needed. And lastly, cobots, also known as collaborative robots, are essential in AI-driven manufacturing for increasing productivity by working alongside humans. Cobots are used in fulfillment centers to help pick and pack, working with human workers and using AI-powered vision to identify objects and navigate complex areas. Next: why does AI in manufacturing matter so much? Let's see why. The first reason is that generative AI algorithms can explore a vast number of design possibilities from just design goals and constraints as input. This process, known as generative design, enables the creation of optimized products that use fewer materials while meeting performance criteria efficiently. For instance, AI can suggest lightweight yet strong materials, reducing waste and the cost of raw materials. AI can thus be used for a better, more optimized design process. Another example is advanced testing: when a vehicle or a product is designed using advanced AI simulation, you can perform crash testing in the simulation rather than on real prototypes, which saves both money and time. Next, we'll move on to an overview of AI's role in manufacturing. The first aspect is energy efficiency: AI algorithms monitor and control energy consumption within manufacturing facilities, making real-time adjustments to optimize energy usage. In this way, you can reduce energy wastage and increase overall productivity. Then we have customization.
Massive customization of products is possible by tailoring manufacturing processes to each customer's stated preferences, producing personalized products at scale. Then we have choosing the right program: clearly outline what you want to accomplish with the program, and do thorough research into which program and what type of software fit those goals. Then we have cost reduction: AI helps save money and increase profit by improving the efficiency of various parts of the manufacturing process. It rapidly improves product design and increases productivity through AI-powered machines. Now we come back to our question: will AI replace manpower? AI can take over some of the monotonous and dangerous jobs, but it will only

### [8:29:04](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=30544s) AI for HR

partially replace human workers. Instead, AI can work with people, helping them with their routine tasks and allowing them to focus on more complex and creative jobs. This partnership can create new job opportunities that require human skills like creativity and decision making, which AI cannot replicate. So while AI will change the way we work, it won't eliminate the need for human workers. One of the best examples of this is Tesla's Gigafactory, where AI-powered robots build the car bodies while human engineers program the robots, troubleshoot problems, and ensure that the final product meets quality standards. This collaboration between AI and human workers leads to more efficient production, high-quality vehicles, and a safer workplace. In today's world, the automotive industry is advancing rapidly with the integration of AI technologies. Apart from the fact that AI is making our lives easier, it helps make human transportation safer and more efficient in every aspect. So what do we mean by AI in automotive? It refers to intelligent technology in the design, operation, and maintenance of vehicles, covering things like self-driving and automated parking, in effect giving cars a brain. We have all seen how the automotive industry has undergone major changes over the past few years, and AI is playing a key role in shaping the future of the car: vehicle design, efficiency, self-driving technology, manufacturing processes, and more. Car companies are racing to build AI technology that improves traffic flow, reduces accidents, and cuts emissions; the automated guided vehicle market was reportedly valued at about $1.8 billion in 2020, with 5,97,037 such vehicles produced in the same year. As AI continues to grow and make its mark on the automotive industry, more AI implementations in the automotive sector are coming shortly. Now I hope you all have some idea of how AI is shaping the automotive sector, so let's move forward with the benefits. Here are some benefits of AI in automotive: autonomous driving, parking management, predictive maintenance, enhanced safety, road condition monitoring, personalized voice assistants, and security. Let's start with autonomous driving. AI helps cars drive by themselves by processing data from their sensors, which helps vehicles drive more safely and conveniently; the goal of autonomous driving is to ensure passenger safety. Next comes enhanced safety. AI detects dangers on the road and helps cars avoid accidents, the purpose being to prevent accidents caused by human error. Parking management: AI helps identify vacant parking spaces and calculates the space occupied by parked vehicles. This includes features like automated parking assistance, predictive parking availability, and even autonomous parking, where the vehicle can park itself without human assistance. Road condition monitoring: AI-driven algorithms can detect the intensity of road damage in an area and spot cracks, helping the relevant authorities take action on road maintenance. Predictive maintenance: AI can tell when a car needs repair before it gets damaged. This proactive approach allows for timely maintenance scheduling, reducing the risk of unexpected breakdowns and ultimately enhancing vehicle safety. Security: AI protects against car theft and unauthorized access through advanced features, keeping consumers safe in the automotive landscape. And finally, personalized voice assistants.
Some carmakers develop their own voice recognition systems that can make calls, inform the driver about the temperature, and much more. Now let's explore how the automotive industry is making use of AI technologies in driver monitoring systems, manufacturing, emission monitoring, connected vehicles, facial recognition, and sensor fusion systems. Driver monitoring systems: this system continuously monitors the driver's actions. It uses sensors and cameras to track the driver's movements and checks whether the driver is attentive or drowsy; if it detects something wrong, it alerts the driver with an alarm. AI in manufacturing: as we all know, manufacturing is one of the basic aspects of vehicle production. It requires a large workforce, which increases labor costs, and it is also a time-consuming process. AI in automotive manufacturing makes everything easier: it handles the entire process, from designing the cars to quality control and performance monitoring, and also ensures everything happens on time. Emission monitoring: global warming has become one of the major concerns nowadays, driven mainly by greenhouse gas emissions. So, in order to reduce carbon emissions, some companies are developing AI-based solutions to detect and measure various emissions such as carbon dioxide and nitrogen oxides. This reduces environmental impact and ensures cleaner transportation. Connected vehicles: AI has now made it possible for cars to communicate with each other. Connected vehicles will make driving safer and more accurate by sharing information about traffic, travel times, and road conditions. This

### [8:35:25](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=30925s) AI in Manufacturing

helps cars prevent accidents and find the fastest route. Facial recognition: to improve safety, facial recognition technology has been introduced in vehicles. This helps car owners in many ways, such as setting permissions and restrictions. For example, they can set speed limits for their children and set permissions for people outside the family, which increases security. AI-based facial recognition is very powerful because it recognizes the owner even if their appearance changes, such as when wearing sunglasses or growing a beard. Sensor fusion systems: a sensor fusion system collects data from sensors such as cameras, radar, LiDAR, and ultrasonic sensors to build a picture of the vehicle's surroundings. AI algorithms then process and combine this data to detect objects and predict their behavior. These systems power advanced driver assistance features, including adaptive cruise control and pedestrian detection. Have you ever wondered how the future of cars is deeply connected with AI? Let's discuss. So far, AI in the automotive sector has been popular mainly for self-driving vehicles, but many other advanced features are raising AI's profile in the automotive industry. Here are some predictions for AI in automotive. AI can predict the future of cars: it uses information about how cars are being used, how they perform, and how people interact with them to make educated guesses about what might happen in the future. This can help in designing better cars, predicting maintenance needs, and understanding what people will want. Eco-friendly roads: this can include things like optimizing traffic flow to reduce emissions, suggesting efficient routes, and designing vehicles with materials that help the environment, like those that absorb pollution or generate renewable energy. Revolutionizing insurance: premiums can be based on how safely you actually drive rather than on broad statistics. Plus, if there's an accident, AI can help speed up the whole claims process, making it easier for you to get back on the road. Better roads, smarter cities: AI can help manage traffic, find the best routes, and keep roads safe. For instance, smart traffic lights can adapt based on how many cars are on the road, reducing traffic jams. So, by using AI, cities can become smarter places to live in. 5G, a partner for AI: no doubt, 5G and AI make a powerful duo. 5G provides lightning-fast and reliable connectivity, allowing AI systems to gather and process data in real time. Together, they enable innovations that were previously unimaginable, like instant communication between vehicles for safer roads, real-time monitoring, and predictive maintenance. Now, let's see how big car companies are implementing AI across the automotive sector. BMW's AI-enabled engineering: Monolith is a well-known AI-based software in the engineering world, used by many aerospace and automotive companies. In 2022, BMW announced that its engineers are now using Monolith in vehicle development, meaning they use AI to accurately predict a car's aerodynamic performance without building a physical prototype. Tesla vehicles rely on advanced driver assistance systems and self-driving capabilities, using AI algorithms for decision making and driving control. Waymo, one of the leading autonomous driving tech companies, has successfully implemented a self-driving system that uses artificial intelligence for navigation and responding to the surrounding environment.
Kia, one of the oldest manufacturers in South Korea, embeds computer vision inside its vehicles to make the driving experience more comfortable by personalizing mirrors, seat positioning, and other features to the driver's needs. AI is working wonders in the car industry: it is turning cars into smart machines that can do more than just transport us from point A to point B. Significant investments have been made by tech leaders such as Google, Tesla, and Uber, as well as by major car companies. This is an opportunity not only for big companies but also for startups. Although security and privacy issues and a few other challenges remain, AI in the automotive sector is in the limelight and will keep growing in the coming years. Recently, BlackMamba, an AI-generated malware, gained significant attention in the domain of cybersecurity. This sophisticated threat leverages large language models to create a polymorphic keylogger that constantly mutates its code, which makes it incredibly difficult for traditional cybersecurity measures to detect and stop. But you know what? Every coin has two sides, and it is the same for AI. While cybercriminals are misusing AI to create potent threats like BlackMamba, the same technology can also be a powerful ally in the fight against cyber attacks. While AI-generated malware like BlackMamba presents a formidable challenge, AI can also be a game-changer in cybersecurity defenses. By harnessing the capabilities of AI, cybersecurity professionals can gain a significant advantage in detecting, responding to, and even predicting potential attacks. Now, after hearing the term artificial intelligence, the first thing that comes to mind is automation: long manual procedures and methods done with just a few clicks and prompts. This concept of automation through AI applies to the cybersecurity field as well. But how, and in what ways? Let's find out. Starting from the basics, we know that cybersecurity is the practice of protecting computer systems, networks, and data from unauthorized access, attacks, and cyber threats. If we take a look at some common cyber threats, we have phishing attacks, password attacks, malware, insider threats, DDoS attacks, and much more. If you're familiar with these terms, well and good; if not, don't worry, you can enroll in a cybersecurity training course to learn more about them. For now, let's discuss these in short and then check how AI brings its magic to solve these problems. So, coming to phishing attacks. Phishing involves sending fake emails or messages that look like they

### [8:41:53](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=31313s) AI in Cybersecurity

are from trusted sources to trick people into giving away personal information like passwords or credit card numbers. Now, what can AI do? AI can scan emails and messages for signs that they are fake, like strange sender addresses or suspicious links. For example, if an email claims to be from a bank but has odd formatting or grammar, AI can flag it as phishing and warn the user not to click any links or provide any personal information. Then come password attacks. These include attempts to steal passwords by guessing them or using stolen data. What AI can do is monitor login attempts and notice if something is odd, like someone repeatedly trying to log in with wrong passwords or suddenly logging in from a new location. AI can alert users or block access until it can confirm the user's identity. Then comes malware. Malware is software designed to harm or exploit any programmable device or network; the BlackMamba we saw at the start of the video is an example of AI-generated malware. AI systems can analyze files and software behavior to detect unusual activities that might indicate malware. For instance, if an application suddenly tries to encrypt files, AI can stop it and prevent the damage. Then come insider threats. These threats come from people within the organization who might misuse their access to data or systems. We saw the BlackMamba malware, but trust me, insider threats are the real snakes you encounter in an organization. AI can keep an eye on how users access and use data; if it spots unusual patterns, like someone accessing files they shouldn't or at odd times, it can alert security managers to investigate further. Then come DDoS attacks. DDoS stands for distributed denial of service: flooding a website or online service with so much traffic that it crashes. AI can monitor web traffic and recognize when there is an unusual spike that could mean a DDoS attack, and it can help redistribute or block this traffic to keep the site running smoothly. Now, these were just some of the cyber threats, but from these examples we can see that by using AI, organizations can automate the detection of these threats and respond much more quickly than if they had to rely on people noticing and reacting manually. Imagine manually handling insider threats, which involves regular audits and security reviews of user activities and access privileges. Tough job, right? This answers why AI is needed in cybersecurity: AI's ability to learn and adapt makes it particularly good at handling new or evolving threats, which is essential in today's fast-paced digital world. So by now I believe you understand the need for AI and how it helps in the field of cybersecurity. Now let's check how exactly AI works to protect our systems from cyber threats. AI follows four basic steps to protect or fix a problem in our system: investigate, identify, report, and research. Let's take a scenario and walk through these steps. An employee at a large corporation clicks on a link in a phishing email, which results in downloading ransomware onto their computer. The ransomware begins to encrypt files and prepares a ransom demand for decryption.
Now let's see how AI can help in this situation. The first step is to investigate. An AI system is constantly monitoring network behavior and file activity across the company's computers. It detects unusual activity, specifically files being encrypted at a rate and in a pattern that does not match typical user behavior. In this case, the AI notes that hundreds of files are being encrypted rapidly on one of the department's shared drives, which is abnormal compared to its usual file activity. After investigating the scenario, it moves to the next step, which is to identify. This step involves identifying the specific type of ransomware attack. Using its database and machine learning models, the AI quickly compares the encryption behavior with known patterns of various ransomware families. Within minutes, it identifies the activity as a specific type of ransomware attack: the patterns of file encryption and the ransom note format match its data. Then comes the report step. Immediately after identification, the AI system sends an alert to the cybersecurity response team. The alert includes details about the suspected ransomware, the affected computers and drives, and preliminary recommendations for containment. In this case, the security team receives an automated alert with all the relevant information, including the severity of the threat, suggesting immediate isolation of the affected systems to prevent spread across the network. Then comes research. Post-incident, the AI uses this event to refine its detection algorithms: it examines the entry point, updates its phishing detection capabilities to recognize similar threats in the future, and strengthens its defenses against this particular ransomware family. Basically, it logs the incident details and starts an analysis process to understand how the ransomware bypassed the existing defenses and whether any new behavior patterns are associated with this variant. So, with AI's quick action, the spread of the ransomware is halted almost immediately, minimizing damage. The security team swiftly removes the ransomware from the affected systems, restores encrypted files from backups, and implements improved safeguards advised by the AI's research findings. This example shows how AI in cybersecurity can operate seamlessly across all four steps, providing an integrated approach to threat management that is both efficient and continuously improving. When it comes to security, the stakes are always high, and the sensitivity of the data involved means there is very little room for error. Relying on AI in such scenarios presents its own set of challenges. For instance, AI can exhibit biases based on the data it has been trained on, which can lead to skewed or unfair decisions. AI systems can also generate false positives, flagging normal activities as threats, or false negatives, where actual threats go undetected. These issues underscore the risk of depending solely on AI for security measures. Despite these challenges, AI remains a valuable asset in the realm of cybersecurity, primarily due to its ability to analyze vast quantities of data swiftly and continuously. However, it should be integrated carefully. And here is the thing: while AI is a powerful tool for cybersecurity, AI works best when it's used in conjunction with human expertise.
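To make the investigate step concrete, here is a minimal sketch of the kind of baseline-versus-burst check such a system might run on file activity. The window size, threshold factor, and event counts are illustrative assumptions, not a real detection product.

```python
# Minimal sketch: flag minutes where file-change counts far exceed a rolling baseline.
from collections import deque

class FileActivityMonitor:
    """Alert when per-minute file changes dwarf the recent average."""

    def __init__(self, window: int = 60, factor: float = 10.0):
        self.history = deque(maxlen=window)  # recent per-minute change counts
        self.factor = factor                 # assumed burst threshold multiplier

    def observe(self, changes_this_minute: int) -> bool:
        baseline = (sum(self.history) / len(self.history)) if self.history else 0.0
        # Only alert once we have some history, to avoid cold-start noise
        alert = len(self.history) >= 10 and changes_this_minute > self.factor * max(baseline, 1.0)
        self.history.append(changes_this_minute)
        return alert

monitor = FileActivityMonitor()
traffic = [4, 6, 5, 7, 3, 6, 5, 4, 6, 5, 480]  # sudden encryption burst at the end
for minute, count in enumerate(traffic):
    if monitor.observe(count):
        print(f"minute {minute}: ALERT - {count} file changes vs normal baseline")
```

A detector like this only raises flags; deciding what to do with them still requires the human judgment discussed next.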
Relying solely on AI could lead to vulnerabilities, especially if AI is expected to handle all aspects of security without human oversight. Therefore, a balanced approach that combines the speed and efficiency of AI with the critical thinking and adaptability of human security experts is the most effective strategy. This way, AI handles the heavy lifting of data processing and routine tasks while humans step in for more complex decision making and strategy. Now, AI for testing. AI analyzes tons of data to unearth hidden glitches before they even cause problems, freeing you up for creative testing strategies. Imagine happy users raving about your bug-free software. AI is your partner, but remember, data is the key: keep it fair, and your expertise will still lead the way. Together, you and AI can conquer software development. AI testing, also known as intelligent testing, employs artificial intelligence to optimize and expedite the testing procedures of software. It aims to assess the effectiveness, performance, and dependability of software by automating functions like executing tests, validating data, and detecting errors. AI for testing is becoming mainstream: some 21% of IT leaders surveyed said they are putting AI trials or proofs of concept in place, according to the 2021 World Quality Report. Speaking to longer-term trends, only 2% of respondents said AI has no part in their future plans. If you have been waiting out the AI hype, it's time to dive in. Here's what you need to know about AI in testing. The most essential foundations of AI as applied to testing software are machine learning and neural networks. Machine learning allows computers to classify objects and make predictions about likelihoods based on data. Neural networks loosely emulate how the human brain makes associations. Now, let's learn some essential benefits of AI for testing. First, finding the right set of people: businesses can overcome the difficulty of finding a suitable team and skill set by leveraging AI-based test automation, which offers a semi- or completely scriptless environment. Next, let's learn about the amount of time spent on the

### [8:51:41](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=31901s) AI for Startup

repeated jobs. Every time a new test case or a new automation project arises, regardless of how reusable the components are, teams wind up writing a lot of comparable code again, which takes a long time. AI can take a project as input and automatically generate test scripts for comparable projects. The next essential benefit concerns flaky tests. Teams of testers spend hours assessing whether test failures were caused by application bugs or by poorly prepared test cases. These kinds of test failures are known as flaky tests, and they cause releases to be held up, resulting in software delivery delays. AI can assist teams in overcoming the difficulty of flaky tests. Lastly, let's learn how UI changes necessitate frequent script modifications. Businesses frequently adjust the app user interface to deliver a consistent user experience (UX/UI). Even if a change is modest or invisible, it can cause a lot of test scripts to fail while executing operations on the page. AI- and ML-based technologies can be trained to detect tiny changes in the code, reducing the need for human intervention in script updates for such modest modifications. What are the various methods for AI-based testing automation? First, let's learn about regression suite automation. Regression testing takes a lot of time and effort from testers. Regression suite automation with AI conducts testing intelligently based on the code changes each time; the main goal is to decrease test-cycle time by finding and running the proper collection of test cases. Second, defect analysis and prediction. This entails applying machine learning and natural language processing methods to aid the accurate detection of software flaws; the major goal is to ensure early fault detection and help firms get to market faster. The last method is self-healing automation: the automatic healing of test-script breakage that may arise from object or other property changes, enabled by AI. The primary goal here is to guarantee that less manual intervention is required and that recovery is accelerated. Now let me tell you about some important tools for automation testing. First, Sofy: imagine a tireless teammate doing mobile app testing for you; that's Sofy AI, your AI testing buddy, with no coding needed. Next is Functionize, an AI testing platform that uses machine learning to supercharge your software testing. And lastly, Parasoft: Parasoft isn't a single AI testing tool but a suite offering various solutions. The software development landscape is undergoing a seismic shift. AI isn't a futuristic concept anymore; it's a powerful partner in the testing arena. That's the magic of AI for testing: it amplifies your expertise rather than replacing it. By embracing AI as your testing partner, you can unlock a future of exceptional software that delivers a stellar user experience. Now let us understand the importance of AI in ethical hacking. Ethical hacking is all about finding vulnerabilities before the attackers do, but in today's digital landscape, the threat volume is staggering, and traditional manual methods can miss subtle and emerging threats.
So this is where AI comes into play. Rather than replacing human expertise, AI is here to amplify our capabilities. With machine learning and deep learning, AI can analyze massive amounts of data, quickly spot anomalies, and even predict potential attacks before they occur. Imagine having a digital assistant that never sleeps and continuously monitors your network for suspicious patterns: that's the power of AI in ethical hacking. With its significance clear, let's dive into the innovative AI-driven tools and techniques that empower our defenses. The first tool on the list is Darktrace. It uses unsupervised machine learning to detect real-time anomalies in network traffic and can flag zero-day exploits even before signatures are available. Next on the list is Pentera for automated pen testing. Pentera mimics real hacker behavior to scan for vulnerabilities automatically; it's like having a virtual penetration tester that works 24x7. Next is IBM QRadar Advisor with Watson. This tool leverages natural language processing to help parse threat intelligence reports and suggest precise remediation steps during incidents. Next we have CrowdStrike Falcon, an AI-powered platform that provides endpoint protection by detecting and preventing malware, ransomware, and advanced persistent threats. It offers real-time monitoring, behavior analysis, and incident response capabilities, making it ideal for enterprise environments. Next is CylancePROTECT, which utilizes AI and machine learning algorithms to prevent cyber attacks, including malware and ransomware, by analyzing files and behaviors before execution; it blocks malicious activity before it affects systems. Next are platforms like HackerOne AI and Cynet 360 AutoXDR. These tools enhance bug bounty programs and offer integrated security solutions for smaller teams. Each of these tools automates parts of the vulnerability assessment process, reducing the time it takes to detect and respond to threats and thereby significantly boosting the efficiency and accuracy of ethical hacking. Having examined these cutting-edge solutions, let's see how they translate into real-world success stories. Consider the example of a fintech startup that was struggling with traditional vulnerability scans, which took days to pinpoint issues. By integrating AI-driven penetration tools, the startup was able to uncover a critical SQL injection flaw in just a few hours, saving them both time and millions of dollars. This isn't an isolated case: industries worldwide are embracing AI to strengthen their security, from large-scale events like Microsoft's Zero Day Quest, which incentivizes AI-focused vulnerability research, to smaller enterprises deploying AI-enhanced defenses. The trend is clear: AI is becoming a game-changer in ethical hacking. After seeing tangible results in action, it's time to address the ethical considerations and challenges associated with leveraging AI. While AI tools dramatically enhance our ability to detect and respond to threats, they can also be exploited by malicious attackers if they fall into the wrong hands. This dual-use dilemma calls for strong ethical frameworks and regulatory oversight. As ethical hackers, we must ensure that these tools are used responsibly, with continuous human oversight and rigorous training. Looking ahead, the future of ethical hacking is both exciting and challenging.
Emerging trends include AI red teaming: autonomous systems that simulate advanced persistent threats to test your security posture. Next are zero-trust security models, ensuring no one inside or outside your network is trusted by default. Next, integration with cloud and IoT security: as more services move online, AI will be key to monitoring complex multi-cloud environments and IoT devices. Next, quantum-resistant encryption, preparing for a future where quantum computing might challenge current encryption standards. These innovations underscore the need for a proactive and forward-thinking approach to cybersecurity. As we continue to harness AI, staying ahead of the curve through continuous learning and adaptation will be critical. Now, did you know that Mercedes-Benz teamed up with Microsoft Azure to make your driving experience super smart? Yep, it's true. So, back in the day, driving

### [9:00:15](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=32415s) AI for Testing

meant pushing buttons and turning knobs. But now, your Mercedes can understand your voice commands. Just imagine saying, "Hey, Mercedes," and your car does what you ask. Pretty cool, right? So, how did this magic happen? Well, Mercedes-Benz got creative, and with the help of Microsoft Azure's OpenAI Service, they made it possible to enhance the driver experience in Mercedes-Benz cars. AI on Microsoft Azure is a collection of cloud-based AI services that provides pre-built AI models and APIs. These services and tools help developers and businesses build, deploy, and manage their AI applications. Azure AI services enable a wide range of AI capabilities, from machine learning and deep learning to natural language processing and computer vision. Azure thus provides a powerful platform for building intelligent applications using advanced analytics, machine learning, and artificial intelligence. Let's explore the real-world intelligent apps you can build on Azure. First, we have connected products: devices that communicate and exchange data, like smart home gadgets and industrial machines. These benefit from efficiency, reducing maintenance costs with predictive maintenance, and improve the customer experience by providing real-time monitoring and control. Next is transaction processing at scale: systems that handle large volumes of transactions, such as online shopping or banking systems, which benefit from scalability, reliability, and speed. Next we have support bots and information discovery: chatbots and AI systems that provide automated customer support and information retrieval, offering advantages such as 24-hour availability, cost savings, and improved customer satisfaction. Next we have personalization and recommendation: recommendation engines tailor content, products, and services to individual users based on their preferences and behavior. This is commonly used in e-commerce, streaming services, and content delivery platforms, leading to business benefits such as increased engagement and higher conversion rates. And next we have AI copilots: intelligent assistants that provide suggestions and automate tasks, enhancing productivity and offering benefits such as efficiency and innovation. Now let's have a look at the Azure AI services. First we have Azure AI Search, which adds smart search to your app; for example, it helps users find products quickly on an e-commerce site. Next, Azure OpenAI Service, which performs tasks like text generation and language understanding; for example, it powers chatbots that understand and respond like a human. Then we have Bot Service, which creates intelligent chatbots; for example, it provides automated customer support on your website. Next, Content Safety, which detects unwanted content; for example, it filters out inappropriate comments on social media. Next is Custom Vision, which customizes image recognition for specific needs, like identifying defective products on a production line. Next comes Document Intelligence, which turns documents into useful data; for example, it extracts information from invoices automatically. The next service is Face, which recognizes faces and emotions in an image; for example, it verifies identity for secure access. The next AI service is Immersive Reader, which helps people read and understand text better.
For example, it assists students with reading difficulties. The next Azure AI service is Language, which understands and processes natural language; for example, it analyzes customer reviews to determine satisfaction. Up next is Speech, which converts speech to text and vice versa and translates languages; for example, it provides real-time subtitles during a conference call. Next, Translator, which translates text between different languages, like converting product descriptions into multiple languages. Then we have Video Indexer, which analyzes video content to find useful information, like tagging scenes in videos for easier search. And we also have Vision, which analyzes and understands images and videos; for example, it automatically tags and sorts photos in an app. These services help you build smarter applications that can search, understand language, recognize images and speech, and more, making your work easier and more efficient. Following this, let's discuss what type of AI Azure offers. Microsoft Azure offers a wide range of AI services that cover several types of AI, designed to cater to different use cases and user needs. These types of AI in Azure include machine learning: Azure Machine Learning provides tools for building, training, and deploying machine learning models. It supports various approaches such as supervised, unsupervised, and reinforcement learning, allowing users to create predictive models for tasks such as classification, regression, and clustering. Next is Azure Cognitive Services, which is like a smart toolbox full of ready-to-use tools for understanding and processing information. Developers can easily add these tools to their apps without needing to be AI experts. For example, with Computer Vision, apps can see and understand images and videos. And Speech services let apps

### [9:05:40](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=32740s) AI for Ethical Hacking

understand and talk back to users, even translating languages. And Language services help apps understand and process human language, like analyzing sentiment or translating text. Then we have Decision services, which help apps make smart decisions, like spotting anomalies or giving personalized recommendations. In addition to this, Azure also supports deep learning, which is used to build advanced models using frameworks like TensorFlow, and conversational AI, like creating chatbots that can talk to users naturally. Then automated machine learning makes it easier to build quality models without deep ML knowledge. And finally, reinforcement learning, which is useful for teaching models to learn from rewards or penalties, like in games or chatbots. Moving ahead, let us understand how AI works in Azure. Have you ever wondered how AI works in Microsoft Azure? Well, let me break it down for you. Azure offers a bunch of cloud-based tools and services, as we already discussed, that help you build, deploy, and manage AI applications. Here's a quick overview of how it all comes together. First up, we have the core components. The first is data preparation and storage: you can store all your data, whether it's images, videos, or documents, in services like Azure Blob Storage or Azure Data Lake. Next we have model development and training: Azure Machine Learning is your go-to here. It gives data scientists and developers the tools they need to build, train, and deploy machine learning models, plus there's support for automated machine learning, making it super easy to pick the best model for your task. Next, we have pre-built AI services: Azure Cognitive Services provides pre-built AI capabilities. Need to recognize images or convert speech to text? It's got you covered. Next, deployment and management: once your model is ready, you can deploy it using services like Azure Kubernetes Service or Azure Functions without worrying about managing infrastructure. Finally, we have analytics and insights: Azure Synapse Analytics helps you analyze large data sets to get valuable insights, which you can then use to improve your AI models. Now, let's walk through a typical workflow. First, you start by collecting data from various sources and storing it in Azure. Then, you clean and analyze the data using tools like Azure Databricks. Next, you develop your model using Azure Machine Learning, experimenting with different algorithms and techniques. Once your model is trained and evaluated, you deploy it to production. And finally, you monitor its performance and make adjustments as needed. And there you have it: that's how AI works in Microsoft Azure. Pretty cool, right? Now, let's discuss the awesome benefits of Azure AI services and how they can supercharge businesses and developers alike. First, we have ease of use: Azure AI makes it super easy to add advanced AI capabilities to your apps without needing to be an AI expert. Next is scalability and flexibility: whether your app gets a few users or millions, Azure can handle it. Then we have integration capabilities: Azure plays nicely with others, making it easy to connect with other Azure services and your favorite tools. Then security and compliance: your data is safe and sound thanks to Azure's top-notch security features and compliance certifications. The next benefit is cost efficiency: with Azure, you only pay for what you use, saving you money and keeping your budget happy.
The next benefit is advanced analytics and insights: get real-time insights from your data to make smarter decisions and uncover hidden patterns. Finally, we have customization and personalization, so you can tailor your AI models to fit your needs and keep making them better over time.

Now let us discuss the six principles of Azure AI. First we have fairness. Azure AI treats everyone equally without showing bias, so it works to make sure its decisions are fair and unbiased. The next principle is reliability and safety. Azure AI is dependable and safe to use: it's designed to work consistently and predictably, with safeguards in place to prevent accidents or mistakes. Next comes privacy and security. Azure AI keeps your information safe and private; it uses strong security measures to protect your data from being accessed or used by unauthorized people. Next is inclusiveness. Azure AI is for everyone: it's made to be easy to use for people of all backgrounds and abilities, including those with disabilities. The fifth principle is transparency. Azure AI is open about how it works: it explains its decisions and gives you control over your data, so you know what's happening and can trust it. And the sixth principle is

### [9:10:34](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=33034s) AI on Microsoft Azure

accountability. Azure AI takes responsibility for its actions: it's accountable for using AI responsibly and ethically, and it's open to feedback and oversight to make sure it does the right thing. Now moving on to what Microsoft AI is used for. It can be used for productivity and collaboration: Microsoft AI powers features in tools like Microsoft Office, improving collaboration and automating tasks to enhance workplace efficiency. It is also used in customer service, where AI-driven chatbots and virtual assistants provide personalized customer support, improving satisfaction and reducing wait times. Next, it can be used in healthcare as well: Microsoft AI aids in medical imaging analysis, patient diagnosis, and drug discovery, leading to improved patient outcomes and more efficient healthcare delivery. It is also used in finance, where AI handles fraud detection, risk assessment, and algorithmic trading, helping institutions make informed decisions and mitigate risk. It is used in manufacturing and logistics, where Microsoft AI enables predictive maintenance, quality control, and supply chain optimization, increasing operational efficiency and reducing costs. And it is also used in gaming, smart cities, retail, education, and more. As a conclusion, I just want to say that AI on Microsoft Azure offers a robust platform for businesses as well as developers to leverage artificial intelligence efficiently. With a range of tools and services, Azure AI simplifies AI development, deployment, and management across industries. By providing scalable, secure, and reliable solutions, Azure empowers organizations to drive innovation and deliver personalized experiences to users, ensuring competitiveness in the digital era.

Artificial intelligence and machine learning are not just trending technologies anymore; they're becoming the backbone of every industry. And right now, the demand for AI and ML engineers is exploding worldwide. In India, AI engineers earn anywhere from 8 lakhs to 36 lakhs per year depending on skills and experience. In the United States, the same roles can start from $120,000 and go all the way up to $250,000 for senior and specialized positions. So, if you have been thinking about getting into AI and ML, switching careers, or upskilling for higher-paying opportunities, there has never been a better time. In this video I am giving you a complete beginner-friendly and deeply practical AI and ML engineer roadmap that shows you exactly what to learn and how to grow in this booming field.

The first step of your AI journey starts with strong foundations. And no, you don't need to be a mathematician; you simply need the essentials. Start with Python, because Python is the language that powers almost every modern AI system. Focus on the basics like variables, loops, functions, lists, dictionaries, file handling, and how to work with APIs. These skills are enough to write simple programs and understand AI code. Next, learn data handling, because AI is built on data. Use pandas to clean and organize data, NumPy to perform calculations, and Matplotlib or Seaborn to visualize patterns. Even simple tasks like removing missing values or analyzing sales trends will prepare you for real AI projects; a small sketch of this step follows below. Then learn the essential math behind AI. Not heavy equations, just a basic understanding: know what mean, median, variance, probability, and basic correlations are, and what vectors and matrices are.
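As promised, here is a minimal pandas sketch of the data-handling step. The sales.csv file and its column names are hypothetical, so swap in any small dataset you have.

```python
import pandas as pd

# Load a (hypothetical) sales file and clean it up a little.
df = pd.read_csv("sales.csv")
df = df.dropna(subset=["revenue"])            # drop rows missing revenue
df["month"] = pd.to_datetime(df["date"]).dt.to_period("M")

# A simple sales-trend summary: total revenue per month.
print(df.groupby("month")["revenue"].sum())
```

That handful of lines already covers the everyday moves, loading, cleaning, and aggregating, that most real AI projects start with.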
Also learn what gradient descent means conceptually, so you understand how models learn without getting buried in complex math. Once your foundations are ready, move to machine learning. ML is simply teaching computers to learn from examples: instead of writing instructions, you show the model real-world data and let it find patterns. Learn key ML concepts like training and testing, accuracy and precision, underfitting and overfitting, cross-validation, and feature engineering. These concepts help you understand how to build, tune, and improve models. Then learn the core ML algorithms that companies use every single day. Start with linear regression, logistic regression, decision trees, random forests, SVMs, Naive Bayes, K-means clustering, and PCA. These algorithms cover most practical business problems, like predicting sales, detecting fraud, segmenting customers, and identifying patterns in large data sets. Then build small ML projects such as a house price predictor, a spam email classifier, a credit score predictor, or a customer segmentation model. These projects give you confidence and make your portfolio job-ready.

After ML, move into deep learning, the technology behind ChatGPT, self-driving cars, and medical AI. Start by understanding how neural networks work: learn what neurons, layers, activation functions, and loss functions are. You don't need to memorize formulas; just understand how the network adjusts itself to improve predictions. Pick either TensorFlow or PyTorch as your deep learning framework, because both are used by companies in production, so you only need to choose one. Then build deep learning projects like digit recognition, image classification, sentiment analysis, or fake news classification. These projects teach you how to use neural networks in real scenarios.

AI is no longer about learning everything; it's about choosing your specialization. So here we have options. Option A is NLP and LLMs, which has the highest demand. If you want to work with chatbots, smart assistants, or language models like ChatGPT, choose NLP and LLMs. You need to learn tokenization, embeddings, transformers, BERT, GPT models, prompt engineering, fine-tuning, RAG, and agentic architectures. And you can build projects like AI chatbots, document search tools, question-and-answer systems, or customer support bots. Option B is computer vision. If you like working with images and videos, choose computer vision. Learn CNNs, YOLO, object detection, and segmentation, and build real-world projects like face detection, medical image analysis, CCTV monitoring systems, or vehicle counting tools. Option C is generative AI. If you enjoy creativity, choose generative AI. Learn GANs, VAEs, and diffusion models, and build applications like AI art generators, product design tools, image-to-image systems, or video generation models. Option D is MLOps. If you prefer infrastructure and deployment, then choose MLOps. Learn Docker, Kubernetes, MLflow, CI/CD pipelines, cloud deployment, and model monitoring, and build projects that focus on deploying ML and LLM models into real environments.

All right. Agentic AI is the biggest trend of 2026. These are not just models; these are intelligent agents that can reason, plan, use tools, and take actions. Learn how agents work with frameworks like LangChain, LangGraph, and crew-based agent architectures. Understand tool calling, memory systems, planning, and multi-agent collaboration; a minimal tool-calling sketch follows below.
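To make "tool calling" concrete, here is a minimal sketch using LangChain. It assumes the langchain-openai package is installed and an OPENAI_API_KEY is set in your environment, and the stock-price tool is a stub invented purely for illustration.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_stock_price(ticker: str) -> str:
    """Look up the latest price for a stock ticker (stubbed here)."""
    return f"{ticker}: 123.45"  # a real agent would call a market-data API

# Bind the tool so the model can decide to call it.
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([get_stock_price])
response = llm.invoke("What is AAPL trading at right now?")
print(response.tool_calls)  # the model's request to run get_stock_price
```

An agent framework would then execute the requested tool and feed the result back to the model; that loop of reason, call, and observe is the heart of agentic systems.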
Also build agentic projects like an AI research assistant, an autonomous email automation agent, a financial analysis agent, or a customer service automation agent. These projects stand out in interviews, because companies want people who can build intelligent workflows, not just models. Now that you have the skills, you need projects that prove it. Your portfolio should have two machine learning projects, two deep learning projects, two specialization projects, and one real end-to-end AI system. This final project could be a chatbot with RAG, a vision-based attendance system, an AI assistant with memory, or a complete ML pipeline deployed on the cloud. And finally, upload your work to GitHub, write clear documentation, and add deployment links so employers can test your work instantly. With this roadmap you can apply for the most in-demand roles in 2026, such as AI engineer, machine learning engineer, LLM engineer, NLP engineer, generative AI engineer, computer vision engineer, MLOps engineer, or AI automation specialist. AI is not just the future; it's the career shift of today. If you follow this roadmap step by step, you will build the skills, the projects, and the confidence to enter the world of AI and machine learning.

First up, we have LangChain. LangChain is an open-source framework that simplifies the integration of large language models into applications. Launched in October 2022 by Harrison Chase, it has quickly become a popular tool among developers for building AI-powered solutions. LangChain supports multiple programming languages, including Python and JavaScript, and offers standardized interfaces to connect various LLMs, facilitating tasks such as chatbots, document analysis, and code generation. Now that we understand what LangChain is, let's explore the key features that make it a powerful tool for AI-powered applications. LangChain offers several essential features that enhance its capabilities, such as prompt management, which helps in structuring and optimizing prompts for better AI responses. Next is memory and context handling: it enables AI models to remember past interactions, making conversations more natural. Next is seamless API integration, which allows easy connection with various AI models, databases, and external tools. Next, it is modular and customizable, so developers can tailor LangChain to their specific needs using its flexible components. The next key feature is multi-LLM support: it works with multiple language models, giving users the flexibility to choose the best one for their use case. These features make LangChain a go-to choice for building AI-driven applications with efficiency and scalability.

Now that we have covered the key features of LangChain, let's dive into some real-world applications where it plays a crucial role. LangChain is widely used across various industries to enhance AI-driven solutions. Here are some of its key use cases. First we have customer support chatbots: LangChain enables intelligent chatbots that provide instant and accurate responses, improving the customer experience. Next is AI research assistance: it helps researchers by summarizing information, generating insights, and answering complex queries. The next use case is document analysis: LangChain can process and analyze large volumes of documents, extracting relevant information efficiently. Next is e-commerce personalization.
It enhances the online shopping experience by providing tailored recommendations and intelligent search functionalities. Next is AI-powered email assistance: here LangChain helps automate email responses, draft messages, and manage communication effectively. With these powerful applications, LangChain is transforming the way AI interacts with businesses and users. Now let's move on to the next framework, which is Langflow. Langflow is a user-friendly, low-code interface designed to simplify the process of building LangChain applications. It provides a visual way to create AI workflows, making it easier to design,

### [9:23:00](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=33780s) AI and Industry 4.0

test, and deploy applications without writing extensive code. By leveraging Langflow, developers can seamlessly integrate different components and streamline their AI development process. So now let's take a closer look at how we can get started with it. Now that we have a basic understanding of Langflow, let's explore some of its key features that make it an efficient tool for building AI applications. Langflow comes with several powerful features that enhance the AI development experience. First is the drag-and-drop interface: Langflow offers an intuitive drag-and-drop interface, allowing users to visually create and manage workflows without extensive coding. The next feature is pre-built AI components: it provides a collection of ready-to-use AI components, making it easier to integrate different functionalities into your application. Next is real-time debugging: you can test and troubleshoot your workflows instantly, ensuring smooth functionality. The next feature is flexible deployment: Langflow supports flexible deployment options, enabling users to implement their projects in various environments with ease. Lastly, it is designed to be beginner-friendly, allowing even those with minimal coding experience to build AI-powered applications efficiently. With these features, Langflow simplifies the process of developing AI applications, making it an excellent tool for both beginners and experienced developers.

Now that we have seen the key features of Langflow, let's dive into its practical applications across various industries. Langflow has a wide range of use cases, making it a valuable tool in different domains. First we have education assistance: AI-powered assistants help students with personalized learning, answering queries and providing study recommendations. The next use case is healthcare chatbots: these chatbots assist with patient interaction, symptom checking, and appointment scheduling, improving healthcare accessibility. Next, as a financial advisor: AI-driven financial advisors provide insights, investment suggestions, and risk assessments for better financial decision-making. Next is automated lead generation, where businesses can leverage AI to identify potential customers, qualify leads, and streamline sales efforts. And finally, marketing automation: AI enhances marketing campaigns by automating content recommendations, customer segmentation, and engagement tracking. These use cases demonstrate how Langflow is transforming industries by enabling intelligent automation and enhanced decision-making.

Now, let's move on to the next framework, which is Ollama. Ollama is a platform that enables users to run and interact with large language models locally on their own computers. This approach democratizes AI technology, making it accessible for everyday use without relying on cloud services. By providing access to various fine-tuned LLMs, Ollama allows developers and researchers to integrate sophisticated language understanding and generation capabilities into their applications, such as chatbots, content creation tools, and research projects. Ollama's user-friendly interface and API support make it a versatile tool for implementing AI solutions across different platforms. Now that we understand what Ollama is, let's take a closer look at some of the key features that make it stand out in the world of AI models. Ollama comes packed with powerful features that make it a game-changer in the AI space.
First, we have offline AI processing: it runs AI models locally without relying on cloud services, ensuring faster response times and better control over your data. Next is custom model support: Ollama allows you to fine-tune and integrate custom models, giving you the flexibility to tailor AI to your specific needs. Next is cross-platform compatibility: whether you're using Windows, macOS, or Linux, Ollama runs seamlessly on different operating systems. Next, it is lightweight and efficient: optimized for performance, Ollama ensures smooth execution even on devices with limited resources. And the next feature is privacy first: since everything runs locally, your data stays secure without being sent to external servers. With these features, Ollama makes AI more accessible, efficient, and user-friendly. Ollama's versatility makes it a game-changer across multiple industries, so let's dive deeper into its impact. First, we have offline AI assistance: this allows users to interact with AI models without requiring an internet connection, ensuring accessibility in areas with limited connectivity. Next, Ollama plays a crucial role in AI for healthcare, helping with medical document analysis, diagnostics, and patient data management, ultimately improving healthcare outcomes. Moving on to cybersecurity AI, where AI-driven security solutions help detect and mitigate cyber

### [9:28:01](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=34081s) AI Engineer Learning Path

threats in real time, enhancing digital safety. Ollama also extends its capabilities to legal document processing, automating document review, contract analysis, and compliance checks, making legal work more efficient. Lastly, we see the impact of AI in remote areas, empowering communities with AI-driven solutions even in low-connectivity regions, from education to disaster response. With such diverse applications, it's clear that Ollama is transforming industries and making AI more accessible than ever.

Now let's move on to the next framework, which is LlamaIndex. LlamaIndex is a data framework that enables the connection between large language models and external data sources. It provides tools to structure, query, and manage data, facilitating the integration of LLMs into various applications. By offering a suite of data connectors and indices, LlamaIndex simplifies the process of augmenting LLMs with external information, enhancing their performance and versatility. Now that we have a basic understanding of LlamaIndex, let's explore some of the key features that make it a powerful tool for enhancing LLM capabilities. LlamaIndex comes packed with several powerful features that simplify data handling and enhance large language model interactions. First on the list we have advanced data indexing: LlamaIndex structures and organizes data efficiently, making it easier for LLMs to retrieve relevant information quickly. Next, it is optimized for RAG, which stands for retrieval-augmented generation: it enhances the performance of LLMs by integrating external knowledge sources seamlessly. The next feature is multi-source data ingestion: LlamaIndex supports data integration from multiple sources, including databases, APIs, and documents. Next is efficient query processing: it streamlines the way LLMs interact with data, ensuring fast and accurate responses. And the next feature is seamless LLM integration: LlamaIndex is designed to work smoothly with various large language models, enhancing their ability to process and generate contextual responses. With these features, LlamaIndex empowers developers to build smarter AI applications by bridging the gap between LLMs and external data sources.

Now that we have covered the key features of LlamaIndex, let's explore some of its real-world applications across different industries. LlamaIndex is widely used in various domains to enhance information retrieval and decision-making. Here are some of its key use cases. First is enterprise search engines: organizations leverage LlamaIndex to build intelligent search systems that quickly fetch relevant data from vast internal repositories. The next use case is financial data analysis: it helps in analyzing complex financial data sets, extracting insights, and automating reporting processes. Next is legal research assistance: lawyers and researchers use LlamaIndex to retrieve case law, legal documents, and references efficiently. The next use case is AI-powered journalism: journalists and content creators use it to sift through large volumes of information, summarize reports, and generate insightful articles. Next, in healthcare data retrieval, the medical industry benefits from LlamaIndex by accessing patient records, research papers, and clinical guidelines in a structured manner. By integrating LlamaIndex into these domains, businesses and professionals can streamline workflows, improve accuracy, and easily make data-driven decisions.
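As a flavor of how little code a basic LlamaIndex retrieval setup needs, here is a minimal sketch. It assumes the llama-index package is installed, an OPENAI_API_KEY is set for the default models, and a local ./reports folder of documents that is purely hypothetical.

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load local documents, index them, and ask a question over them.
documents = SimpleDirectoryReader("./reports").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("Summarize the key findings in these reports."))
```

This is the retrieval-augmented pattern in miniature: the index fetches the relevant passages, and the LLM answers grounded in them.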
Now, let's move on to our last framework, which is Hugging Face Transformers. The Hugging Face Transformers library is a powerful open-source toolkit for working with state-of-the-art transformer models. It supports PyTorch, TensorFlow, and JAX, offering pre-trained models like BERT, GPT-2, and T5 for text classification, translation, and summarization tasks. With an easy-to-use API, it simplifies the implementation of NLP, vision, and audio applications. Now that we have a basic understanding of Hugging Face Transformers, let's explore some of the key features that make it a game-changer in AI and NLP development. Hugging Face Transformers comes packed with powerful features that simplify AI development. The features include pre-trained AI models: Hugging Face offers a vast collection of pre-trained models, saving time and resources while achieving high accuracy. Next is multi-framework support: it integrates seamlessly with PyTorch, TensorFlow, and JAX, allowing flexibility for different development needs. Next is state-of-the-art performance: the library provides cutting-edge models optimized for various NLP, vision, and audio tasks. The next feature is easy model deployment: with Hugging Face tools and APIs, deploying models into real-world applications is simple and efficient. The next feature is that it is open source and community-driven: the platform thrives on contributions from a strong developer community, ensuring continuous improvement and innovation. With these robust features, Hugging Face makes AI development more accessible and efficient.

Now that we have explored the features of Hugging Face Transformers, let's take a closer look at some of its real-world applications across various industries. Hugging Face transformer models have transformed AI with their versatile applications, so let's explore some key use cases. Use cases include ChatGPT and Bard alternatives: these models power AI-driven chatbots, enabling human-like conversation and intelligent responses. Next is speech recognition and translation: Hugging Face models enhance voice assistants, transcribe speech to text, and facilitate multilingual communication. Next, AI-powered image recognition: advanced AI models detect objects, recognize faces, and analyze visual content for various industries. The next use case is personalized AI writing assistance: AI-powered tools help users generate text, summarize content, and enhance writing efficiency. Finally, fake news detection: transformers assist in identifying misinformation and verifying news credibility using NLP techniques. With these powerful use cases, Hugging Face is driving innovation across multiple domains. So to sum up, selecting the right framework is essential for building robust and scalable applications. By considering factors like performance, ease of use, and security, you can make an informed choice that aligns with your goals. So choose wisely.

So yeah, AI is changing everything around us: how we shop, learn, work, and even talk to websites. It's fast, innovative, and growing every single day. So, let us go through each of the 15 AI skills one by one and see how they can help you stay ahead in 2025. All right then, without wasting any time, let's dive right into our very

### [9:35:13](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=34513s) Top 15 AI Skills You Need to Know

first AI skill. The first skill we have is generative AI. Generative AI is the future of creativity: it allows machines to generate text, images, and videos. Tools like ChatGPT and Midjourney are transforming industries, and even trends like Ghibli-style AI art show how powerful and fun this tech can be. In the future, content creation won't just be human-driven; it will be a partnership between humans and AI. And if you're wondering where this leads in your career, think roles like AI content strategist, creative technologist, and even generative AI product lead. That's it for generative AI. Let's move on to our next powerful skill, which is machine learning. Machine learning helps computers learn from data and improve over time without being explicitly programmed. Whether it's suggesting your next movie or helping cars drive themselves, it's the brain behind today's smart technology. As industries rely more on data, ML becomes even more critical for problem solving and automation. So, how does this help you? Learning ML can unlock paths like becoming a machine learning engineer, AI researcher, or data science expert.

All right, now let's move on to our next skill. Next up is something every AI model relies on: data analysis. Data analysis is about digging into your data and pulling out the insights that matter. Every company, no matter the industry, wants to know what's working, what's not, and what to do next. That's where data analysis comes in. It's important because without data interpretation, even the most powerful AI models are blind tools. If you love patterns and storytelling with numbers, this could lead you into careers in analytics, business intelligence, and beyond.

Now let's move on to something more conversational: natural language processing. NLP helps machines understand and respond to human language. It's what powers your smart assistants, spam filters, translation tools, and chatbots. As we move toward more human-like AI, NLP becomes even more important for smoother and smarter communication. This is your entry into careers like NLP engineer, AI chatbot designer, or even voice tech specialist. That's it about natural language processing. Now let's uncover the art of asking the right question: prompt engineering. Prompt engineering is all about how to talk to AI tools to get the results you want. It sounds simple, but it's a superpower in today's AI-driven world. Whether it's writing, coding, or designing, great prompts bring out great outputs. This skill is already in demand and growing fast, especially for creative and tech teams. And yes, it can lead you to unique roles like prompt designer, AI workflow trainer, or automation specialist.

So, let's keep the momentum going and meet our next multitasker: AI agents. AI agents are systems that can think, act, and make decisions independently. They are designed to handle tasks for you, like booking appointments, answering queries, and managing work in the background. They are becoming the backbone of personal assistants and business automation tools. If you are into building smart solutions, you could end up as an AI agent developer or automation system architect. Next up, let's talk about building trust in AI: explainable AI. This skill ensures that AI decisions are transparent and understandable, especially in critical areas like healthcare and finance.
It's about helping both users and developers know why AI made a specific choice. As regulations tighten and trust becomes a priority, explainability is no longer optional; it's essential. This area can guide you into roles in AI compliance auditing or trustworthy AI development. That's it about explainable AI. Now let's dive into something closely connected: ethical AI. Ethical AI is about making sure AI systems are fair, safe, and respectful of privacy. From facial recognition to hiring algorithms, ethics ensures we are not harming individuals or communities with technology. This topic is growing fast as companies are held accountable for their AI tools. If you're passionate about responsible innovation, you could work as an AI ethics adviser or policy consultant. That's it about ethical AI.

Now let's move on to the next important skill you need to learn, and that is AI languages. Learning programming languages like Python, R, or Julia gives you the power to build your own AI models. It's not just about coding; it's about creating intelligent systems from scratch. This foundational skill is what every AI developer needs in their toolkit. So if you are looking to become a hands-on AI creator or ML engineer, this is where it begins. That's it about AI languages. Now let's take a step further with our next skill, which is AI-assisted coding. Imagine AI helping you write your code: that's what tools like GitHub Copilot do. They suggest code, detect bugs, and boost your productivity. This is a game-changer, especially for beginners or solo developers. AI-assisted coding lets you build faster and smarter, with fewer errors. And as a bonus, it makes you a more efficient developer, whether you are freelancing or working in teams.

Next up, the skill we have is AI content creation. AI now writes blogs, scripts, and social posts, generates designs, and even composes music. It's transforming how brands and creators deliver content at scale. If you're into marketing, media, or education, this skill is a gold mine. Mastering it can help you work as an AI-powered content creator, editor, or creative strategist. That's it about AI content creation. Now let's talk about how AI is boosting sales, that is, AI in sales. AI is helping businesses find the right leads, send personalized pitches, and automate follow-ups. This is transforming traditional sales teams into smart, data-driven machines. Sales tools powered by AI help improve conversion rates and customer satisfaction. So if you are into business or marketing, this can help you grow into roles like sales analyst, CRM expert, or even AI-driven growth hacker. That's it about AI in sales.

Now let's move on to the next skill you need to master, which is AI product management. AI projects need more than just code: they need planning, ethics, and the right team. This skill combines leadership with tech knowledge, ensuring AI systems are deployed correctly. It's crucial in organizations building large-scale smart systems. You can lead cross-functional teams as an AI project manager or even become a strategic product owner. We are almost there now. Let's look at the next skill, which is AI business strategy implementation. This skill focuses on where, how, and why businesses should use AI. It's all about maximizing impact by aligning AI with business goals. Whether it's streamlining operations or creating new revenue streams, this skill helps decision makers act smarter.
And for you, it's a gateway to leadership roles like AI strategist, innovation manager, or transformation lead. And finally, we are at our last skill, which is AI in cybersecurity. AI is helping spot cyber threats in real time, block attacks before they happen, and secure sensitive data. It's essential in today's digital world, where threats are evolving faster than ever. Companies are now relying on AI to protect their networks, users, and reputation. If you're into security and AI, you can dive into roles like threat analyst, AI-based security specialist, or ethical hacker using AI tools. So yeah, AI is changing the game in almost every industry, and the right skill can open up amazing opportunities. From generative AI to AI in cybersecurity, each of these areas offers a unique path to grow and stand out. So, which AI skill are you planning to learn first? Let me know in the comment box below.

You've applied to dozens, maybe even hundreds of jobs, but the interview calls just never come. Sounds familiar? You're not alone. Many job seekers face the same problem, and the reason isn't always a lack of qualifications. The real culprit: the applicant tracking system, or ATS. Companies use ATS software to filter through thousands of

### [9:43:33](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=35013s) Build AI Applicant Tracking System (ATS)

applications. If your résumé isn't optimized for these systems, it might never even reach a human recruiter. That's where an ATS résumé tracker comes in. This tool scans your résumé just like an ATS would, highlighting formatting issues, missing keywords, and other factors that could be holding you back. Stop letting your résumé get lost in the system: optimize it with an ATS résumé tracker and take control of your job search today. In this video, we'll be creating a powerful ATS résumé tracker that not only analyzes your résumé but also identifies issues and provides AI-powered suggestions for improvement using Google's generative AI. So stay tuned till the end. But before we begin, please like, share, and subscribe, and remember to hit the bell icon to stay updated on the latest tech content on Edureka's YouTube channel. Edureka's Generative AI Course: Masters Program helps you master the principles of generative AI and implement them in real-world applications. This course includes training on artificial intelligence, Python programming, data science, natural language processing, generative AI, prompt engineering, ChatGPT, and more, plus five-plus hands-on projects to help you apply what you learn in real-world scenarios. The curriculum is meticulously designed by industry experts based on an analysis of 5,000-plus global job descriptions. Edureka's generative AI and ML postgraduate program, designed by industry experts, prepares you for roles like senior machine learning engineer and AI research scientist, with nine-plus projects, 15-plus use cases, and 100-plus hands-on labs. The program covers key concepts like Python, machine learning, deep learning, text mining, and NLP to strengthen your AI and ML skills.

All right, let's get started. What exactly is an ATS, and why does it matter? An ATS, or applicant tracking system, is software used by companies to filter and rank résumés before they reach a human recruiter. It scans for keywords, formatting, and structure, rejecting résumés that don't meet its criteria. Common issues like missing keywords, complex formatting, or improper file types can get your résumé discarded instantly. That's why ATS optimization is crucial: it ensures that your résumé gets past the filters and into the hands of hiring managers. Now that you know what an ATS is, have you ever wondered how an ATS processes résumés in the back end? Here's how it works. When you submit your résumé, the ATS parses the content, extracting key details like your name, contact info, skills, and work experience. It then compares your résumé against the job description, looking for relevant keywords and formatting that matches the role. Next, it scores your résumé based on keyword relevance, structure, and readability. If your résumé doesn't meet the system's criteria, whether due to missing keywords, complex formatting, or incorrect file types, it might get filtered out before a recruiter ever sees it. That's why optimizing your résumé for ATS is so important: by structuring it correctly and including the right keywords, you can boost your chances of getting past the filters and into the hands of hiring managers.
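Before building the real thing, here's a toy illustration of the keyword-relevance idea described above. Real ATS scoring is far more sophisticated, and both strings here are made up.

```python
# Toy ATS-style keyword matching: what fraction of job-description
# terms show up in the resume, and which ones are missing?
job_description = "python machine learning sql aws docker"
resume_text = "Built machine learning pipelines in Python using SQL"

jd_terms = set(job_description.lower().split())
resume_terms = set(resume_text.lower().split())

matched = jd_terms & resume_terms
print(f"match: {len(matched) / len(jd_terms):.0%}")          # 67%
print(f"missing keywords: {sorted(jd_terms - resume_terms)}")  # aws, docker
```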
Now, the question is, will companies keep using ATS trackers in the future? The answer is a resounding yes. The future of the ATS tracker is evolving fast: as hiring becomes more data-driven, companies are relying on ATS software more than ever to streamline recruitment. With AI and machine learning advancing, ATS systems are getting smarter, analyzing not just keywords but also context, experience, and even writing style. Will companies continue using ATS? Absolutely. As job applications increase, businesses need efficient ways to filter top talent. For job seekers, this means ATS optimization isn't just a trend; it's becoming a necessity, and adapting to these changes can give you a real edge in landing interviews and securing your dream job.

Now, the part you've been waiting for: our own ATS. Open VS Code or any code editor and let's get started. Let's start by creating our ATS résumé tracker project. First, we need to set up a conda environment to manage our project dependencies. To do this, we open the terminal and run `conda create -p venv python=3.10 -y`, and conda creates a new environment. Next, we'll import all the required libraries. These will help us handle PDF processing, environment variables, image conversion, and interactions with Google's Gemini AI model. We write `import base64` and `import io`, which are used for encoding the résumé PDF pages as images for processing. Then `from dotenv import load_dotenv`, which helps us load the API key securely from a .env file. Then we write `import streamlit as st`, which is used to build the web-based ATS tracking system, followed by `import os`, `from PIL import Image`, and `import pdf2image`, which converts PDF files into images, and finally `import google.generativeai as genai`, which connects to Google's Gemini AI for résumé analysis. Next, we configure the Gemini API by retrieving the API key from the environment variables: we write `genai.configure(api_key=os.getenv(...))`, where os.getenv fetches the API key from the .env file to keep it secure, and genai.configure initializes the connection to Gemini.

Now we define a function that takes the user input, the extracted PDF content, and a prompt, and sends them to Gemini for analysis: `get_gemini_response(input, pdf_content, prompt)`. Inside it, `model = genai.GenerativeModel("gemini-1.5-flash")` loads the Gemini 1.5 Flash vision-capable model, `response = model.generate_content([input, pdf_content[0], prompt])` feeds the job description, résumé content, and prompt to Gemini, and `return response.text` extracts and returns the AI-generated response. Next, let's create a function to process the uploaded PDF: `input_pdf_setup(uploaded_file)`. If the uploaded file is not None, we convert the PDF to images with `images = pdf2image.convert_from_bytes(uploaded_file.read())` and take `first_page = images[0]`. To convert it to bytes, we create `img_byte_arr = io.BytesIO()`, save the page with `first_page.save(img_byte_arr, format="JPEG")`, and call `img_byte_arr.getvalue()`. We then build `pdf_parts`, a list containing a dictionary with `"mime_type": "image/jpeg"` and `"data": base64.b64encode(...).decode()`, and return it; otherwise we raise `FileNotFoundError("No file uploaded")`. Here, pdf2image.convert_from_bytes converts the PDF into images; first_page selects the first page, which usually contains the main résumé content; io.BytesIO prepares the image for conversion; first_page.save saves the page as a JPEG; and base64.b64encode encodes the image to Base64, which is required for AI processing. In short, this function converts the uploaded PDF résumé into an image, encodes it in Base64, and prepares it for processing by Gemini.
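Putting the dictated pieces together, here is a cleaned-up sketch of the setup and the two helper functions. It assumes the streamlit, python-dotenv, Pillow, pdf2image, and google-generativeai packages are installed (pdf2image also needs the poppler utility on your system), and that the key name in your .env file is GOOGLE_API_KEY, which the video doesn't spell out.

```python
import base64
import io
import os

import pdf2image
import streamlit as st
import google.generativeai as genai
from dotenv import load_dotenv
from PIL import Image  # pdf2image returns PIL Image objects

load_dotenv()  # read the key from the .env file
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))  # key name assumed


def get_gemini_response(input_text, pdf_content, prompt):
    """Send the job description, resume image, and prompt to Gemini."""
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content([input_text, pdf_content[0], prompt])
    return response.text


def input_pdf_setup(uploaded_file):
    """Convert the first page of the uploaded PDF into a base64 JPEG part."""
    if uploaded_file is not None:
        images = pdf2image.convert_from_bytes(uploaded_file.read())
        first_page = images[0]
        img_byte_arr = io.BytesIO()
        first_page.save(img_byte_arr, format="JPEG")
        pdf_parts = [{
            "mime_type": "image/jpeg",
            "data": base64.b64encode(img_byte_arr.getvalue()).decode(),
        }]
        return pdf_parts
    raise FileNotFoundError("No file uploaded")
```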
Let's set the page configuration first. For that we write `st.set_page_config(page_title="ATS Resume Expert")` and keep the layout centered. Then we add the page header: an `st.markdown` call containing an h1 tag styled with `text-align: center` and a color of your choice, with the text "ATS Tracking System", and `unsafe_allow_html=True` so the HTML renders. Next comes the job description input: `input_text = st.text_area("Enter job description below", key="input", height=200)`. st.set_page_config sets the page title for the Streamlit app, and st.text_area provides a text input area where users can paste a job description. Now, if a file is uploaded, we notify the user that the PDF was successfully received. For that, we check the upload status: `uploaded_file = st.file_uploader("Upload your resume (PDF only)", type=["pdf"])`, and if uploaded_file is not None, `st.success("Resume uploaded successfully")`. The None check confirms a résumé was uploaded, and st.success displays a success message.

Now let's create buttons for the different ATS functions, letting the user analyze the résumé or get improvement suggestions. We write `col1, col2 = st.columns(2)`; within column 1, `submit1 = st.button("Tell me about the resume")`, and within column 2, `submit2 = st.button("How can I improve my skills?")`. st.button creates the two buttons: "Tell me about the resume" for a general evaluation, and "How can I improve my skills?" for improvement suggestions. Next we define the prompt templates, two different prompts for Gemini: one for a general résumé evaluation and another for percentage-based ATS scoring. I have already created the prompts, and you can write your own using GenAI models like GPT or Gemini. input_prompt1 makes Gemini act like an HR manager evaluating the résumé, and input_prompt2 makes Gemini act as an ATS scanner, providing a match percentage and missing keywords.

Now let's handle the button clicks and call Gemini. When a user clicks a button, we check whether a résumé is uploaded, process it, send it to Gemini, and display the response. We write: if submit1, and if uploaded_file is not None, then `pdf_content = input_pdf_setup(uploaded_file)`, `response = get_gemini_response(input_prompt1, pdf_content, input_text)`, followed by `st.subheader("The response is:")` and `st.write(response)`; otherwise `st.write("Please upload the resume")`. Then, elif submit2: if uploaded_file is not None, we call input_pdf_setup and get_gemini_response again, this time with input_prompt2, and display the result with st.subheader and st.write as before; otherwise `st.write("Please upload the resume")`. So if "Tell me about the resume" is clicked, Gemini evaluates the résumé; if "How can I improve my skills?" is clicked, Gemini suggests improvements; and if no résumé is uploaded, the app prompts the user to upload one first.
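Assembled as a sketch, the UI and handler portion looks roughly like this. The two prompt strings are placeholders, since the video doesn't read out the full prompts, and this continues the helpers defined above.

```python
st.set_page_config(page_title="ATS Resume Expert", layout="centered")
st.markdown("<h1 style='text-align: center;'>ATS Tracking System</h1>",
            unsafe_allow_html=True)

input_text = st.text_area("Enter job description below", key="input", height=200)
uploaded_file = st.file_uploader("Upload your resume (PDF only)", type=["pdf"])
if uploaded_file is not None:
    st.success("Resume uploaded successfully")

col1, col2 = st.columns(2)
with col1:
    submit1 = st.button("Tell me about the resume")
with col2:
    submit2 = st.button("How can I improve my skills?")

# Placeholder prompts -- write your own, as suggested in the video.
input_prompt1 = "You are an experienced HR manager. Review this resume..."
input_prompt2 = "You are an ATS scanner. Give a match percentage and missing keywords..."

if submit1:
    if uploaded_file is not None:
        pdf_content = input_pdf_setup(uploaded_file)
        response = get_gemini_response(input_prompt1, pdf_content, input_text)
        st.subheader("The response is:")
        st.write(response)
    else:
        st.write("Please upload the resume")
elif submit2:
    if uploaded_file is not None:
        pdf_content = input_pdf_setup(uploaded_file)
        response = get_gemini_response(input_prompt2, pdf_content, input_text)
        st.subheader("The response is:")
        st.write(response)
    else:
        st.write("Please upload the resume")
```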
So now it's testing time. We open the terminal, run `streamlit run app.py`, and it's running. First, let's paste in a job description from LinkedIn. Now let's upload our résumé and check. And boom, it gives us everything we need: as you can see, the evaluation and suggestions are generated using generative AI. This is a nice example of an ATS agent built with GenAI. In conclusion, this code builds an AI-powered ATS résumé tracker with Streamlit and Gemini. It allows users to upload a résumé, get professional AI-driven feedback, see an ATS match percentage, and improve their résumé with AI insights.

Let's get started with our basic-level questions. First we have: what is the difference between AI, machine learning, and deep learning? I'm sure all of you have this question at the top of your mind, because there's a huge confusion between AI, machine learning, and deep learning. So let's try to understand how they are different. Now, first of all, AI came into existence at around the 1950s. All

### [9:59:07](https://www.youtube.com/watch?v=cG1W_AgYRRo&t=35947s) Artificial Intelligence Interview Questions & Answers

right, this was followed by machine learning, and then deep learning was introduced. Now, AI basically represents simulated intelligence in machines, which means it represents any robot or machine that can mimic the behavior of a human being. Machine learning, on the other hand, is the practice of getting machines to make decisions without being explicitly programmed to do so. Now, if you don't program a machine, how are you going to let it make decisions? The way machines learn is through data, so the most important thing in machine learning is the data: you train machines using data so that they can make their own decisions. Next we have deep learning. Deep learning is basically the process of using artificial neural networks to solve complex problems. You can think of deep learning as a field that tries to mimic our brain: just as we have neural networks in our brain, deep learning uses the concept of artificial neural networks to solve problems.

Now, AI is a subset of data science. First of all, data science is the process of deriving useful insights from data; it's a process of extracting information from data that will help you solve problems. So AI is a subset of data science. Machine learning, in turn, is a subset of AI and data science, because machine learning comes after AI: in AI you make use of the techniques and concepts of machine learning in order to solve problems. Then we have deep learning. So it's sort of a hierarchy: first we have data science, then AI, then machine learning, and then deep learning. Deep learning is a subset of machine learning, AI, and data science. I hope this is clear. Now, the main aim of artificial intelligence is to build machines in such a way that they're capable of thinking like human beings: they must be able to mimic the behavior of a human being. The aim of machine learning, on the other hand, is to make machines learn by providing them with a lot of data. Once you make a machine learn through data, it's going to be able to solve complex problems and find solutions. The aim of deep learning is to build neural networks that are able to solve more advanced and complex problems. Like I mentioned, deep learning is like an artificial brain: you're basically building an artificial brain that is able to think the way we do. That's what deep learning is; it's a little more advanced than machine learning. In short, AI, machine learning, and deep learning are all used to solve problems through data: AI makes use of the techniques and methods of machine learning and deep learning to solve problems and draw useful insights from data. So this is the difference between AI, machine learning, and deep learning. I hope all of you are clear on this.

Now let's look at question number two: what is artificial intelligence? Give an example of where AI is used on a daily basis. There are a lot of definitions of AI on the internet. One of them is: artificial intelligence is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. So, like I said, a machine that is able to mimic the behavior of a human being is what we call artificial intelligence.
Another such definition is: the capability of a machine to imitate intelligent human behavior. So artificial intelligence, in short, is a machine we created that can act and think like a human being. Now, where do you think AI is used on a daily basis? There are tons of applications that make use of AI, but one of the most popular is the Google search engine. If you open up Google search and start typing anything, you immediately get recommendations. These recommendations are derived using machine learning algorithms, deep neural networks, and so on. So, off the top of my head, the most common example of AI is the Google search engine. All of us use it, and we know how quick it is with its results and how relevant the searches it gives us are. All of this is because of AI.

Now let's look at our next question, which asks: what are the different types of AI? A lot of people might not be aware of this, because a couple of the types of AI are hypothetical: we haven't actually implemented these machines in the real world; we just have a theoretical definition of them. Let's look at what I'm talking about. First of all, we have reactive machines AI. These machines are based entirely on present actions: they have no memory, no concept of storing memory, so they cannot learn from their experience. They just react in the moment; they cannot use previous experiences to inform current decisions or update their memory. Then we have limited memory AI. This type of AI has some temporary storage of memory. If a machine has some memory stored, it can look back into that memory and try to make decisions based on past experiences. Limited memory AI makes use of that concept: there is temporary memory here, not permanent memory. One of the top applications of limited memory AI is self-driving cars. I'm sure all of you have heard of self-driving cars; they make use of limited memory AI in order to run. Then we have theory of mind AI. Like I mentioned earlier, there are a couple of types of artificially intelligent machines which are not actually implemented in the real world, and theory of mind AI is an example. This is an advanced machine which would have the ability to understand emotions, people, and other things in the real world. We might have come close to this type of AI, but we haven't actually developed something that can understand emotions. Next we have self-aware AI. This is another example of a machine that has not been built in the real world. It includes any machine that has consciousness, that can react just like a human being: a machine that can make its own decisions and form its own conclusions without any human intervention. This kind of AI has not been developed, like I mentioned, because it would take up a lot of resources, and we still haven't reached that peak of evolution yet. Then we have artificial narrow intelligence. These are the general-purpose AI systems we see on a daily basis. I'm sure all of you have used Google Assistant or Siri; all of that comes under artificial narrow intelligence. After that we have artificial general intelligence.
Now, these are a little more advanced than artificial narrow intelligence. Then we have artificial superhuman intelligence, one of the most advanced types of AI there is. Like I mentioned earlier, a couple of types of AI are not actually implemented in the real world, and artificial superhuman intelligence is an example of that. So guys, these were the different types of AI.

Now let's look at the next question, which says: explain the different domains of artificial intelligence. AI covers a lot of different domains, starting with machine learning. Machine learning, like I mentioned earlier, is the science of getting computers to act by feeding them data and letting them learn a few tricks on their own, without being programmed to do so. You're not explicitly programming the machine; instead, you're feeding it a lot of data so that it understands the data and makes its own decisions. Then we have neural networks. Neural networks are a set of algorithms, or you can say a set of techniques, which are modeled in accordance with the human brain. Like I mentioned earlier, deep learning and neural networks go hand in hand: deep learning makes use of neural networks in order to solve complex problems. Then we have robotics. Robotics is a subset of AI which includes different branches and applications of robots. These robots are artificial agents which act in a real-world environment. An AI robot works by manipulating the objects in its surroundings: by perceiving, moving, and taking relevant actions. Then we have expert systems. An expert system is a computer system that mimics the decision-making ability of a human being. Now, I know all of these domains sound very similar, but they take very different approaches to solving a problem; that's the main difference between them. Next we have fuzzy logic systems. Traditional systems usually give output in binary form: if you feed something to a machine, the output is usually yes/no or true/false. But fuzzy logic tries to give an output in the form of degrees of truth, so it's very different when compared to traditional computer systems or traditional programs. Next, we have natural language processing. This is a field of AI that analyzes natural human language to derive useful insights so that it can solve problems. NLP is used heavily on social media platforms: Twitter sentiment analysis is done via NLP, and even Facebook uses NLP for a lot of things. So NLP, fuzzy logic, expert systems, machine learning, neural networks, and robotics are the different domains of AI. I hope all of you are clear on the domains.

Now let's look at our next question: how is machine learning related to artificial intelligence? There is a huge confusion between machine learning and AI; a lot of people tend to believe that AI and machine learning are one and the same thing. I would say that you cannot compare AI and machine learning, because machine learning is a subset of AI: AI makes use of machine learning algorithms and machine learning concepts to solve problems. That's the basic difference, and that is where the confusion ends. Machine learning is a technique which is implemented in artificial intelligence in order to solve problems. I hope this is clear.
Now let's look at the different types of machine learning. There are three: supervised, unsupervised, and reinforcement learning. Supervised learning is the type of learning in which the machine learns by using labeled data. To help you understand, let's look at an example. Let's say you've fed images of apples and oranges into your machine and you've labeled them: you've told the machine, listen, this is an apple, this is an orange, and the output should look like this too. So you're labeling the input as apple or orange, and then asking the machine to output apple or orange. But when it comes to unsupervised learning, you're not going to label them. You just give the machine images of apples and oranges, and it has to figure things out on its own: it has to try to understand the difference between apples and oranges, how they look different, how they have a different color. So in unsupervised learning you don't have a labeled data set: you give the machine an unlabeled data set and ask it to find out and classify which is an apple and which is an orange. That's the difference between supervised and unsupervised learning. Now, reinforcement learning is quite different. Imagine you were left on an isolated island. What would you do? Initially you'd panic and wouldn't know what to do, but after a point you'd start exploring the island, adapting to the climate, looking for food, and learning which food is right for you and which is wrong. You learn from your experience. Similarly, in reinforcement learning, an agent interacts with its environment by producing actions and discovers errors or rewards.

The types of problems supervised learning solves are regression and classification; for unsupervised learning, it is association and clustering; and reinforcement learning solves reward-based problems. The type of data for supervised learning is labeled data; for unsupervised learning it is unlabeled; and for reinforcement learning there is no predefined data. When I say no predefined data, I mean that the reinforcement learning agent has to start by collecting the data: in reinforcement learning, from data collection to model evaluation, the agent does everything. In terms of training, supervised learning provides external supervision in the form of a labeled data set. In unsupervised learning, there's no supervision; that's why it's called unsupervised learning. Again, in reinforcement learning, there's no supervision at all; the agent has to figure everything out. In supervised learning, you map the labeled input to the known output: you teach the machine that this is the input and this has to be the output. In unsupervised learning, you just provide data to the machine, and it has to understand patterns and discover the output. In reinforcement learning, the agent follows the trial-and-error method: there's no predetermined way in which the agent learns; it just has to explore the environment, try out a few things, and learn from that experience. Popular supervised learning algorithms include linear regression and logistic regression; for unsupervised learning we have K-means; and for reinforcement learning, we have Q-learning. A small scikit-learn sketch of the supervised case follows below.
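As a quick, hedged illustration of supervised learning with labeled data, here is a minimal scikit-learn sketch. It assumes scikit-learn is installed and uses the built-in Iris data set in place of the apples-and-oranges example.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Labeled data: features X and known labels y.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Train on the labeled examples, then check accuracy on held-out data.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The train/test split is exactly the "training and testing" concept mentioned in the comparison above: the model learns from one portion of the labeled data and is evaluated on data it has never seen.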
So guys, these were the different types of machine learning, and I also discussed the differences between the three. Now let's move on and look at our next question, which is: what is Q-learning? In the previous slide itself I told you that Q-learning is a type of reinforcement learning algorithm. Basically, here an agent tries to learn the optimal policy from its past experience with the environment, and the past experience of an agent is a sequence of states, actions, and rewards. So what happens is, first of all, you take an agent and you put it in some state s0. The agent performs some action a0; on performing this action it gets a reward r1, and if it gets the reward r1 it moves to state s1. But in case the action is wrong, it gets a negative reward, as in, some points are deducted. So guys, think of Q-learning as a game: you're in state zero, you do some action, and either you get a reward and go to the next state, or else you lose and go back to the same state. Until you learn, you're going to stay in the same state; but if you keep learning and keep receiving positive rewards, you move on to state one, and similarly to state two, state three, and so on. This is what Q-learning is about. Now, the next question is: what is deep learning? Deep learning, like I mentioned earlier, basically mimics the way our brain works; it learns from experience. The main concept behind deep learning is neural networks; in our brain, too, we have networks of neurons, so what deep learning tries to do is use the concept of neural networks to solve complex problems. We're basically trying to mimic our brain. Any deep neural network will have three types of layers. The first is the input layer: this layer receives all the input and forwards it to the hidden layer. In the hidden layer, all the analysis and computation takes place, and once the computation is done, the result is transferred to the output layer. There can be any number of hidden layers, depending on the type of problem you're trying to solve. Then we have the output layer, which is responsible for transferring the information from the neural network to the outside world. So it's as simple as that: the input layer takes in the input, the hidden layers perform the computations, and the output layer gives out the output. This is a short explanation of what deep learning is; of course it's much more complex than this, but in a nutshell, that's exactly what deep learning is.
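As a quick illustration of that input/hidden/output structure, here's a minimal Keras sketch. The layer sizes and the binary-output setup are arbitrary choices for the example, not anything prescribed by the question.

```python
import tensorflow as tf

# A tiny feed-forward network: input layer -> two hidden layers -> output layer.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),               # input layer: 4 features come in
    tf.keras.layers.Dense(16, activation="relu"),    # hidden layer 1: computation
    tf.keras.layers.Dense(8, activation="relu"),     # hidden layer 2: more computation
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output layer: result goes out
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()  # prints the layer-by-layer structure
```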
Now let's look at our next question, which is: explain how deep learning works. Deep learning is based on something known as a neuron; a neuron is the basic unit of the brain. Inspired by the biological neuron, researchers came up with something known as the perceptron, or artificial neuron. In the image on the left-hand side you can see something known as a dendrite: dendrites are the structures which receive input, they receive all the signals that are sent to our brain. Similar to the dendrites, we have the input layer in our artificial neural networks; in the previous slide we discussed that the input layer takes in all the input from outside, and that's exactly what a dendrite does. So basically, a perceptron receives multiple inputs, applies various transformations and functions to them, and then provides an output. Just like how our brain contains multiple connected neurons forming neural networks, we connect networks of artificial neurons, or perceptrons, to form a deep neural network. So an artificial neuron, or perceptron, models a neuron which has a set of inputs, each of which is assigned a specific weight; the neuron then computes some function on these weighted inputs and gives you the output. That is the basic concept of deep learning: there are inputs which carry weights, and these weighted inputs are combined and analyzed in order to give you an output (I'll put a small code sketch of this below, after the Bayesian networks question). Now let's look at our next question: explain the commonly used artificial neural networks. This is a very theoretical question, because explaining how each of them works in detail would take a lot of time, so I'm just going to briefly tell you what each of these networks is and what it does. The feed-forward neural network is the most basic kind of artificial neural network. A feed-forward network is unidirectional: data passes in through the input nodes and leaves through the output nodes. In a feed-forward neural network, the number of hidden layers usually depends on the complexity of the problem. Coming to convolutional neural networks: here the input features are taken in small sets, or batches. This helps the network learn better, because you're feeding batches of images or input to the network; this type of neural network is mainly used for signal and image processing. Next we have recurrent neural networks, a well-known variant of which is the long short-term memory (LSTM) network. These work on the principle of feeding the output of a layer back into the input in order to predict the outcome. This makes them more precise, and a little more complex, compared to convolutional networks. One important point about recurrent neural networks is that they have something known as memory: each neuron holds some information about previous inputs, and the network can use that memory to take actions in the future. They have experiences stored in the form of memory, so they can make decisions based on previous actions. Finally, we have autoencoders. Autoencoders are mainly used for dimensionality reduction and for learning generative models. One more important thing about autoencoders is that the number of units in the output layer and the input layer is the same; this is because the output layer has to reconstruct its own inputs. So those were the different types of artificial neural networks. Now let's look at our next question, which is: what are Bayesian networks? A Bayesian network is a statistical model that represents a set of variables and their conditional dependencies in the form of a directed acyclic graph. On the occurrence of an event, a Bayesian network can be used to predict the likelihood that any one of several possible known causes was a contributing factor. For example, a Bayesian network could be used to study the relationship between diseases and symptoms: given a set of symptoms, the network can be used to find the probability of the presence of various diseases.
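Circling back to the "how deep learning works" question: here's a minimal NumPy sketch of a single perceptron, i.e. a weighted sum of inputs passed through an activation function. The weight, bias, and sigmoid-activation values are illustrative choices, not the only possibility.

```python
import numpy as np

def sigmoid(z):
    # Squashes the weighted sum into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w, b):
    # Each input x[i] is multiplied by its weight w[i], summed with a bias,
    # and the result is passed through the activation function.
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, 0.3, 0.2])   # inputs
w = np.array([0.4, -0.6, 0.9])  # one weight per input (illustrative values)
b = 0.1                         # bias term

print(perceptron(x, w, b))      # a single output between 0 and 1
```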
Alright, so the next question is: explain the assessment that is used to test the intelligence of a machine. Now guys, this is a very common question, and it's sort of a general-knowledge question, so I'm hoping most of you know the answer. I'm not sure how many of you have heard of Alan Turing; Alan Turing was the one who came up with the Turing test. This test is meant to determine whether or not a computer is capable of thinking like a human being. If a machine passes this test, it means the machine can behave like an artificially intelligent one: it can interpret data and form its own conclusions about that data. Now, sadly, I don't think there are a lot of machines that have passed the Turing test; in fact, I'm not sure whether any machine has convincingly passed it as of now. But in the near future, I'm sure we'll see machines that are much smarter than human beings and that have passed this test. For a machine, it might be very easy to do computations, but hard to just get up and walk around. The simple things that we humans can do are very complicated for a machine: computations that would take us maybe a year, a machine can do in a week or less, but doing simple things like walking up to the fridge or the kitchen is very hard for machines. So to achieve that level of intelligence is going to take a while; still, in the near future I'm sure we'll see machines that are far more capable. Now let's move on to our next level: here I'll be discussing intermediate-level artificial intelligence questions. The first question is: how does reinforcement learning work? Explain with an example. First of all, reinforcement learning is a type of machine learning; we discussed it earlier. In reinforcement learning there's an agent, and you put this agent in an unknown environment. The agent has to figure out what sort of actions it must take and how it's going to get rewards, so that it can move from state zero to state one. It's sort of like a video game: if you're playing Counter-Strike, you're at level zero, or state zero; if you perform some action and get a reward, you move to state one. That's exactly how reinforcement learning works: if you perform the relevant, correct actions, you get a reward and move on to the next state, but if you perform a wrong action, you get a negative reward and stay in the same state until you learn. So a reinforcement learning system has two main components: an agent and an environment. I've been repeatedly saying "agent"; the agent is basically the reinforcement learning algorithm itself, the model. The model has to learn everything on its own; it has to collect data on its own.
It has to draw useful insights on its own, too. You're not going to feed any predefined data to this reinforcement learning agent; it has to figure out everything by itself. So, let's look at the Counter-Strike example. I'm not sure how many of you play the game, but what happens here is that the reinforcement learning agent, player one, observes a state s0 from the environment. Suppose you're playing Counter-Strike and you're in state s0. You'll perform some action a0; initially it's going to be a random action, because obviously, if you're put in an unknown environment, your first action is going to be random, since you don't yet know what's right or wrong. So in state s0 you take an action a0; this results in a new state s1, and on reaching state s1 the agent gets a reward r1 from the environment. In Counter-Strike, if you've observed, whenever you win a round or pass a level you get some rewards: maybe more weapons, maybe more points. Just like that, in a reinforcement learning problem you get some reward r1; it can be a negative reward or a positive reward based on the action that you take. Now, this loop goes on until the agent is dead or it reaches the destination. So in Counter-Strike, until you've failed the level, you keep playing the game: you keep moving from state one to state two to state three and so on, and if you've reached the destination, that's the end of the game. That's exactly how it works in reinforcement learning: once the agent has explored the entire environment and reached the end state, the loop ends. It is very similar to the games that we play, and a small sketch of this agent-environment loop follows below.
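Here's a minimal sketch of that state-action-reward loop, with a tabular Q-learning update thrown in, since the earlier question asked about Q-learning. The toy three-state "level" and all of its reward numbers are made up purely for illustration.

```python
import random

# Toy environment: states 0, 1, 2 form a "level"; state 2 is the goal.
# Action 0 is a "good move" (advance, reward +1); action 1 is a "bad move"
# (stay put, reward -1). All of these numbers are invented for the example.
N_STATES, N_ACTIONS, GOAL = 3, 2, 2
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # Q-table, initially all zeros
alpha, gamma, epsilon = 0.5, 0.9, 0.2             # learning rate, discount, exploration

def step(state, action):
    if action == 0:
        return min(state + 1, GOAL), 1.0  # advance to the next state, positive reward
    return state, -1.0                    # wrong action: negative reward, same state

for episode in range(200):
    s = 0                                 # every episode starts at state 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda i: Q[s][i])
        s2, r = step(s, a)
        # Q-learning update: move Q(s,a) toward reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # action 0 ends up with the higher value in states 0 and 1
```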
Now let's move on and discuss the next question, which is: explain the Markov decision process with an example. The solution to a reinforcement learning problem is modeled through the Markov decision process, which is a mathematical approach for mapping out solutions in reinforcement learning. To understand this, note that a Markov decision process has a few components: a set of actions A, a set of states S, a reward R, a policy, and a value. To sum it up, what happens in a Markov decision process is that the agent takes an action to transition from the start state toward the end state, and while doing so it receives some reward R for each action it takes. The series of actions taken by the agent defines a policy, or an approach, and the rewards collected define the value. The main goal in a Markov decision process is to maximize the rewards by choosing the optimal policy, meaning you choose the best path, the best solution, in order to collect the most reward. Now, to make you all understand this better, let's solve the shortest path problem using the Markov decision process. I'm sure all of you have heard of the shortest path problem; I think this was taught to us in the 11th or 12th grade. Look at the diagram over here: it is a representation of our problem, and given this representation, our goal is to find the shortest path between node A and node D. You can see the nodes A, B, C, and D, and each link between two nodes has a number on it; for example, between A and C you can see the number 15. This denotes the cost to traverse that edge, so if you go from A to C, you spend 15 points. Our end goal is to travel from node A to node D with the minimal possible cost. Now, notice that in this problem we have a set of states, denoted by the nodes A, B, C, and D; like I mentioned earlier, a Markov decision process has a set of states, and here the set of states is {A, B, C, D}. An action is traversing from one node to another, so going from A to B is one action, A to C is another, and so on. The reward is represented by the cost on each of these links, and the policy is the path taken to reach the destination. So our aim is to choose a policy that gets us to node D at the minimum possible cost. How do you think you can solve this problem? You start off at node A and take baby steps toward your destination; initially, only the next possible nodes are visible to you. Like I mentioned earlier, the initial action in a reinforcement learning problem is always random, so at random you'll choose some node: let's say you go from A to B. From B you can go to D, and you reach the destination. So the policy is the path taken to reach the destination: it could be A to B to D, or it could be A to C to B to D, and so on. Now it's up to you to figure out which is the shortest path; you have to choose a path such that the cost from A to D is minimized. So guys, this was a simple example of how the Markov decision process can be used to solve the shortest path problem, and a small sketch of this idea follows below.
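Here's a minimal sketch of that idea: treat each path from A to D as a candidate policy, score it by its total traversal cost, and pick the cheapest. The edge costs below, other than the A-C cost of 15 mentioned in the example, are made-up values, since the transcript doesn't give the full diagram.

```python
# Graph from the example; only the A-C cost of 15 comes from the transcript,
# the other edge costs are invented for illustration.
costs = {("A", "B"): 5, ("A", "C"): 15, ("B", "C"): 4,
         ("B", "D"): 12, ("C", "D"): 6}

def neighbors(node):
    return [b for (a, b) in costs if a == node]

def all_paths(node, goal, path):
    # Enumerate every candidate "policy" (path) from node to goal.
    if node == goal:
        yield path
    for nxt in neighbors(node):
        if nxt not in path:  # avoid cycles
            yield from all_paths(nxt, goal, path + [nxt])

def path_cost(path):
    return sum(costs[(a, b)] for a, b in zip(path, path[1:]))

# Pick the policy with the minimal total cost (i.e. the maximal reward).
best = min(all_paths("A", "D", ["A"]), key=path_cost)
print(best, path_cost(best))  # with these made-up costs: ['A', 'B', 'C', 'D'], 15
```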
Now let's move on and look at our next question: explain reward maximization in reinforcement learning. A reinforcement learning agent works on the principle of reward maximization; as I told you in the previous question, the main aim of reinforcement learning is to maximize the reward. That's why a reinforcement learning agent must be trained in such a way that it takes the best actions, so that the reward is maximal: it has to choose the best policy, the one under which the reward is maximum. Let me explain this with a small game. In the figure you can see a fox, some meat, and a tiger. Our reinforcement learning agent is the fox, and its end goal is to eat the maximum amount of meat before being eaten by the tiger. It has to explore around and eat as much meat as it can before the tiger kills it. Since the fox is a clever fellow, it eats the meat that is closer to it rather than the meat that is close to the tiger, because the closer it gets to the tiger, the higher its chances of getting killed. As a result, the rewards near the tiger, even if they are bigger meat chunks, are discounted: because the fox does not go closer to the tiger to eat those chunks, their reward gets discounted. Now, I know you're all wondering what "discounted" means here. Discounting is done because of the uncertainty factor, the chance that the tiger might kill the fox. So what is discounting of reward, and how does it work? To capture this, we define a discount rate called gamma; it's a parameter whose value always ranges between 0 and 1, and the smaller the gamma, the larger the discount (a short sketch of this follows below). So guys, that was reward maximization: the fox tries to get as many meat chunks as it can while avoiding getting killed, because getting killed would end the reinforcement learning loop. We also discussed the discount factor; that isn't strictly needed to understand reward maximization, but I thought I'd add some extra info. Now let's look at the next question: what is the exploitation and exploration trade-off? Exploration, like the name suggests, is about exploring the environment and capturing more information about it. Exploitation, on the other hand, is about using the already-known information to heighten the rewards. Consider the same example we discussed in the previous question: the fox only eats the meat chunks that are close to it, and it does not go for the bigger chunks, because even though the bigger chunks would give it more reward, going near the tiger could get it killed. If the fox only focuses on the closest rewards, it will never reach the big chunks of meat; this is what exploitation is: sticking only to the information you already have and trying to get the most reward out of it. But if the fox decides to explore a bit, it can find the bigger rewards, the big chunks of meat near the tiger; that is exactly what exploration is. So that was exploitation and exploration.
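To make the discount rate concrete, here's a tiny sketch of a discounted return: each future reward is multiplied by gamma raised to the number of steps away it is, so a smaller gamma discounts distant rewards more heavily. The reward values are made up.

```python
def discounted_return(rewards, gamma):
    # G = r0 + gamma*r1 + gamma^2*r2 + ... ; gamma must lie between 0 and 1.
    return sum(r * gamma**t for t, r in enumerate(rewards))

rewards = [1, 1, 1, 10]  # made-up: a big meat chunk sits 3 steps away, near the tiger

print(discounted_return(rewards, 0.9))  # 10.0  -> the distant chunk still counts a lot
print(discounted_return(rewards, 0.1))  # ~1.12 -> a small gamma almost ignores it
```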
Now let's move on to the next question, which is a difference question: what is the difference between parametric and non-parametric models? First of all, guys, what are parameters? Parameters are basically the quantities, tied to the predictor variables, that are used to build a machine learning model or any predictive analytics model. Now that we know what parameters are, let's understand the difference between a parametric and a non-parametric model. A parametric model uses a fixed number of parameters to build the model, whereas a non-parametric model uses a flexible number of parameters. When it comes to a parametric model, the assumptions about the data are very strong; in a non-parametric model, there are fewer assumptions about the data. Because a parametric model has a fixed number of parameters, everything is defined up front: you know what sort of variables you'll need to predict the outcome, so the computation is quite fast. With non-parametric models, you do not have a fixed set of predictor variables leading to the outcome, and there can be a lot of parameters taken into account, so the computation is a bit slower. Parametric models also require less data, whereas non-parametric models require more data. Examples of parametric models include logistic regression and Naive Bayes; for non-parametric models we have KNN and decision tree models. Logistic regression and Naive Bayes are quite rigid models, because they have a fixed number of parameters, or a fixed number of predictor variables, and they give you an immediate output. With non-parametric models such as decision trees and KNN, you might even observe a little bit of overfitting; this happens partly because there are fewer assumptions about the data and the parameters are not fixed. That's not the only reason for overfitting, but it's seen that with some non-parametric models, overfitting occurs more often. Now let's discuss the next question: what is the difference between hyperparameters and model parameters? Model parameters are the quantities related to the predictor variables I was speaking about earlier: they are features of the training data that the model learns on its own during training. Model hyperparameters, on the other hand, are the parameters that determine the training process itself. Let's say you want to determine the height of an individual depending on their weight: the coefficients the model learns relating height to weight are your model parameters, but the learning rate, the rate at which your model learns this correlation between height and weight, is a hyperparameter. So this is the difference: model parameters are the values the model finds from your data, the quantities used to predict your outcomes, while hyperparameters define your training process. Another difference is that model parameters are internal to the model and their values can be estimated from the data, whereas hyperparameters are external to the model and their values cannot be estimated from the data. Like I said, model parameters are derived from your data itself, and hyperparameters are the ones you define in order to train on your entire data. A small sketch of this distinction follows below.
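Here's a minimal scikit-learn sketch of that split: the regularization strength `alpha` is a hyperparameter we set before training (for a neural network, the learning rate would play this kind of role), while the learned coefficient and intercept are the model parameters estimated from the data. The tiny weight-to-height data set is made up.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Made-up training data: weight in kg -> height in cm.
X = np.array([[55.0], [70.0], [85.0], [100.0]])
y = np.array([160.0, 172.0, 180.0, 188.0])

# Hyperparameter: chosen by us before training, never learned from the data.
model = Ridge(alpha=1.0)
model.fit(X, y)

# Model parameters: estimated from the data during training.
print("coefficient:", model.coef_, "intercept:", model.intercept_)
```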
So the next question is: what are hyperparameters in deep neural networks? Like I mentioned in the previous example, hyperparameters are variables such as the learning rate, and they define how your whole training process goes. For those of you who don't know, in order to build a model you first need to train it and then test it. While training the model, you're making it learn a lot of things: you give it a lot of data, and it has to figure out the relations between the various variables and how those variables affect the output. All of this training depends on a few settings, such as the learning rate; these are what we call hyperparameters in deep neural networks. Hyperparameters also cover structural choices such as the number of hidden layers and the number of units in each hidden layer. Making the network deeper or wider can make it more expressive, but if you have too few hidden units you may cause underfitting in your data, and underfitting also results in inaccurate predictions. That's why you need to make sure the number of hidden units in your hidden layers is well chosen, and this is determined by your hyperparameters; that's why hyperparameters are so important in deep neural networks. I hope you're all clear with this. Now let's look at our next question: explain the different algorithms used for hyperparameter optimization. We'll discuss three methods: grid search, random search, and Bayesian optimization. Grid search trains the network on every combination of a predefined set of hyperparameter values, for example a set of learning rates and a set of layer counts, and then evaluates each model's efficiency using cross-validation techniques. Cross-validation is the standard way to check whether your model is close to optimal. Then we have random search: this randomly samples hyperparameter settings from a given probability distribution and evaluates them. In random search there is no fixed grid of hyperparameter combinations to evaluate; for example, instead of checking all of, say, 10,000 possible combinations, it can randomly select 100 of them to check and use those to pick the model. After that, we have Bayesian optimization. Bayesian optimization makes use of something known as a Gaussian process, which guides the model tuning, or parameter tuning: it suggests which hyperparameter settings to try next, tweaking the parameters a little bit in order to improve the efficiency of the model. So Bayesian optimization uses the Gaussian process to steer the tuning of your algorithm and thereby improve the efficiency. Now guys, one of the most important ways to improve the efficiency of a model is hyperparameter optimization: if you tune your hyperparameters and check which settings give you the most accurate outcome, your results will be very good. A small grid-versus-random-search sketch follows below.
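Here's a minimal scikit-learn sketch of grid search versus random search over hyperparameters, using cross-validation to score each candidate. The KNN model and the parameter ranges are arbitrary illustrative choices.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Grid search: tries EVERY combination in the grid (5 x 2 = 10 candidates here).
grid = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [1, 3, 5, 7, 9], "weights": ["uniform", "distance"]},
    cv=5,  # 5-fold cross-validation scores each candidate
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)

# Random search: samples a fixed number of candidates instead of trying them all.
rand = RandomizedSearchCV(
    KNeighborsClassifier(),
    param_distributions={"n_neighbors": range(1, 30), "weights": ["uniform", "distance"]},
    n_iter=10, cv=5, random_state=0,
)
rand.fit(X, y)
print(rand.best_params_, rand.best_score_)
```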
The next question is: how does data overfitting occur, and how can it be fixed? Now guys, this is a very common question in a machine learning or artificial intelligence interview; people expect you to understand what data overfitting is and how to fix it, because overfitting occurs pretty often, especially when you're using decision trees (random forests, in contrast, actually reduce overfitting, though even complex models can overfit). To answer this question, first let's understand what overfitting really is. Overfitting occurs when a machine learning algorithm captures the noise in the data; this causes the algorithm to show low bias but high variance in the outcome. What overfitting really means is that you have trained your model too many times on the training data, so the model has essentially memorized the training data, including the noise in it. So when you feed new data to the model during the testing stage, it can't generalize: it was fitted to the noise and the particular correlations of the training set, so it won't produce a proper outcome on unseen data. You have trained the model way too much on the training data, and this results in inaccurate outcomes during the testing phase. That's what overfitting is about. Now, how do you avoid overfitting? First of all, cross-validation. I mentioned before that cross-validation is one of the best ways to obtain a more reliable picture of your model's performance. The general idea behind cross-validation is to split the training data in order to generate multiple mini train-test splits, and these splits can then be used to tune your model. You're splitting the training data in such a way that the model doesn't just see the entire training set and memorize it; instead, it is fitted and evaluated on different subsets of the training and testing data and learns from that. Cross-validation is one of the best ways to prevent overfitting. Another method is training the model with more data: feeding more data to the machine learning model helps it analyze and classify better. This method is not always going to work, but it is one of the ways to reduce overfitting. Next, we have removing features. Many times the data set contains irrelevant features, predictor variables that are not needed for the analysis; such features only increase the complexity of the model and therefore the possibility of data overfitting. For example, if you're trying to predict the weight of a person from their height, and you also have a variable such as the name of the person, the name is not relevant to that prediction. An irrelevant predictor variable like that only increases the complexity of the model; it does not help the model in any way. So make sure you remove irrelevant and redundant features. The next method is early stopping. A machine learning model is trained iteratively, which allows us to check how well each iteration of the model performs. But after a certain number of iterations, the model's performance starts to saturate, and further training only results in overfitting. So you have to understand where to stop training the model, and this can be achieved with a mechanism known as early stopping: at that point you know you have to stop, because training further might result in overfitting. Next, regularization is one of the most common ways to prevent overfitting. Regularization can be done in a number of ways, and the method always depends on the type of learner you're implementing: for example, pruning is performed on decision trees (pruning is a type of regularization), the dropout technique can be used on neural networks, and other methods such as parameter tuning can also help to reduce overfitting. A short cross-validation sketch follows below.
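As a minimal illustration of the cross-validation idea, here's a scikit-learn sketch that scores an overfit-prone model (a fully grown decision tree) across five train-test splits; the specific data set and model are just convenient stand-ins.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

tree = DecisionTreeClassifier(random_state=0)  # unpruned trees tend to overfit

# Training accuracy alone looks perfect: the tree can memorize the data.
print(tree.fit(X, y).score(X, y))              # -> 1.0

# Cross-validation tells the honest story: 5 different train/test splits.
scores = cross_val_score(tree, X, y, cv=5)
print(scores, scores.mean())                   # typically a bit lower than 1.0
```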
The next way to prevent overfitting is by using ensemble models. Ensemble learning is a technique where you create multiple machine learning models and then combine them to produce more accurate results. So for one problem statement, you might use, say, five to ten different models and then combine their outputs, for example by averaging the result from each of these models; this way you reduce overfitting. Ensemble models are one of the best ways to prevent overfitting, and an example is the random forest: a random forest uses an ensemble of decision trees to make more accurate predictions and to avoid overfitting. A random forest is basically a set of decision trees; you train the model using several decision trees, each built on a different sample of the data, and this reduces overfitting to a very large extent. That's why, in most cases, when a decision tree has overfitting issues, you'll be asked to use a random forest instead. So guys, those were the different ways to prevent overfitting. Now, the next question is: mention a technique that helps to avoid overfitting in a neural network. The most famous method to prevent overfitting in neural networks is the dropout technique. Dropout is a type of regularization technique used to avoid overfitting in a neural network: you randomly select neurons and drop them during the training phase. The dropout value has to be chosen very carefully, because too high a dropout rate will cause the network to under-learn: if you're dropping too many neurons, the model won't learn enough. But if the dropout rate is too low, it will have only a minimal effect. So make sure your dropout value is well tuned for the problem you're trying to solve. Dropout, then, is the technique used to avoid overfitting in a neural network (a small Keras sketch follows below). The next question is: what is the purpose of deep learning frameworks such as Keras, TensorFlow, and PyTorch? Keras is an open-source neural network library written in Python, designed to enable fast experimentation with deep neural networks. TensorFlow is another open-source software library, for dataflow programming, and it is used mainly in machine learning applications. Similarly, PyTorch is again an open-source machine learning library for Python, with major applications in fields such as natural language processing. I'd say these three deep learning frameworks are the most important when it comes to machine learning and deep learning, because they provide a varied set of functions which help in building better machine learning models and deep learning networks.
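Here's a minimal Keras sketch of dropout in practice; the 0.3 rate and the layer sizes are arbitrary example values, and in a real project you'd tune the rate as discussed above.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),  # randomly zeroes 30% of these units each training step
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),  # dropout is only active during training, not inference
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```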
Now let's look at question number 24, which is: differentiate between NLP and text mining. NLP stands for natural language processing, for those of you who don't know. First of all, let me clear up a confusion between text mining and natural language processing: a lot of people tend to think they're the same thing, but text mining is the broader field, and NLP is basically an application of text mining, or a technique used within it. The aim of text mining is to extract useful insights from structured and unstructured text, whereas the aim of NLP is to understand what is actually conveyed in that text. Text mining can be done using text-processing languages like Perl, while NLP can be achieved using advanced machine learning models such as deep neural networks. The outcome of text mining is things like word frequencies: you calculate how often words occur, you find the correlations between pairs of words, and you see which words occur together more frequently and why. So text mining gives you a better understanding of the words used in a document, whereas with NLP you get at the grammar behind the text; you understand, in more depth, the language used in the document or in whatever you're trying to analyze. That is the difference between NLP and text mining: NLP is the more advanced field, because you use deep neural networks to perform it, while text mining makes use of NLP as one of its techniques. The next question is: what are the different components of NLP? There are two components of natural language processing: natural language understanding and natural language generation. In natural language understanding, you map the input to some useful representation; this means you try to understand the structure and meaning of the language, and it also includes analyzing different aspects of the language. So this component is mostly about understanding and analyzing the text you have at hand and deriving something useful out of it. Natural language generation, on the other hand, is about producing text from an underlying plan: it involves sentence planning and text realization, so it is focused on the planning and generation aspect of the text. These are the two components of natural language processing. Now let's look at what stemming and lemmatization are in natural language processing. What is stemming? It is an algorithm that works by cutting off the end or the beginning of a word, taking into account a list of common prefixes and suffixes that can be found in inflected words. For example, on the screen you can see "detections", "detected", "detection", and "detecting"; if you apply stemming to these four words, they all reduce to "detect", because at the end of the day they all carry the same root meaning as "detect". So stemming helps you strip away these prefixes and suffixes, and that way you can analyze the importance of the word itself. But sometimes, cutting off the ends of words like this produces an inaccurate result; that's why we have lemmatization. In lemmatization, the most important thing is the morphological analysis of the word: to perform lemmatization, the algorithm needs detailed dictionaries it can look through in order to map the word back to its lemma. So the main difference between stemming and lemmatization is that stemming just crops the prefix or suffix, whereas lemmatization tries to understand the word grammatically and gives you an actual dictionary word as the output; a small sketch of both follows below.
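Here's a minimal NLTK sketch of both operations on the "detect" family of words from the example, plus "better", a case where only the lemmatizer recovers the dictionary form. It assumes the NLTK corpora have been downloaded as shown.

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("wordnet")   # one-time download of the dictionary the lemmatizer needs
nltk.download("omw-1.4")   # needed by some NLTK versions for the same purpose

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ["detections", "detected", "detection", "detecting"]:
    print(word, "->", stemmer.stem(word))
# All four crop down to "detect": the stemmer just strips affixes.

# Lemmatization consults a dictionary instead of cropping, so it can map
# irregular forms to a real word: "better" -> "good" when treated as an adjective.
print(lemmatizer.lemmatize("better", pos="a"))
```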
Next, let's explain the fuzzy logic architecture. The fuzzy logic architecture looks like what is shown on the screen. The input is fed into something known as the fuzzifier: the fuzzifier, or fuzzification module, transforms the system's inputs into a number of fuzzy sets. After that, it's fed to the controller. The controller contains the knowledge base and the inference engine. The knowledge base is basically a set of rules, you can say an algorithm, provided by experts; the inference engine, like the name suggests, infers meaning by applying these rules. So once you've applied the rules to your input, you have to draw useful insights, or inferences, from them, and for that you use the inference engine. After that, whatever inferences and analysis the inference engine has formed are passed on to the defuzzification module. Defuzzification gives you a crisp output: a clear-cut result. That is the whole fuzzy logic architecture. Now let's understand the components of an expert system. There are three important components in an expert system: the knowledge base, the inference engine, and the user interface. Like I mentioned for fuzzy logic, the knowledge base and inference engine play the same kind of part here. The user interface is there to provide interaction between the users of the expert system and the expert system itself; an expert system is basically a program that helps in a decision-making process. So here, the knowledge base contains high-quality knowledge in the form of rules and algorithms; the inference engine acquires and applies the knowledge needed to solve the problem; and the user interface is just for the users to interact with the expert system. Those are the components of an expert system. Obviously it's a little more complex than this, but let's stick to the working: if I started to explain each and every detail of expert systems and fuzzy logic, it would take a lot of time. So let's move on to our next question, which is: how are computer vision and AI related? Computer vision is a field of artificial intelligence that is used to obtain information from images or multi-dimensional data. Computer vision is one of the key concepts behind the self-driving cars you see these days, and it involves a lot of image processing. Machine learning algorithms like K-means can be used for image segmentation, and support vector machines can be used for image classification (a small K-means segmentation sketch follows below). That's how computer vision and AI are related: most of what happens in computer vision, like image processing and segmentation, makes use of machine learning algorithms such as K-means and support vector machines. To sum it up, computer vision makes use of artificial intelligence technologies to solve complex problems such as object detection and image processing; that is the relationship between computer vision and AI.
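Here's a minimal sketch of K-means used for image segmentation, in the spirit of the computer vision answer above: pixels are treated as points in color space and clustered. The random "image" stands in for a real photo so the snippet stays self-contained.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for a real photo: a 64x64 RGB image of random pixels.
image = np.random.rand(64, 64, 3)

# Treat every pixel as a 3-D point (R, G, B) and cluster into 4 color groups.
pixels = image.reshape(-1, 3)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)

# Replace each pixel by its cluster's mean color -> a segmented image.
segmented = km.cluster_centers_[km.labels_].reshape(image.shape)
print(segmented.shape)  # (64, 64, 3), but now with only 4 distinct colors
```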
Now, question number 30 is: which is better for image classification, supervised or unsupervised classification? So guys, earlier in the session we discussed what supervised learning is and what unsupervised learning is. In supervised classification, the images are interpreted manually by the machine learning expert to create feature classes; what this means is that you manually feed a labeled set of data to the supervised learning model. That's how supervised classification works: you manually feed a set of labeled images to the classifier. In unsupervised classification, the machine learning software creates feature classes based on image pixel values: the model itself has to figure out what to do and what not to do, so it creates its own feature classes based on values such as image pixels, or it can also use image color or intensity factors in order to classify. So if you ask me, it is better to opt for supervised classification, because you're manually inputting images with a lot more information, whereas in unsupervised classification you're letting the model figure out everything on its own. So for image classification, I think it's better to go for supervised learning. And with this, we have come to the end of this full course on AI for Business. If you enjoyed listening to this full course, please be kind enough to like it, and you can comment with any of your doubts and queries; we will reply to them at the earliest. Do look out for more videos and playlists, and subscribe to Edureka's YouTube channel to learn more. Thank you for watching, and happy learning.

---
*Source: https://ekstraktznaniy.ru/video/29632*