Andrew Ng’s 3 Week Intro AI Course in 25 Minutes | Deep Learning AI
24:31


Tina Huang · 22.10.2024 · 76,780 views · 2,619 likes · updated 18.02.2026
Video description
Check out https://bit.ly/genai_everyone for a lil’ cheatsheet! You can also try Posit Connect for free for three months by visiting https://pos.it/shiny-tina

Want to get ahead in your career using AI? Join the waitlist for my AI Agent Bootcamp: https://www.lonelyoctopus.com/ai-agent-bootcamp

This video is a speed run of Deep Learning AI Andrew Ng's Generative AI For Everyone course.

🤝 Business Inquiries: https://tally.so/r/mRDV99

🖱️ Links mentioned in video

🔗 Affiliates
My SQL for data science interviews course (10 full interviews): https://365datascience.com/learn-sql-for-data-science-interviews/
365 Data Science: https://365datascience.pxf.io/WD0za3 (link for 57% discount for their complete data science training)
Check out StrataScratch for data science interview prep: https://stratascratch.com/?via=tina

🎥 My filming setup
📷 camera: https://amzn.to/3LHbi7N
🎤 mic: https://amzn.to/3LqoFJb
🔭 tripod: https://amzn.to/3DkjGHe
💡 lights: https://amzn.to/3LmOhqk

⏰ Timestamps
00:00 intro

📲 Socials
instagram: https://www.instagram.com/hellotinah/
linkedin: https://www.linkedin.com/in/tinaw-h/
discord: https://discord.gg/5mMAtprshX

🎥 Other videos you might be interested in
How I consistently study with a full time job: https://www.youtube.com/watch?v=INymz5VwLmk
How I would learn to code (if I could start over): https://www.youtube.com/watch?v=MHPGeQD8TvI&t=84s

🐈‍⬛🐈‍⬛ About me
Hi, my name is Tina and I'm an ex-Meta data scientist turned internet person!

📧 Contact
youtube: youtube comments are by far the best way to get a response from me!
linkedin: https://www.linkedin.com/in/tinaw-h/
email for business inquiries only: hellotinah@gmail.com

Some links are affiliate links and I may receive a small portion of sales price at no cost to you. I really appreciate your support in helping improve this channel! :)

Contents (1 segment)

  1. 0:00 intro (5073 words)
0:00

intro

This is Andrew Ng's GenAI for Everyone course, and I took it for you. You're welcome. In this video I'm going to cover everything you learn in this course, but a lot faster, by removing all of the fluff. So no more procrastinating: watch this video to get a very good introduction to AI. Let's get straight into it.

Generative AI for Everyone is a very good foundational course that covers three main topics, and they're supposed to be spread over three weeks of learning. The first topic is how generative AI technology works, including what it can and cannot do, plus common use cases. The second topic is generative AI projects. It's a very practical section, which includes identifying and building AI use cases, as well as the technologies you need in order to build a project. And third is the impact on business and society. This topic is more theoretical, presenting how AI is currently shaping, and in the future will shape, the society we live in, including which jobs will be augmented and potentially automated, and how. He also provides some practical tips on how individuals and teams should be using AI and where we should be focusing.

So first off, what is generative AI? Andrew defines generative AI as artificial intelligence systems that can produce high-quality content, specifically text, images, and audio. Generative AI is a subfield of the much larger artificial intelligence field as a whole, and a large part of AI is focused on what is called supervised learning; generative AI actually traces its origins back to supervised learning. Supervised learning is just about providing an input and getting an output, and generative AI, say large language models, is an implementation of this in the context of words. A large language model learns to repeatedly predict the next word. For example, to get the sentence "my favorite food is a bagel with cream cheese," you have an input that is "my favorite food is a," and the predicted next word could be "bagel." That becomes part of your next input, which is "my favorite food is a bagel," and your predicted output is "with," and so on and so forth. When you train an AI system on a lot of data, like hundreds of billions of words and their predictions, you end up with a large language model, like the OpenAI models that power ChatGPT.
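The repeated next-word prediction described above can be sketched as a toy loop. This is purely illustrative: a real LLM predicts over tens of thousands of tokens with a neural network, not a hand-written lookup table.

```python
# Toy illustration of next-word prediction: a lookup table stands in
# for the neural network a real large language model uses.
NEXT_WORD = {
    "a": "bagel",
    "bagel": "with",
    "with": "cream",
    "cream": "cheese",
}

def generate(prompt: str, max_words: int = 10) -> str:
    """Repeatedly predict the next word and append it to the input."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = NEXT_WORD.get(words[-1])  # "predict" the next word
        if nxt is None:                 # no prediction -> stop
            break
        words.append(nxt)               # the output becomes part of the next input
    return " ".join(words)

print(generate("my favorite food is a"))
# -> my favorite food is a bagel with cream cheese
```

The key point of the sketch is the loop itself: each predicted word is appended to the input before the next prediction is made, exactly the repeated-prediction process the course describes.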
LLMs can be really useful as a thought partner. For example, if you don't know which animes you want to watch, you could type in some of your favorite animes and then ask the LLM what other animes it would recommend. They're also great as a writing partner: you can ask one to proofread the things you write, summarize long documents that you don't feel like reading, and even write obscure little things like a tune about trucks to encourage a three-year-old to brush his teeth. Andrew goes into more use cases later, so I'll cover more later in the video as well.

But first, Andrew presents a simple framework that can help us come up with good, practical use cases for AI technology. AI is called a general purpose technology; another example is electricity. What that means is that it's a foundational technology that can have many different use cases. Just like electricity is literally powering almost everything around you, AI is also something that will eventually be part of everything around you. When we consider the use cases of LLMs, we can divide them into two big categories: the first is called web-based, and the second is software-based. Web-based application interfaces would include things like ChatGPT, Bard, or Bing Chat, while software-based applications would include things like email routing and document searching. Many of you watching this video have probably only thought about LLMs as a web-based interactive platform, like talking to ChatGPT through chat, asking it to help you brainstorm, give you summaries, or write things for you. But if you start thinking about them from a software-based application perspective, there is so much more that opens up to you.

For example, you may be working with a lot of review data. Maybe you work at a company that sells a lot of apparel, like t-shirts and hats, and you get a lot of reviews coming in from social media. You can use a large language model to ingest that information, figure out that a given review is, say, a complaint, and automatically route it to a complaints department to deal with. You can also gather insights from the reviews coming in: instead of having to read through all of the reviews yourself, the LLM can give you insights, like maybe you should make a blue-colored shirt, because that's what people are into these days.

Chatbots are also a great example of software-based LLM applications. A lot of companies, I'm sure you've noticed, have some form of chatbot where the bot is able to answer questions for you. This is obviously very useful, because then you don't have to spend a lot of money hiring human representatives and dealing with human problems. When powered by LLMs, these chatbots get a lot better at answering people's personalized questions. If you own a burger joint and you have a good LLM-powered chatbot, it's able to hold a conversation with a customer, take personalized orders, pass them on to the chef, and then provide personalized notifications for when the order will be ready and when it's being delivered. In addition to use cases like chatting with ChatGPT, there is so much more you can do when you start thinking about how to incorporate LLMs from a software perspective.

To figure out what an LLM can or cannot do, a general rule of thumb is to ask yourself: could a fresh college graduate follow the instructions I've provided in the prompt to complete the task? Some general assumptions to keep in mind: one, this college graduate has no access to the internet or other resources; two, they have no specific training on your company and your business; and three, they have no memory of previous tasks they completed. You essentially get a fresh, different college graduate every single time.

You should also keep in mind that each model has a knowledge cutoff date, so if you're asking questions about events after its cutoff date, it's not going to know the answer. For example, if a model's cutoff date is October 5th, 2024, it would have no knowledge that the second season of Blue Lock has just dropped and couldn't provide you any information about that. I am very excited to watch that season after I make this video.

Hallucinations are another consideration, otherwise known as making stuff up. There was a real incident in which a California lawyer used ChatGPT to come up with his argument, and he cited two different court cases that did not actually exist. So that was pretty awkward, and you can see that if this hadn't been caught, it could have led to very serious consequences in court. There are some techniques to reduce hallucinations, and I'll include them in the resource companion of this video, but it is very important to know that this can be a serious problem.

Another consideration is that large language models have limited input and output lengths, so you cannot provide an infinite amount of information in the prompt, and you can't expect an infinite amount of information back either. For example, if you try to feed it a 10,000-page essay and tell it to summarize everything, it's not going to be able to do that; or if you tell it to write your entire book for you, it probably can't do that either.

And finally, generative AI does not work that well with structured, tabular data. For example, if you give it a table and tell it to query things and give you information about it, it's actually not that good at that. But it is very good with unstructured data like text, images, audio, and video; if you ask it for information about those kinds of things, it does very well.
One more thing: bias and toxicity. An LLM can reflect the biases that exist in the text it learned from. If, for example, the training data keeps referring to doctors as "he," then the LLM will want to refer to a doctor as "he" as well; on the other hand, the LLM could have internalized that nurses are often "she," so when told someone is a nurse, it will automatically say "she did this."

Compared to other AI courses, this course's approach to prompting is a lot more conceptual. Instead of giving you a bunch of different patterns to work on and different exercises, it helps you understand how you should think about prompting, and there's a big emphasis on the fact that prompting is an iterative process where you need to build up your prompts and build up your intuition for how to prompt. With that being said, Andrew does have a few tips.

The first is to be detailed and specific: first give the model specific context on what you want it to do, then describe the task in detail, including what you want the end result to look like. Another tip is to guide the model to think through its answers. For example, if you want the LLM to help you write an email asking to be assigned to a new project, you could write just something like "help me write an email asking to be assigned to the legal documents project," but you would get a very generic result. To make it better, you can provide context, such as: "I'm applying for a job on the legal documents project, which will check legal documents using LLMs. I have ample experience prompting LLMs to generate accurate text in a professional tone." Then you can give it the exact task in detail: "Write a paragraph of text explaining why my background makes me a strong candidate for this project, and advocate for my candidacy." The result is an email that's a lot more tailored, with a lot more nuance.

Remember earlier how Andrew said that you should think about the LLM as a college graduate that has the abilities but doesn't have any specific knowledge or information about whatever it is you're trying to do? Since the LLM has generalized abilities but no specific information or details, when you have a task to give it, it's best to break it down into subtasks so it's easier for the LLM to follow and perform each step correctly. For example, say you want to brainstorm five new names for a cat toy that you're making. You can split this task into three different steps. Step one is to generate five fun, joyful words that relate to cats, and the LLM will come up with things like "purr," "whisker," and "feline." Step two is to ask it to generate rhyming names for a toy based on those words, and step three would be to add a fun, relevant emoji to each toy name. Does that make sense? You're guiding it step by step to reach the end result you're looking for, as opposed to just saying "come up with cool cat names for me."
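The subtask decomposition above can be sketched programmatically. `call_llm` is a hypothetical stand-in for whatever chat-completion API you use; here it's stubbed with canned responses so the step-by-step chaining is visible.

```python
# Sketch of chaining subtask prompts. call_llm is a hypothetical wrapper
# around a chat-completion API; the stub below returns canned answers so
# the chaining logic can run on its own.
def call_llm(prompt: str) -> str:
    if "joyful words" in prompt:
        return "purr, whisker, feline, pounce, meow"
    return "Purr Stirrer, Whisker Frisker, Feline Beeline"

# Step 1: brainstorm cat-related words.
words = call_llm("Generate five fun, joyful words that relate to cats.")

# Step 2: feed step 1's output into the next prompt to get rhyming names.
names = call_llm(f"Suggest rhyming names for a cat toy based on: {words}")

# Step 3 (adding a fun emoji to each name) would follow the same pattern.
print(names)
```

The design point is that each step's output becomes part of the next step's prompt, which is what makes the decomposed version easier for the model to follow than one monolithic request.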
And the third tip is to experiment and iterate. Andrew explicitly says that he doesn't really like it when companies or resources just say "here are the 10 perfect prompts to achieve this task," and the reason is that prompting is a skill set, as opposed to memorizing prompts themselves. I guess it's similar to learning to code: instead of trying to memorize syntax and other people's chunks of code, it's much better to understand how to write good code yourself. Just write the prompt, check the results, and then iteratively build on that. His process is: one, be clear and specific in your prompt; two, think about why the result isn't giving the desired output; three, refine your prompt; and four, repeat the process until you get the desired output. A couple of caveats: be careful about confidential information, and about whether you trust the LLM's output or not. Don't be like that lawyer from before; if it's something high-stakes, don't rely on the LLM without making sure it's giving you a correct answer.

All right, let's now move on to generative AI projects. This portion of the video is brought to you by Posit, whose mission is to create open-source software for data science, scientific research, and technical communication. They build a lot of cool open-source, free products that I've talked about a lot in the past, including Shiny, to quickly and easily deploy interactive web applications, and Quarto, which is great for programmatically creating documents, showcasing your projects and dashboards, and displaying all sorts of information in an organized fashion. I'm actually making a companion resource for this video using Quarto, since I'm going to be sharing a lot of information in this video; it's linked in the description. Anyways, another product I'd like to highlight from Posit is Posit Connect for enterprise. A very common challenge, especially if you work with proprietary or sensitive information and strict IT requirements, is that after you build a project, you then have to share it with other people, but in a way that keeps the data and the analytics secure. That is where Posit Connect comes in: it helps you deploy, share, and manage all of the analytics products that you and your team build, all in one secure, scalable, and easy-to-manage location. With Posit Connect you can rapidly and securely deploy what you build in Python with Streamlit, Dash, Bokeh, FastAPI, Shiny, Flask, Quarto reports, and APIs. It comes with all the bells and whistles to support IT and enterprise requirements. You can try Posit Connect for free for three months with this link over here, also linked in the description. Now, back to the video.
Remember that simple framework we talked about, with interactive web applications, like directly talking to ChatGPT, versus software-based applications powered by large language models? This entire section of the course is focused on those software-based applications, which I think, from Andrew's perspective, are the next step beyond just interacting directly with the web interface. In the course itself he includes a couple of optional sections where you can go in and actually play around with the code. I think he's trying to show that it's not that hard to actually build an application that's powered by LLMs.

So, a CliffsNotes version of traditional AI models and their applications versus LLM-powered, prompt-based applications. When you have a traditional supervised learning AI application, you need a lot of data to feed the AI. For example, if you're looking at restaurant reviews, you need labeled data that has a review and then an output labeling it as positive or negative: "best soup dumplings I've ever eaten" you'd have to label as a positive output, while "not worth the 3-month wait for a reservation" would be labeled as negative. So you need all of this data to train your AI, and the training itself is also a huge pain and quite expensive, just to create a model that can do that. On the other hand, with prompt-based development, all you have to do is programmatically prompt the LLM, like "classify the following review as having either a positive or negative sentiment," and then append the review, say "the banana pudding was really tasty." You get back the response, which is that it's a positive review. That's it: no need for labeled data, no need to feed it anything, no need to train anything.
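The prompt-based classification idea can be sketched as below. `call_llm` is again a hypothetical stand-in for a chat-completion API; it's stubbed here with a trivial keyword heuristic so the sketch runs, which is emphatically not how a real LLM works.

```python
# Sketch of prompt-based development: sentiment classification via a
# prompt instead of a trained supervised model. call_llm is a
# hypothetical API wrapper, stubbed with a keyword heuristic.
def call_llm(prompt: str) -> str:
    positive_words = ("tasty", "best", "great", "delicious")
    text = prompt.lower()
    return "positive" if any(w in text for w in positive_words) else "negative"

def classify_review(review: str) -> str:
    prompt = (
        "Classify the following review as having either a positive "
        f"or negative sentiment. Review: {review}"
    )
    return call_llm(prompt)

print(classify_review("The banana pudding was really tasty"))
print(classify_review("Not worth the 3-month wait for a reservation"))
```

Note there is no labeled dataset and no training step anywhere in the sketch; the classification task lives entirely in the prompt string, which is the whole contrast with the supervised workflow.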
So here is a comparison of the workflows: for supervised learning, it takes about a month to get labeled data, another three months to train the AI model on the data, and another three months to deploy and run the model. For prompt-based AI, it takes minutes to hours to specify the prompt, and hours to days to deploy and run the model. There is the caveat, though, that building generative AI products involves a lot of experimentation and a lot of testing to catch mistakes and fix them; since everything is based on prompting, you're also not exactly sure what the results are going to look like.

To improve the results you get from an LLM, there are a few tools that can be used. The first is just better prompting: learn to be a better prompt engineer, essentially. The second is called retrieval augmented generation, otherwise known as RAG. This is a technique where you provide specific information to an LLM that it would otherwise not have known. For example, at a specific company, you might be interested in asking "is there parking for employees?" The LLM is not going to know the answer, because it doesn't know anything about that specific company. But you can provide it company documents, for example a document that includes where employee parking is; then the LLM is able to reference that document to answer your question, and it could tell you that, according to the onboarding policy, employees are supposed to park on parking levels 1 and 2.
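The RAG idea above can be sketched in a few lines: find the most relevant document, then paste it into the prompt so the LLM can answer from it. Real systems use embeddings and a vector store; this sketch uses simple keyword overlap, and `build_prompt` is a hypothetical helper, not any particular library's API.

```python
import re

# Minimal sketch of retrieval augmented generation (RAG): retrieve the
# most relevant document, then stuff it into the prompt as context.
DOCUMENTS = [
    "Onboarding policy: employees are supposed to park on parking levels 1 and 2.",
    "Cafeteria hours: the cafeteria is open from 8am to 6pm on weekdays.",
]

def tokens(text: str) -> set:
    """Lowercased word set, ignoring punctuation and digits."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str) -> str:
    """Pick the document sharing the most words with the question."""
    q = tokens(question)
    return max(DOCUMENTS, key=lambda d: len(q & tokens(d)))

def build_prompt(question: str) -> str:
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Is there parking for employees?"))
```

The prompt that comes out contains the parking policy, so even a model that has never seen this company's documents can answer the question from the supplied context.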
Another technique that researchers and developers use to improve an LLM's results is called fine-tuning. When an LLM is created, it's first pre-trained on a lot of information, like all of the information on the internet. This makes it really good at predicting word after word and develops its general reasoning abilities. But maybe you want it to behave in a very specific way; for example, maybe I want it to speak exactly the way that I speak. This is pretty hard to do purely through prompting, because you could say "talk like Tina," but how exactly does Tina talk? You could start trying to specify it: whenever Tina uses this word, do this; Tina uses the word "um" a lot; Tina likes to use these specific words but not those words; and she likes to throw in random anime jokes. As you can probably tell, the list can get really long, and it's hard to be specific enough. A solution to this is fine-tuning, where you provide additional examples of, say, how Tina speaks, like lots of transcriptions of Tina talking, and the model adjusts itself based on those examples. These don't need to be the billions of words it was pre-trained on; you only need maybe a hundred to a few thousand examples to tweak the model so that it can sound exactly like me. Another reason you might want to use fine-tuning is to give the model specific knowledge. For example, medical notes can look like gibberish to the average viewer, and to your average large language model too; you can fine-tune the model by showing it these medical notes so that it's able to summarize in that style.
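Fine-tuning data is usually just a set of input/output example pairs. Here is a sketch of what such a dataset might look like serialized as JSONL; the example texts and the "prompt"/"completion" field names are illustrative, and the exact schema depends on the provider you fine-tune with.

```python
import json

# Sketch of a fine-tuning dataset: a few input/output example pairs,
# one JSON object per line (JSONL). The examples and field names are
# made up; check your provider's docs for the exact schema.
examples = [
    {"prompt": "Explain what an LLM is.",
     "completion": "Okay so, um, an LLM is basically a next-word predictor..."},
    {"prompt": "Summarize this medical note: pt c/o SOB x3d.",
     "completion": "Patient complains of shortness of breath for three days."},
]

with open("finetune_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

A real dataset would have hundreds to a few thousand of these pairs, which matches the point above: far less data than pre-training, because you're only nudging an already-capable model toward a style or domain.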
All right, the final topic of this course is the impact on business and society. Because AI is a foundational technology, its use is widespread and will grow even more widespread in the future. He gives examples of how people in all different jobs can use it in a variety of ways: a marketer can use it for marketing strategy, a recruiter can use it for summarizing resumés, a computer programmer can use it as a coding companion, et cetera. I really like that the course also addresses the idea of augmentation and automation of jobs. The argument presented is that AI doesn't automate jobs, it automates tasks. So how do we identify automation opportunities? When we think of a job, it's actually made up of a lot of different tasks. If you're a customer service representative, your tasks can include answering inbound phone calls from customers, answering customer chat queries, checking the status of customer orders, keeping records of customer interactions, et cetera. So it's much better to think about AI automation at the level of tasks. You can then go through these tasks and label how likely it is that AI will impact each specific task. For example, for answering customer chat queries, the ability of an AI to do this well is very high, but for answering inbound phone calls from customers, at least right now, the ability of an AI to do that is relatively low. So next time you're thinking "is this job going to be impacted by AI in the future?", think about what tasks comprise that job, then look at those tasks and see whether AI has the ability to automate or impact them. If more of those tasks can be impacted and done by AI, then the likelihood of the entire job being impacted by AI is also higher.

There's also a difference between augmentation and automation. When an AI is augmenting something, it means it's helping a human complete a task. For example, for a customer service agent, augmentation using AI could be the AI recommending a response, with the agent having the final say on whether to edit that response or approve it. Automation, on the other hand, is completing the task automatically, without human intervention; in customer service, an automation would be transcribing and summarizing records of a customer interaction. What's probably going to happen for most tasks is that we'll start off with augmentation and keep a human in the loop, but over time some of these augmentations could turn into automations. If you are an engineer, or someone looking to build an AI product, a good place to start is actually also to go and look at these tasks and see which tasks AI can feasibly do and which also have a high return on investment.
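The task-level analysis above can be sketched as a tiny filter: break a job into tasks, give each a rough score for AI feasibility and business value, and shortlist the candidates. The tasks come from the customer-service example in the course, but the numeric scores and the 0.7/0.6 thresholds here are made up for illustration.

```python
# Illustrative sketch of task-level automation analysis. The feasibility
# and value scores are invented placeholders, not real estimates.
tasks = [
    {"task": "answer inbound phone calls",      "ai_feasibility": 0.2, "value": 0.8},
    {"task": "answer customer chat queries",    "ai_feasibility": 0.9, "value": 0.7},
    {"task": "check order status",              "ai_feasibility": 0.8, "value": 0.5},
    {"task": "summarize customer interactions", "ai_feasibility": 0.9, "value": 0.6},
]

# Shortlist tasks where AI is feasible AND the return is worthwhile.
candidates = [t["task"] for t in tasks
              if t["ai_feasibility"] >= 0.7 and t["value"] >= 0.6]
print(candidates)
```

Under these invented scores, chat queries and interaction summaries make the shortlist while phone calls (low feasibility) and order-status checks (low value) drop out, which mirrors the course's advice to rank tasks, not jobs.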
That's a pretty good place to start. A tip: don't define a job purely by what's considered its main task. For example, people think "a software engineer does programming," and programming may be the most prominent thing a software engineer does, but there's a lot else a software engineer does as well, such as communicating with stakeholders, producing product specs, reviewing other people's code, gathering requirements, and writing documentation. Writing documentation is actually a huge part of a software engineer's job, even though it's not the main thing people think of. So it may be the case that AI could be really useful for a task related to a job, like writing documentation, and it can be very impactful to build an AI tool for that, even though it's not the main task of the job itself. For a lawyer, the main thing may be being part of a court case, but a large part of what a lawyer actually does is referencing and going through a lot of legal documents; if you can create an AI application or tool that helps lawyers get through a lot of documents a lot faster, that could be a very valuable tool to create. And even better than that: if you can identify a task that overlaps across many different professions, the impact of what you create can be even greater.

There's a research study from 2023 by Eloundou et al. which shows that generative AI will overall impact higher-paid jobs more than lower-paid jobs, because most of the impact will be on knowledge workers. A McKinsey report shows that 75% of the total annual impact of generative AI falls on just six functional roles: sales, marketing, product R&D, software engineering for product development, software engineering for corporate IT, and customer operations.

Concern number one is amplifying humanity's worst impulses.
Since LLMs are trained on text written on the internet by humans, the biases and toxicity of human beings are also going to be present in large language models. So it's really important to be aware of that, and that's why having humans in the loop processing information is going to be very helpful to make sure that your large language model isn't simply an extension of bad biases.

Concern number two is job loss. Geoffrey Hinton, who is kind of like the godfather of AI, said in 2016: "If you work as a radiologist, you're like the coyote that's already over the edge of the cliff but hasn't yet looked down, so it doesn't realize there's no ground underneath. People should stop training radiologists now. It's just completely obvious that within five years, deep learning is going to do better than radiologists." However, it's been more than five years now, and guess what: we still have radiologists. Curtis Langlotz, a professor of radiology at Stanford University, countered this by saying that AI won't replace radiologists, but radiologists who use AI will replace radiologists who don't. Therefore, the best way to survive in your industry is to learn AI.

And the final concern of the day, concern number three, is human extinction. There have been examples, Andrew acknowledges, of harm caused by AI, such as unjust sentencing in criminal cases. However, extinction arguments, Andrew argues, are not yet very concrete, and most boil down to the fact that it could happen; there hasn't been any very good evidence to support that it's going to happen anytime soon. Like I said, Andrew himself is clearly an AI enthusiast, but I really appreciate how much effort he puts into doing the research and conveying to us that there are harms and concerns we should be thinking about, and he's calling on people to start addressing these concerns now. He says that humanity has ample experience controlling things more powerful than a single person, like corporations and nations and states, and lots of things that we don't have full control over either but that are nonetheless considered valuable and safe, for example airplanes. And if we look at the real risks to humanity, such as climate change and pandemics, AI will be a key part of the solution.

The next frontier that AI engineers and researchers are looking at is called artificial general intelligence, or AGI. This is the threshold at which an AI can do any intellectual task that a human can. For example, it could learn to drive a car in around 20 hours of practice (apparently that's about how long humans take), it could complete a PhD thesis, it could do all the tasks of a computer programmer. Basically, AGI is when an artificial intelligence truly has generalized intelligence and is able to do anything that a human can do, and companies like OpenAI and Google are all focused on developing the first real artificial general intelligence. We don't know when it's going to happen, or even if it will, but regardless, AI is clearly here to stay. My personal opinion is that, just from the progress that has been made in the past few years, or really even just the past few months, I can't even imagine what things will be like in the next few years. So I think if you're going to learn any new skill set, you should be learning AI.

And with that conclusion, you have also finished this speedrun CliffsNotes version of Generative AI for Everyone by Andrew Ng. I hope this was a good introduction to and summary of Andrew Ng's course, and an introduction to AI. Let me know in the comments if you want me to do more of these speedrun CliffsNotes versions of courses; I will if you're interested in me taking these courses and talking about them. And I will see you guys in the next video or livestream.
