AI on the Edge: Watch This Before Buying a Raspberry Pi HAILO Accelerator

Table of Contents (9 segments)

Segment 1 (00:00 - 05:00)

Hello guys, this is Paul McWhorter with toptechboy.com, and are you ready to rumble? Because I've got a great program for you today. What we're going to be doing is looking at AI on the edge. Sound good? I hope it does. I will need you, of course, to go ahead and pour yourself a nice tall glass of ice cold coffee. That would be straight up black coffee poured over ice. No sugar, no sweeteners, none needed. And as you're pouring your coffee, let me give you a few introductory remarks about what we're going to be doing today. We're going to be taking one of our first looks at AI on the edge. What does AI on the edge mean? It means: what can you do with AI on your desktop without being connected to the internet? What type of AI can you run and develop on your desktop? Now, maybe you would connect to the internet to download something or upload something, but you're not running something like ChatGPT, where the AI model is up on the cloud and you're sitting there just interacting by giving prompts. We're going to be seeing what we can do here locally. What will we be using? We will be using the Raspberry Pi 5. My Raspberry Pi 5 has 8 gigabytes of memory. Probably just about any Raspberry Pi 5 would work for you, but the Pi 4 would probably be a little bit on the slow side. So, we're going to be using the Raspberry Pi 5. Let me go ahead and show you my gear here. So, I will get out of the way. And very important, what I want you to see, once I get my pointer, is that I'm running the kind of bare Pi vibe. I've got a fan on it, but I want you to see I'm not going to be using a GPU accelerator hat. We're just going to go bare bones on the naked Pi and see how many frames per second we can get out of the Raspberry Pi 5. You notice I've got a Pi camera hooked up, connected to this little pan-tilt bracket.
We won't be using the pan-tilt bracket today, but I just kind of have this set up because it's a useful setup as we move forward. If you guys are interested, I can do more classes like this. Okay. Specifically, what are we going to do today? We are going to try to do object detection. Object detection is where you grab a frame from the camera, look and see if there's anything that you can recognize, and if you recognize something, you draw a box around it and label it. Now, there's a lot of different ways that you can do object detection. Today, we're going to be looking at YOLO, which means You Only Look Once. It's a very efficient model for object detection. And because the Pi 5 is on the kind of low-resource side for really doing full-blown artificial intelligence, YOLO11 is a good model for us to see if we can run on the Raspberry Pi 5. Now, it will run, but the real question is: will it run fast enough to be useful? If it's not useful, we can always go look at getting those accelerator cards, the little Hailo accelerator hat that you can put on the Raspberry Pi 5. But today, we want to just see what we can do using nothing but the Pi 5. Okay. So, the first thing that we're going to have to do is install YOLO, which is the You Only Look Once model. And that's going to take a little bit of work here, but never fear. I have it all set up, and you guys can just follow along with me. Okay. So, let me switch over here to my Raspberry Pi 5 and we'll be doing all of that here. Now, this is very important: I am operating on the Raspberry Pi 5 Bookworm operating system. You might be saying, what about Trixie? Why aren't you using Trixie? Well, the problem is that Trixie is a very recent operating system for Raspberry Pi, and the challenge we have is that a lot of the models and dependencies and libraries might not have been ported over to Trixie yet. And so we're going to start with Bookworm.
Now, the other thing that I should show you is that it's a little bit of a finicky installation we're going to do today. And so, I am starting with a fresh Bookworm 64-bit full installation. I'm starting with a fresh SD card, because if you

Segment 2 (05:00 - 10:00)

start with that and you follow my procedure, we will end up at the same place. For an installation like this, sometimes it's good to just have a separate SD card to do your YOLO work and your artificial intelligence work on. Okay, enough of this talking. Let's see if we can jump in and get YOLO11 installed. Now, to keep you from having to look at my little small screen and sitting there trying to type things in, I've got all the commands that I'm going to use today put together for you on the most excellent www.toptechboy.com. You can use this happy little search icon here and search on something like "AI on the edge: install and run YOLO object detection." Now, again, I'm starting with Bookworm and I'm starting with a fresh SD card. So, we're going to come in here, and this Wayland business on the Raspberry Pi doesn't play very well with a lot of our graphics programs. So, the first thing I'm going to need you to do is switch from Wayland to X11. And how do you do that? It's the first step over here, but I'll just type this one in: sudo raspi-config, like that. Okay. Then you need to come down here and go to Advanced Options. Okay. And then you're going to come down to Wayland, and then you want to select X11. Say okay, and it will switch you to X11 and ask you to reboot. I've already done that, and I don't want to reboot, so I'm going to cancel out of here. But you see what you need to do. So that's good. Now we need to update and upgrade to start with. So, we're going to do a sudo apt update. Okay. And that's going pretty fast. It says that's all good. And then you'll do a sudo apt full-upgrade, and then -y. And if you prefer, you can come over here and just copy. You see, I've got each line of what I'm doing, line by line.
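Collected in one place, the setup steps just described look like this (a sketch of the commands as narrated; the Wayland-to-X11 switch itself is done interactively inside the raspi-config menu):

```shell
# Switch the display server from Wayland to X11:
# run raspi-config, then go to Advanced Options > Wayland > X11, and reboot
sudo raspi-config

# Bring the system fully up to date (can take up to ~30 minutes)
sudo apt update
sudo apt full-upgrade -y
```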
Now, I had already upgraded mine, and so I didn't want to wait here 30 minutes for mine to upgrade. So, mine's already done. Now, if you haven't upgraded in a while, it might take anywhere from a few minutes to 30 minutes, but you can just pause the video and let that step get done. Okay, I think I need to move this up a little bit so this window doesn't get behind my head. I think that looks good. Okay. So now what we need to do is install OpenCV. So I'm going to come and get this line of code, come over here. It is sudo apt install python3-opencv. So I will paste that and then I will hit enter. And that went really quick. Why? Because I'd already installed it. For you guys it might take five minutes or so to install, but you can just follow along with me here. Now, we want to see if the install worked correctly. So, you want to get this line of code. You can come over and scroll all the way over, like that. Copy. And what we want to do is see if the OpenCV installation went well. Okay. Yes. So, I have version 4.11. Hopefully, yours is something similar to that. Now, I want to kind of future-proof this lesson, and so what I've done is put all the commands here on this page. Now, if something changes in six months, you don't want to do what I'm saying here; you want to do what is on this page. I will update this page if something changes in the next six months. So just understand: if I am doing one thing on the screen, just do what is in this document. I will keep this document up to date, but hopefully it won't change very much. Okay, now what we need to do is install MediaPipe, and I have scrolled off the screen there. Okay, let me get back where you can see things. There it is. Okay, so now we are going to install MediaPipe. And again, I think I could type this faster than trying to copy and paste sometimes. Ah, okay. Copy. And this is going to be
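The OpenCV install and version check described above, as a short command sequence (a sketch; the exact version printed will depend on what Bookworm ships at the time):

```shell
# Install OpenCV's Python bindings from the system packages
sudo apt install -y python3-opencv

# Verify the install worked: this should print a version like 4.11.x,
# not "No module named 'cv2'"
python3 -c "import cv2; print(cv2.__version__)"
```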

Segment 3 (10:00 - 15:00)

MediaPipe. And we don't use MediaPipe today, but it's a very useful program to have in our suite of AI-on-the-edge tools. Now, notice, this is very important: you're installing MediaPipe, but you're saying --break-system-packages. And that's because if you just try to install MediaPipe, you're going to get an error that you have an externally managed Python environment, which means that you kind of have to use the Raspberry Pi installation. But if you say --break-system-packages, it will allow you to get around that. And very likely, we could break other things. But then again, we are just using an SD card dedicated to this AI project. So, let's hit that. And look at that. There it goes. Already satisfied. Yours might take five minutes or so to work. We're going to come over here, and now we're going to see if that worked. So I copy that whole line, and we should see MediaPipe shows up. I'm just basically doing a little program that imports it. And boom, the MediaPipe version is there. You just don't want to see "No module named"; that means that the installation did not go according to plan. Okay. Now what we want to do is create a virtual environment, because this YOLO really wants to run in its own virtual environment. And so we're going to get this python -m venv with system site packages. What is --system-site-packages for? It means that when I create the Python virtual environment, I want to bring in the components from the main Python installation. Why? Because that one has the Picamera2 libraries and modules in it, and a plain venv doesn't get that. And then if you try to pip install Picamera2, all types of things go wrong. So we're creating a virtual environment where we're taking our installs from our main Python program. Okay. So, we're going to do that, and that creates a virtual environment where you have your own little Python world in there. So, that went well.
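The MediaPipe install and the venv creation just described, gathered as commands (a sketch of the narrated steps; the venv name "yolo" matches what the transcript uses):

```shell
# Install MediaPipe; --break-system-packages bypasses the
# "externally managed environment" guard on Bookworm's system Python
pip install mediapipe --break-system-packages

# Verify it imports (you don't want to see "No module named")
python3 -c "import mediapipe as mp; print(mp.__version__)"

# Create a virtual environment that inherits the system packages,
# so Picamera2 from the system install is visible inside it
python3 -m venv --system-site-packages yolo
```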
Now, we're going to see if we can activate it, which means we're going to go into that virtual environment. And anytime you want to go into the virtual environment, you do this: source yolo/bin/activate, where yolo is the virtual environment that we just created. And when we do that, boom, we are now in the yolo virtual environment. Now, we want to install YOLO, and you want to be inside of the virtual environment when you do that. So, we're going to come in and paste this pip install. Ultralytics is the YOLO library and the YOLO framework. But you notice that we say numpy less than 2.0.0. Why? Because when we brought in the system Python packages, we got a very advanced NumPy that YOLO and Ultralytics don't play well with. So, we have to kind of force it to use this lower version of NumPy. Okay. So, we'll do that. And it's already done. Yours will probably take several minutes. Now, we would be ready to write a program and do object detection. But what I've got to tell you is it probably would not run very fast, because it's the full-blown version of YOLO, which is a little bit heavy even for a Pi 5. So, if we want to run at more than three or four frames a second, we need to create an optimized model for the Raspberry Pi 5. And so, we're going to export the model into the NCNN format, which is the format that will run very snappily, very stylishly on the Pi 5. So, we're going to get that, come over here, and paste. And that's actually doing the export, it looks like. So we might have to sit here for a second. Okay, there it is. Now if I come to my folder, and remember that I was in my home folder, what do I see? I see the folder of the model that I just created. So this is yolo, which has the virtual environment in it, and then this is the YOLO model optimized for the Raspberry Pi 5.
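Those last steps as commands (a sketch: the NumPy pin and the NCNN export follow the transcript's description, using Ultralytics' command-line export; the exact invocation on the author's page may differ slightly):

```shell
# Enter the virtual environment we just created
source yolo/bin/activate

# Install Ultralytics (the YOLO framework), pinning NumPy below 2.0.0
# because the system NumPy is too new for it
pip install ultralytics "numpy<2.0.0"

# Export the YOLO11 nano model to NCNN format for fast inference on the Pi 5;
# this creates a yolo11n_ncnn_model/ folder in the current directory
yolo export model=yolo11n.pt format=ncnn
```

After this you should see two folders in your home directory: `yolo` (the virtual environment) and `yolo11n_ncnn_model` (the optimized model).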
And so just make sure that you've got those two things there and that will all be good. Now we want

Segment 4 (15:00 - 20:00)

to make sure that we got it installed. So we're going to come down here and get this python -c line. Come over here. Okay. What this is going to do is start Python and import the YOLO library, and then we're going to see if it is there. Okay. So, we go like that. Okay: "Ultralytics YOLO ready." So, that means that we do have YOLO installed. Man, you guys, we are halfway there already. Okay. Now we are ready to start writing a program. And so what I need you to do is open up your Thonny. Okay, it's this one. All right, there it is. Okay, so we got our Thonny open. And what you've got to do is get Thonny to use that virtual environment that we just created, because we have to be using the Python version and the Python install that has all of those things that we just did. So you come up here to Tools, you come over to Run, and you say configure interpreter. Now, yours probably has something like /usr/bin/python. But here you're going to have to navigate to what we just installed. And so in the navigation you want to go to your home folder. Okay. And then you want to come to the yolo virtual environment, look in bin, and point at python, like that. Okay. So now it is going to run. Now, we don't even have to go into the virtual environment anymore, because when you open up Thonny now, it is going to go into that Python that is in the virtual environment, if that makes sense. I hope it does. Okay. So now I want to just start with a basic core program that will use OpenCV to just go out and grab a frame and show a frame. And I don't want to type that thing in, so I'm going to go ahead and give you guys that to start with. It's on the same page. Here it is. You can copy it, and then you can come over and paste it. And I'll just talk about this really quickly.
We're importing cv2, which is the OpenCV that we just installed. We are importing the Pi camera, and you do need to have your Pi camera connected, as I showed you. It's going to work because, right, we brought that into the virtual environment when we included the site packages. And then here we are starting the Pi camera. I'm going to run it at 1280 pixels wide by 720 high, and then this is just setting up the Pi camera. So I say use the resolution that I specified here. You want a format of RGB888, as you see here. I'm going to go ahead and set a frame rate of 60 to see if we can run this thing at 60 frames a second. And then we are going to align, which is just basically getting these image frames into the format that NumPy wants, so this just sort of makes it run faster. And then this preview that we just set up, we're going to configure the Pi camera with it, and we are going to start the camera. Then this frames per second, we're going to be calculating that on the fly to see how fast we're going. And this is a very simple program. We grab a frame, and then we are calculating here the frames per second that we're running at. We are going to put that calculated frames per second on our OpenCV window, and then we are going to show the frame. And then, I think I'm not going to move the window, so I'm going to take that out. You can leave that in; it just puts the window in a specific spot on your screen, but I'm going to leave mine where it will move around. And then this is just saying this is how you quit: if you press Q, you will quit. Okay. So, let's see if we have a working camera at this point. Okay. Boom. Look at that. All right. And so, here I am. And look at that. We are running here at 47 frames a second, which is pretty darn amazing.
Now, the thing is trying to run at 60 frames a second, but OpenCV is not quite keeping up. But that is just incredibly fast. So, you can see right off the bat the Pi 5 has some real horsepower, because just grabbing and showing a frame, we're almost up at the 60 frames per second. So, that's a really good result.
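The on-the-fly frames-per-second number described above is typically an exponentially smoothed estimate of the per-loop time. Here is a minimal pure-Python stand-in for that calculation (no camera needed; the 0.9 smoothing weight is an assumption for illustration, not necessarily what the author's script uses):

```python
def update_fps(fps, loop_time, weight=0.9):
    # Exponentially smooth the instantaneous frame rate (1 / loop_time)
    # so the FPS number drawn on the OpenCV window doesn't jitter
    # wildly from one frame to the next.
    inst = 1.0 / max(loop_time, 1e-9)  # guard against a zero-length loop
    return weight * fps + (1.0 - weight) * inst
```

Inside the capture loop you would time one iteration, pass the elapsed seconds as `loop_time`, and draw the returned value on the frame with `cv2.putText`.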

Segment 5 (20:00 - 25:00)

So hopefully you're getting something in that neighborhood as well. Now, how do we do object detection? So you can see me, right? And you can see that I'm running at almost 50 frames per second. And you see I have a banana, but nothing really very exciting is happening. So let's see if we can get it to recognize things. So remember, to get it to quit, you want to press Q. And that's kind of a good way because it exits really cleanly when you press Q. And now, how do we do object detection? All right. Well, the first thing that we're going to need to do here is, after import time, we need to say from ultralytics — that's a strange one — import YOLO, all caps. Okay. So, that will import the object detection library that we just installed. Okay, does that sound good? I think that will be good. Now, what we're going to need to do down here, after we've got the camera all up and running, is load the model. So, our model is going to be equal to YOLO, the method that we just imported. And now we are going to put in that model that we just exported. Remember when we converted the normal model to our model? So where did we put it? I am at home, and then my username is pjm — you need to use your username — and then you need yolo11n_ncnn_model; that's our model. Okay. And you should have created that. And I didn't put my opening quote over here. So, we're going to come here, quote, like that. Okay. And now, what task do we want to do? We want to do object detection. And so, we're going to say task equals detect, like that, and then we're going to close that up. Now, you've got to make sure that you use the right path here. So, let's see if we can open up a browser. And then this is that model that you created. You can right-mouse-click on it and say copy the path to that folder.
And then if I come over here and I just paste, like that, it gets it. Okay. And that might be the easier way for you to do it if you're at all confused about where yours is. That will work: find it in your file navigator and then put it in. I'm going to go ahead and run. It's not going to do anything, but I just want to see if we are able to load those into our program. Okay. Boom. Look at that. Not doing anything, but we didn't error out. And so, that is a really good sign if you can get to that point. And now here we are just starting our frames per second counter. We've got the model in there, and then we've got the start time in there for our frame counter. So now here we grab the frame, and this is where the magic happens: between grabbing the frame and showing the frame. That is where the magic happens. And so we have our frame now. And so what do I want to do? I want to analyze the frame. And so I want to get some results. Those results are going to be equal to my model that I just created up there, applied to the frame that we just grabbed. And then I want to look for anything with a confidence of, let's say, 0.3. Now, if you put a confidence of 0.1, it would find everything and think it's a banana. Okay? If you put a confidence of 0.99, no matter how many bananas you show it, it's never going to recognize a banana. And so, you play around with this to get it where you're not getting a lot of false alarms, but where you are pretty much finding things that are of interest. Okay. And now I'm going to say verbose equals False. Okay, verbose equals False.
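The confidence-threshold trade-off just described can be sketched as a plain filter. In the real program the filtering happens inside the model call itself (something like `model(frame, conf=0.3, verbose=False)`); this pure-Python stand-in, with made-up detection dicts, just shows what the threshold does:

```python
def filter_detections(detections, conf=0.3):
    # Keep only detections whose confidence meets the threshold.
    # conf=0.1 lets nearly everything through (lots of false bananas);
    # conf=0.99 rejects nearly everything (no bananas at all).
    return [d for d in detections if d["conf"] >= conf]
```

Tuning `conf` is exactly the knob the transcript describes: raise it to cut false alarms, lower it to catch more tentative detections.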

Segment 6 (25:00 - 30:00)

And I am having a little bit of an issue with my studio lights. So let's see if we can turn them off and turn them back on. Okay, that's good. It was kind of glitching on me and I didn't want to sit there and put up with that. Now, verbose equals False: if you just run this model, it prints out a bunch of stuff, and print statements are slow. So the printing slows things up, and I'm going to turn that printing off. You can set it to True and then you can see a lot of interesting stuff that it's printing out, but I'm going to turn it off because we're all about the frames per second, and this will run faster. Okay. Now, what I'm going to do is decorate the frame, that frame that I grabbed. Now, maybe it's found something interesting. So, I want to decorate my frame, or annotate my frame, with the things that it found. And how do I do that? It's going to be equal to — no, we already did the model — it's going to be results, and it's results[0], like that. Okay. Got to get it just like that. And then .plot(). Okay. So, I'm going to make a frame that has the original frame and has the annotations on top of it. Okay. Now, could it really be that easy? Should I be ex— I better move my cup of coffee. Should I be extra confident and just have my banana here, just in case? Okay, just in case. I'm going to have my banana here. And then we're going to come up here and we are going to run, and boom. Look at that banana. Giddy up. Do you see that? Look at that. Okay, now it sees me as a person, and believe me, that makes me happy. I had one model and it kept saying I was a teddy bear. Okay, but it is seeing me as a person and seeing the banana. Okay, now what is the really amazing thing? I'm going to get further out of your way here with this guy. What is really amazing here is we are operating at 10 frames per second without using a GPU. And you see this is completely usable.
Yeah, we'd love to be running at 60 frames per second, but 10 frames per second is really pretty smooth. And this thing is just going to town. You've got me and you've got the banana. Okay. But what about the bottle? Me, the banana, and the bottle. Okay. Now, this one is always kind of hard. Now, one of the things you see is that anything that you have in your hand, it is almost always going to guess is a cell phone. Now, why is it almost always going to guess a cell phone? Because when they train, they train on cell phones in people's hands. So, when it sees a hand with something in it, half the time it's going to guess cell phone. But let's see if I can put the bottle over here. We'll leave the bottle there. Okay. And now over here, I'm going to see — maybe it's over here. Okay. Remote. Hair dryer. Mouse. Okay. I need to put a donut. Okay. It's not seeing the mouse very well. I told you what it was going to think it was. Okay. You see, I'm showing it angles that it hasn't trained on very well. So, let's see. I did not want to do this, but let me see if I can reach over here and just see if we look at the mouse. Yeah, it sees it as a mouse if I just point it at the mouse. But you see, that's normally where you would see a mouse. You would normally see the mouse on the desktop, and it likes that. And then come back over here. There's the bottle. There's the banana. Boom. And guys, we are holding up there at 10 frames a second. Now, because I'm going to put this in my hand, we might run into the cell phone problem again. But let's see if we see the keyboard. Boom. We see the keyboard, and we see me, and we see the banana, and then we see the bottle. Guys, is this amazing? You guys leave me a comment down below. Let me see if you see that. I've been drinking coffee. Perhaps I need to brush my teeth with the toothbrush. Like maybe I'm confused. This is the toothbrush. That's the bottle. We brush our teeth with the toothbrush.
Now this one, I'm going to tell you, it really struggles with. It really struggles with Mr. Carrot. But let's see if we can get it to see it. It tends to think that this is something else a lot of times. Yeah, a lot of times

Segment 7 (30:00 - 35:00)

it wants to make the carrot something else. Okay. Baseball bat. Okay. Let's see. There's sometimes an angle, but I told you that it struggles with the carrot. And one of the reasons might be that this is a real carrot, a fresh carrot. And you know, most carrots that you have are very bright orange. So it might be looking for something very bright orange to call a carrot. So we'll say the old carrot is not working very well. Okay. Let's see what else it can find. Okay, that was loud. No harm done. Don't worry. Okay, so then here we're going to come up. Let me get the bottle out of the way. And we are going to see a book. Okay. A book. And you've got to kind of give it an angle where it looks like a book. I'm not sure. Yeah, it seems to do pretty good. Okay. Book. Again, things in the hand look like a cell phone. How did I have it that it worked? Like this: book. I'm not sure why it thinks that is a knife. Okay, book. It seems like book and carrot it struggles a little bit with. But guys, we haven't optimized this at all. This is just the flat-out straight download of the model. We haven't optimized anything yet. But guys, at 10 frames per second without an accelerator, that is absolutely pretty amazing. Okay, I want to show you something else. I'm not going to show you how to do this today, but you guys can let me know if you want me to make a video on how to do this. You can use all types of cameras. I'll just show you a little piece, like an IP camera. And so, let me see if I can come up here and run it on my IP camera. And if you guys want me to make a video that shows you how to do this, I will do that. But let's see what we are doing here. Let me come back up. Okay, let me see here. Where is my mouse? One of the things that I notice about the Pi is that sometimes the wireless mouse does not work very well on it, and it's a little annoying when you're trying to do something. Okay, so let's stop that.
Stop. Okay, let me see if I can get this to work. Waiting. Waiting. Okay. Boom. Here we are. Now, that is an IP camera. All right. And let's see if we can find anything interesting here to look at. Let me move this a little bit. We're holding up at a respectable eight frames per second. But let's see if there's anything to look at out there. And this is annoying me. I've got to log on to the camera to move it and all the interesting things. Okay, this is getting serious. Huh. It was on caps lock. How many times have you guys done that? Okay, now this time we're going to hope that it works. These wireless keyboards are killing me. Okay. So, I am logged on to the camera from the control. And so, now what we're going to do is move around and see if we can find anything interesting. So, let me just start by backing up. And today is a cloudy day outside, so the fishermen seem to be taking the day off. Plus, this is New Year's Eve, and so maybe

Segment 8 (35:00 - 40:00)

people are kind of taking the day off, perhaps. We're working at a good nine frames per second. That's good. Let's come down here. Let's go up. Up, over. And let me zoom. Banana. Do you see my bananas there? Look at that. It recognized a bunch of bananas there. So, that is really pretty exciting. I really want to see some action out on the river. Let's see if we can find anything out on that river. I think all of our quaint, friendly little fisherman neighbors are taking the day off. I'll just scan around a little bit. Not seeing anybody. And you know, as I was trying to log on with my caps lock, there was a big tour boat, a big motorboat, that went by with a bunch of people in it. And that would have been great. But let's find something else to look at. Okay, so I'll just switch this around, and then I'll come down here and boom, two chairs. Look at that. Okay, so now let me come up here, and it's seeing those chairs. Look at that. It's seeing the dining room table and the chairs. And then here we can look down, and it is a champ at finding these chairs. Huh. Okay. And then let's see what else, if there's anything else. This is just right outside; when I'm looking up, I'm seeing those chairs, so those are right in front of me. Let's see if there's anything else that we can find. Let's see. I probably can't zoom that far. What we've got: Mr. Anthony is not in a position that makes him look very — well, it makes him look a little bit like a fire hydrant, huh? Let's see if he'll stand up in a minute and see if we can see a person. Yeah. You see, it saw him as a person. Now he's behind the tree. And nobody is really cooperating with me today. And he's leaning over again. Let's see if there's anything else out here that we might see. Zoom out. Anthony, why are you hiding behind the tree? I don't think it'll recognize tomatoes. No, the cabbages and the tomatoes are too small. It's not really going to see them.
Hope you guys don't mind me just kind of looking around here, seeing if there's anything that we can recognize. I don't think it would recognize a mango on a tree. It sees it as a fruit, but it thinks it's an apple. It probably was not really trained on mangoes; mangoes are probably not the favorite thing that people trained on. Let's look back at the river and see if we can see anything. So guys, let me know: do you want me to show you how to grab any IP camera stream? I can make a video on that if you want. And also, if you guys are really interested, leave me a comment down below. And maybe — ah, boat, boat. Okay, zoom. Here we go. We got action. Come on. Come out from behind the tree. I'm sure he's going to come out. There he is, and boat. Boom. And it sees the people in the boat, too. Look at that. Let's see if we can track him. Look at that tracking in action. We see the boat, and we can kind of see the people in it as well. Look at that. Saw the bird, too. I saw the bird flying over. Guys, is this amazing? So, let me know if you want me to show you how: if you have a security camera or any type of RTSP camera, a streaming camera, I can show you how you can hook this in, and it's just a couple of lines of code to make it work. Okay, guys. The other thing, if you want me to see — let's see if we can go back and
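Hooking in an IP camera really is only a couple of lines, because OpenCV's `cv2.VideoCapture` accepts an RTSP URL in place of a device index. A minimal sketch of assembling such a URL (the credentials, host, and stream path here are illustrative; the path component in particular varies by camera vendor):

```python
def rtsp_url(user, password, host, port=554, path="stream1"):
    # Build a typical RTSP URL. You would pass the result to
    # cv2.VideoCapture(url) instead of a local camera index,
    # then read frames from it exactly as before.
    return f"rtsp://{user}:{password}@{host}:{port}/{path}"
```

For example, `cv2.VideoCapture(rtsp_url("admin", "secret", "192.168.1.50"))` would open the stream, and the rest of the detection loop is unchanged.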

Segment 9 (40:00 - 43:00)

look at something as I am wrapping up. Man, usually there's five or six boats out there. Maybe that's a boat, but he's very, very far away, so I don't know if we'll be able to see him. Look at that: boat and person. That is pretty amazing. Now, a little bit of a problem: it's focusing on that tree, and that did not help us. And it will probably take him forever to come back out on this side, so we don't want to wait for him. We'll see if we can find something else. Anybody else out there today? I do not think so. Now look at that. I didn't even see this, but I saw it pop up on the screen. Birds. Oh, bear. It thinks it's a bear. A person. You can see that that's a little bit awkward, because it's two birds really close together and they're black, and so it thinks it is a bear. It can't decide between a bear and a bird. If they would scoot apart, I think the thing would actually work. And it thinks that the leaves are persons right now. So again, you see, it's a very foggy day, so we're not getting a lot of contrast. So, this is kind of a pretty hard test we're putting this thing through. Is that a boat? I don't think so. That's just a shoreline. Okay guys, I hope that you enjoyed this lesson, and let me know if you want me to do more things like this. You know, normally I'm the guy that does a long series of classes, like a class with a hundred videos. But if you want me to do more of these quick-hit things for you guys that already know how to use the Raspberry Pi — you now know how to install YOLO and make a YOLO model that operates really quickly on the Raspberry Pi. If you want me to do more of these one-hit videos, let me know. The thing is, I could make another video that shows you how to get the stream from an IP camera.
I could also show you how, without adding an accelerator hat, to get this thing up and running at 30 or 40 frames a second, because there are some tricks that I have learned that can actually make this thing run even faster, if you want that solution. Guys, I hope you're having as much fun with this artificial intelligence as I am. Leave some comments down below. Let me know if you want to see more of this type of thing. Also, at this point, I always give a big shout-out to my Patreons. Without your support, I would not be able to continue to produce fresh new content every single week. So, big thank you to you. You can help me by giving this video a thumbs up and leaving a comment down below. That will help me with the old YouTube juice. If you haven't already, subscribe to the channel. When you do, make sure you ring that bell so you'll get a notification when future lessons drop. And most importantly, share this video with other people, because the world needs more people thinking like an engineer and fewer people sitting around watching silly cat videos. Paul McWhorter with toptechboy.com. I will talk to you guys later.
