How To Use Google AI Studio For Beginners (2025 Tutorial Updated)

TheAIGRID · 09.05.2025 · 41,854 views · 907 likes


Video description
Join my AI Academy - https://www.skool.com/postagiprepardness
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

00:00 — Powerful AI Tool
01:00 — Choosing Best Models
02:30 — Advanced Settings Tips
03:45 — Google Search Grounding
05:00 — Retry + Thought Mode
06:00 — System Instructions Demo
07:15 — Safety Filters Revealed
08:00 — Prompt Gallery Explained
09:00 — Visual File Analysis
10:30 — YouTube vs Transcript
11:30 — Live AI Streaming
12:45 — Real-Time Doc Help
13:45 — Video Gen Showcase
15:00 — Starter Apps Tour
16:15 — Build Your Own

Links From Todays Video:

Welcome to my channel where i bring you the latest breakthroughs in AI. From deep learning to robotics, i cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything i missed?

(For Business Enquiries) contact@theaigrid.com

Music Used
LEMMiNO - Cipher https://www.youtube.com/watch?v=b0q5PR1xpA0 CC BY-SA 4.0
LEMMiNO - Encounters https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (15 segments)

Powerful AI Tool

So, Google AI Studio is one of the most powerful AI tools built by Google, giving you access to the full suite of Google models, and most people don't realize just how powerful it is. In this tutorial, I'll guide you step by step through how to take advantage of this incredible studio. One of the first things you'll see is that there are four key areas: Chat, Stream, Video Gen, and Starter Apps. First, I'm going to focus on the Chat area. In Chat, the first thing you'll want to do is pick a model. Interestingly, Google offers many different models you may want to test, and since there are so many, if you quickly want to understand what a model offers, hover over it and in the bottom right it will tell you exactly what that model is best for. For example, Gemini 2.5 Pro Preview is best for coding, reasoning, and multimodal understanding. It will also tell you the current use cases: reasoning over complex problems, tackling difficult code, math, and STEM problems, and using the long context to analyze large datasets, codebases, or documents. It will also tell you when the knowledge cutoff is.

Choosing Best Models

Now, what's also interesting is that it will tell you the rate limits. If you're using AI Studio for free, it shows exactly how many free requests you have; here, it's five requests per minute and 25 requests per day. So if you send 25 prompts into this chat, you're limited to 25 per day, and if you want to talk more than that, it will eventually cost money. Again, if we hover over other models, you'll see they have different rate limits and different best use cases. The Gemini series are the standard models, some with thinking modes that reason for longer. The Gemma models, on the other hand, are much smaller models aimed at developers who want to test on-device capabilities, because that's likely where those models will be housed; they don't have much use for the average person. And in the other section here, we have the LearnLM 2.0 models, which are designed to teach you subjects, break down complex topics, and engage in active teaching. Now that you know which models you can talk with, let's look at some other key features that most people don't know. One really useful thing is to make sure you enable specific tools before you even get into a chat. For example, if we click this drop-down menu and go to advanced settings, we can change things such as the output length. Sometimes the output of a conversation is going to be rather long; lowering this value makes the model output fewer tokens in each request. Additionally, there are more tools if we scroll down.
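If you ever move from the AI Studio UI to scripting the Gemini API directly, the output-length slider maps to a token cap in the request's generation config. Here is a minimal sketch of what an equivalent generateContent request body could look like; the field names follow the public REST docs as I understand them, and the prompt and limit are just example values to verify against current documentation:

```python
import json

def build_request(prompt: str, max_output_tokens: int) -> dict:
    """Build a generateContent-style request body.

    "maxOutputTokens" mirrors the "Output length" control in AI Studio's
    advanced settings: lower values make the model emit fewer tokens
    per response.
    """
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {"maxOutputTokens": max_output_tokens},
    }

body = build_request("Summarize what Google AI Studio does in two sentences.", 256)
print(json.dumps(body, indent=2))
```

This only constructs the payload; actually sending it requires an API key and an HTTP call to the Gemini endpoint.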

Advanced Settings Tips

One of the things most people miss here is the option called grounding with Google Search. Basically, any time you ask a question, the model will ground its answer in Google Search. Let's say, for example, I ask who won the recent boxing match, the one between Conor Benn and Chris Eubank, which was a rather popular fight. You can see right here that, because I've grounded this with Google Search, it's going to think first, since this is a thinking model, and then it gives me additional sources because the answer is grounded in search. That gives a much richer response with up-to-date information. Most people don't realize you can do this, and it's a neat trick I keep using in Google AI Studio. As always, if you don't like the final output, you can click this Gemini button right here to rerun your request; sometimes the model might hallucinate, and this gives you a quick retry. And if you want to understand why the model got something right or wrong, you can open up the thoughts in this experimental mode and walk through the reasoning steps it took. Sometimes the model reaches a conclusion and you aren't sure why; here you can see the step-by-step process and how it reached the final answer, which helps you realize that maybe your question wasn't clear enough, or maybe the model took information from a source it wasn't supposed to.
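For reference, in the Gemini API the same grounding toggle corresponds to attaching the Google Search tool to a request. A minimal sketch, assuming the tool name used in the public API docs (treat it as something to double-check):

```python
import json

def grounded_request(question: str) -> dict:
    # Attaching the Google Search tool tells the model to ground its
    # answer in live search results instead of only its training data.
    return {
        "contents": [{"role": "user", "parts": [{"text": question}]}],
        "tools": [{"google_search": {}}],
    }

body = grounded_request("Who won the recent Benn vs Eubank boxing match?")
print(json.dumps(body))
```

Grounded responses typically come back with source metadata attached, which is what surfaces as the citation links in the Studio UI.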

Google Search Grounding

Now, we also have structured outputs, which is where the AI gives answers in a neat format like a checklist or a table instead of plain text. You can turn this on and edit it, but since this is a beginner video, I won't go into the details. We also have code execution, which allows the AI to run code, like solving a math problem or generating a chart. And then, of course, we have function calling, which lets the AI connect to other tools or apps to get things done for you; this is mainly for developers, so I won't dive into it either. Most people also don't realize that you can compare different models. If you want to test how models respond differently, hit the button called compare mode and select different models to run prompts against. For example, I wrote: write an email that convinces Sam Altman to invest in my AI agent startup. I can select Gemini 2.5 Flash, then maybe an older model like 1.5 Flash, and when I press run I see two different outputs here. This is really useful if you're trying to understand which Google models are best for which task, because having them side by side is immediately valuable: it lets you see the differences and the nuances at a glance rather than going back and forth between chats.
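To make the structured-outputs idea concrete: under the hood, turning it on amounts to constraining the response with a JSON schema. A sketch of what such a request could look like, assuming the responseMimeType/responseSchema fields described in the Gemini API docs (the checklist schema itself is just an invented example):

```python
import json

# A response schema forces the model to reply as machine-readable JSON
# (the "structured output" toggle) instead of free-form prose.
CHECKLIST_SCHEMA = {
    "type": "ARRAY",
    "items": {
        "type": "OBJECT",
        "properties": {
            "step": {"type": "STRING"},
            "done": {"type": "BOOLEAN"},
        },
        "required": ["step", "done"],
    },
}

body = {
    "contents": [{"role": "user", "parts": [{"text": "Give me a launch checklist for an AI startup."}]}],
    "generationConfig": {
        "responseMimeType": "application/json",
        "responseSchema": CHECKLIST_SCHEMA,
    },
}
print(json.dumps(body, indent=2))
```

The appeal is that the reply can be parsed directly by code rather than scraped out of prose.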

Retry + Thought Mode

In this example, we can see that Gemini 2.5 Flash is way better than 1.5 Flash: it spent more time thinking, included a lot more detail, listed the key things the email does, and even told me what to check before sending. So overall, this is really useful, and remember, as I said at the beginning, there are other models down here you can test too; this is where many people take advantage of it. Now, something I've seen people do quite a lot in Google AI Studio is use system instructions. If you're familiar with AI, this is pretty simple: it's an optional tone and set of instructions for the model. Here I wrote: you are a blunt business adviser, giving people raw advice for business based on fundamentals from books. Now, if I ask a question like how do I start the right AI startup, it's going to use this as context. So with my system instructions in place, I just click run, and it's now going to answer in that blunt persona.

System Instructions Demo

You can see right here it says: identify my persona, I'm a blunt business adviser giving raw advice based on fundamentals from books. It uses direct, no-nonsense language and really leans into that personality, which is great if you want to test a model in specific scenarios. Right here it says: here's the raw truth based on real data, and of course business is very real and very raw; you want that raw data as quickly as possible, and I'm able to get it immediately and process it. This is super useful because your system prompt defines how the model is going to act, so rather than starting every single chat with a prompt saying act as this person, you just set it once at the top. Another cool option, for more granularity, is that you can enable thinking mode and then set a thinking budget. If you want the model to think really long about everything, you can increase it quite a lot, and if you don't, you can set the budget to a minimal amount. This is really useful when different queries call for different amounts of reasoning, and it's something I personally use to make the model reason for longer or shorter depending on my initial query. Another amazing tip, and I think this one really shows that Google is more lenient than other AI providers, is the safety settings feature.
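The persona and thinking-budget controls above also map onto request fields in the Gemini API. A minimal sketch combining both, assuming the systemInstruction and generationConfig.thinkingConfig shapes from the public docs (the persona text and budget value are just the examples from this demo):

```python
import json

def persona_request(question: str, thinking_budget: int) -> dict:
    """Pair a system instruction (persona) with a capped thinking budget."""
    return {
        # The system instruction plays the role of the "System Instructions"
        # box at the top of the chat in AI Studio.
        "systemInstruction": {
            "parts": [{"text": "You are a blunt business adviser giving raw "
                               "advice based on fundamentals from books."}]
        },
        "contents": [{"role": "user", "parts": [{"text": question}]}],
        "generationConfig": {
            # thinkingBudget caps how many tokens the model may spend
            # reasoning before it answers.
            "thinkingConfig": {"thinkingBudget": thinking_budget},
        },
    }

body = persona_request("How do I start the right AI startup?", 1024)
print(json.dumps(body, indent=2))
```

Setting the budget low is the scripted equivalent of dialing the thinking slider down for quick queries.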

Safety Filters Revealed

Most people don't realize that if you click edit, you can turn the different safety filters on or off. Of course, most models restrict harmful, hateful, harassing, and sexually explicit content, but in AI Studio these filters are initially turned down, so you're essentially seeing the raw Google model. You can run these safety filters at whatever level you like, and if you want the model to allow edgier content, you can turn them off entirely. This essentially lets you strip back parts of the model's filtering and interact with it in a more raw way. Of course, this is mainly a tool for developers who want to deploy applications and need models that can handle a little more, but I find it really useful to know about. Now, another thing that most people don't realize is that there is also a prompt gallery.
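In the API, that edit panel corresponds to a per-category list of safety settings. The harm-category and threshold names below are the enum values documented for the Gemini API; the request around them is a minimal sketch:

```python
import json

# Each harm category gets its own threshold; BLOCK_NONE effectively
# turns that filter off, mirroring the sliders in AI Studio's edit panel.
SAFETY_OFF = [
    {"category": c, "threshold": "BLOCK_NONE"}
    for c in (
        "HARM_CATEGORY_HARASSMENT",
        "HARM_CATEGORY_HATE_SPEECH",
        "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "HARM_CATEGORY_DANGEROUS_CONTENT",
    )
]

body = {
    "contents": [{"role": "user", "parts": [{"text": "Example prompt"}]}],
    "safetySettings": SAFETY_OFF,
}
print(json.dumps(body, indent=2))
```

To tighten rather than loosen a filter, thresholds like BLOCK_LOW_AND_ABOVE go in the same slot.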

Prompt Gallery Explained

If you click prompt gallery, it immediately opens up a large array of different prompts: basically, ideas to get you started and show you the capabilities Google AI Studio has. If you want an easier way to browse them visually, head over to this website, which I'll link in the description; it has all the prompts organized so it's easier on the eyes, because in the gallery itself it's literally just text on text. That makes it easy to identify which prompts you might want to test and see how well they perform compared to other models. Now, Google's standout feature is of course the long context window. What's really cool about this is that you can insert assets such as images, videos, files, or audio. So let's look at how to import files into Google AI Studio, because this is by far one of the most powerful capabilities of these models. If you click the plus icon, you'll see six different things you can immediately add to the model. This is so useful because the long context window can analyze your files, and one of the really cool ones is that you can immediately analyze YouTube videos.

Visual File Analysis

Now, unfortunately, the way AI Studio analyzes YouTube videos is a bit different than you might think: it's a visual analysis of the video. Let's say, for example, I take this video URL of a car speeding down a high street and add the simple prompt: what color was the Porsche? It's actually going to watch the video and then tell me exactly what it saw. You can see right here it says: analyze the request, watch the video from beginning to end; then: I can see the Porsche appear starting around 4 seconds; then it observes the car's exterior as it drives by, and of course it gives me the final answer. For a longer video where you need to visually identify something, this can become remarkably useful. However, if you want to analyze transcripts, that's better done in Google's original Gemini app. If you go over to Gemini and tag @YouTube, it will let you analyze the transcripts of different videos; I can take a video on AI here and simply say summarize this video. I know that isn't Google AI Studio, but for those of you who quickly wanted that solution, this is exactly how you do it: Gemini analyzes the transcript, while AI Studio visually analyzes the video, so for any visual analysis, AI Studio is what you'd want to use. Of course, if you don't want to upload your own files, you can just click sample media, and you'll see a bunch of different things to interact with: a 10-minute clip, a 44-minute full movie you can ask questions about, a 10-minute tour of the Museum of Natural History, and so on, so you can see exactly how good the model is at watching videos. This is basically what you want to use if you want to quickly analyze things without uploading your own files.
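For the scripted version of the YouTube trick: the Gemini API lets a request part point at a video URL alongside the text question. A minimal sketch assuming the fileData/fileUri part shape from the API docs; the URL here is a placeholder, not the actual clip from the demo:

```python
import json

def video_question(youtube_url: str, question: str) -> dict:
    # A fileData part pointing at a public YouTube URL asks the model to
    # analyze the video itself (frames and audio), not its transcript.
    return {
        "contents": [{
            "role": "user",
            "parts": [
                {"fileData": {"fileUri": youtube_url}},
                {"text": question},
            ],
        }]
    }

body = video_question(
    "https://www.youtube.com/watch?v=PLACEHOLDER",
    "What color was the Porsche?",
)
print(json.dumps(body, indent=2))
```

The same two-part pattern (media part plus text part) applies to uploaded audio and image files.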

YouTube vs Transcript

It's really, really useful, especially if you want to analyze different audio files, and you can go back and forth with the model asking questions about them. Here I've actually uploaded an interview with Sam Altman; basically just the audio. I'm going to ask the model: what was the most insightful thing/insight about the future of AI, and then click run. This is a 36-minute interview with the full audio, about 70,000 tokens, so it's going to take some time to reason. I've also selected Gemini 2.5 Flash, so it's rather quick. Now you can see it has gone and listened to the entire podcast and immediately given me the major insights. This is really useful when you have a long audio file you want to quickly analyze, and you can see it managed this really well. Next, let's go on to the streaming section of the video. Essentially, this is where the AI can watch your screen and act as a digital co-pilot.

Live AI Streaming

So now let's use the Gemini Live feature to share screens. This is where you can literally stream your desktop to an AI that watches your every move, which is pretty useful if you're browsing something and need a quick explanation. Let me show you a firsthand demo of where this is really useful. Hey Gemini, can you see my screen? Yes, I can see your screen. Okay, so I'm reading this paper, but what is it about anyway? The paper is titled Vending-Bench, a benchmark for long-term coherence of autonomous agents. In this paper, the authors present Vending-Bench, a simulated environment designed to test an LLM-based agent's ability to manage a straightforward, long-running business scenario: operating a vending machine. Is there anything in particular about the paper you'd like to discuss? Uh, yeah. What does this bit mean here? I don't really get it. Can you explain it as simply as possible? I can certainly help. Are you referring to the diagram in figure one, overview of Vending-Bench? Yeah, figure one. I don't really know what that means. Okay, so figure one illustrates the setup of the Vending-Bench environment. The main agent has access to remote tools such as sending emails, using the internet, and checking its money balance. The main agent can also use vending machine tools like restocking the machine, setting prices, viewing inventory, and collecting cash, and can run or chat with a sub-agent to manage these tasks. Does that make sense?

Real-Time Doc Help

Yeah. So, what's the overall goal of this then, I'm guessing? And how much money did these guys make? If I understand correctly, you're asking about the overall goal of the models in the study and how much money they made. Is that right? Yeah, that's correct. The Vending-Bench experiment and how much money they made. So, right there, you can see that I can use this when reading complex documents to have an AI assist me and break down complex topics in real time. This is remarkably useful if you're reading legal documents, or anything complex, or maybe even using a new piece of software you don't really know how to use. It's really, really helpful, especially for beginners, especially if you're watching this video. Now, of course, Google AI Studio also has the video gen feature, and this is really cool. We initially have these example videos, which we can one-click to generate videos exactly like them. Currently, the options here are limited: we only have Veo 2, and of course in the future we'll get Veo 3.

Video Gen Showcase

The maximum number of results per generation is two. They don't state how many Veo 2 generations you get, but I wouldn't expect it to be more than four or five, so please be careful with how many Veo videos you generate, because on many websites Veo is probably one of the most expensive models in terms of credits. If you do want to generate a video, you can either just add a text prompt or start from an image: you can literally upload one, or bring one in from the sample media. For example, I can take this sea turtle right here and literally say: make the turtle swim. Now, when it comes to the aspect ratio, you can change it for mobile or desktop; there isn't a one-to-one aspect ratio for things like Instagram. Of course, you can also change the video duration, which is really cool, and you can add a negative prompt, which is basically things you don't want to see; if there's anything you wouldn't want in the video, this is where you'd define it. Now, because I've already added an image, the text prompt isn't going to do that much, but if I just click run, it returns the result in a moment. And now the video is here; I just click play, and it's generated, and it actually looks absolutely incredible. Veo 2 is definitely one of the standout models. I can make this a larger view so you can see it a little better, but yeah, that video actually looks completely realistic by any standards.
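The Veo controls shown in the UI (aspect ratio, duration, number of results, negative prompt) correspond to a generation config when you drive video generation from code. This is only an illustrative sketch: the field names below loosely follow the google-genai SDK's video-generation config, and the values are the ones from this demo, so verify both against current documentation before relying on them:

```python
import json

# Hypothetical mapping of the AI Studio video-gen controls to a config dict.
veo_config = {
    "aspect_ratio": "9:16",          # portrait, i.e. the "mobile" option
    "duration_seconds": 5,           # video duration control
    "number_of_videos": 2,           # "number of results" cap in the UI
    "negative_prompt": "text overlays, blurry footage",  # things to avoid
}

prompt = "Make the turtle swim."
request = {"prompt": prompt, "config": veo_config}
print(json.dumps(request, indent=2))
```

The negative prompt is worth scripting deliberately: it is the one field that describes what should *not* appear, so keep it a comma-separated list of unwanted elements.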

Starter Apps Tour

I mean, all of the shadows and the lighting, it looks really cool. What's also nice is that you can export the result straight to your Google Drive if you need to, which is really useful, or just download it immediately in MP4 format. So, once again, video gen is super useful, especially if you just want to quickly test something out. Now, Google AI Studio definitely has some other really cool features. They have these starter apps right here, which are basically areas that let you explore, with code, the powerful AI behind Google AI Studio. For example, there's one that instantly creates flashcards. If you just click it, it immediately opens a little app that does something with the AI. This is a flashcard generator, so I'll type in AI terms and hit generate flashcards. Right now it's using the Google AI, largely an LLM, and probably coding up these flashcards in a particular format, and these small apps are ways you can explore what's possible with the AI itself; all of them can be rebuilt yourself using AI Studio. Right here you can see it's given me some flashcards: ML, machine learning, a type of AI that allows systems to learn, and so on. So here we have these flashcards for when we want to learn exactly what things mean.

Build Your Own

This is where we would just flip them over to see exactly what's on the other side, and it's really useful for learning. Of course, when we go back to the starter apps page, we can see there are many different starter apps we can use, and if we want to create an app, we can basically pick a template. There are many options here: a prompt template, live, a chat example, embeddings, and many more. As a beginner, I'm not going to go too deep on every single one since it's a bit more advanced, but this is where you'd want to go when you're ready to explore building things that people could really use. So this is something that is really, really useful. Now, overall, if you enjoyed this, don't forget to check out your history: you'll be able to see all of your chats, go back to them, and review everything if you wish. Hopefully this video helped you understand completely how to use Google AI Studio. I think it's only going to get better from here.
