# Tips for creating smarter, not harder in Brand Studio

## Metadata

- **Channel:** Stability AI
- **YouTube:** https://www.youtube.com/watch?v=9H0L50QzUl0
- **Date:** 08.04.2026
- **Duration:** 6:14
- **Views:** 16

## Description

This tutorial covers intelligent features in Brand Studio that help you move faster, spend fewer credits, and get better results as you start scaling your creative production.

What you'll learn:

✨ Prompt Enhancer: Let the platform fill in lighting, composition, and technical details automatically. Everything it adds is editable before you generate, so you stay in control.

🧭 Curated Model Routing: Brand Studio automatically selects the best model for your use case based on brand consistency, product accuracy, text rendering, and more. Or turn it off and pick your own.

🔄 Variations Before You Regenerate: If an image is 60% there, don't start over. Use the Variation tool to create alternatives.

💬 Add Text and Finishing Touches: Drag your polished image into the prompt bar and describe what you want added. The platform reads it as context and applies your instructions in one step.

⚡ Ideate with Efficient Models First: Use Stable Diffusion 3.5 to explore directions fast: four images at a time, under ten seconds, lowest credit cost.

🔗 Follow along in Brand Studio → https://bit.ly/4vg00yP

## Contents

### [0:00](https://www.youtube.com/watch?v=9H0L50QzUl0) Segment 1 (00:00 - 05:00)

So, by this point, you might be pretty familiar with Brand Studio. You've probably been using it for some production tasks, or maybe for some one-off tasks, but let's talk about some tips and tricks for how you can really get the most out of the platform, and cover some features you might not have played around with yet.

Starting with prompt enhancement. Prompting is by far the most difficult part of using generative AI; I don't think that's a secret to anybody. We've tried to take some of the guesswork out of that for you with the prompt enhancer, located right here next to the create tool. It's pretty simple. We can put in a prompt that's maybe pretty typical for our use case. So I could say: a man sitting at a table eating chips. I'm going to go ahead and create, let's say, three images using just this really short prompt. And while these are generating, I'm going to hit the enhance prompt button. What that's going to do is look at my prompt, try to get the essence of it, and then reconstruct it with as much detail as possible. So we can see now we've specified the material the table is made out of, where the bag of chips is located, and what the man's hands look like: they're large and calloused, holding a handful of chips, at a table with a few scattered chips and crumbs. We're actually setting a scene. Now we can try a couple of images using this version and see what difference a more enhanced prompt makes. And what's great about this is that it's editable by me before I hit create. I could go through here and change some of these details. Say his eyes are closed, but maybe I want one open, or maybe I want him wearing glasses; I can edit the prompt ahead of time. It's a great jumping-off point for inspiration, and it makes sure I'm getting as much detail as possible. So, these are the original generations, which used the really short prompt.
Again, nothing wrong with these. I can see him; he's eating from a bowl. I've got the chip bag. I've got another man here, also with a chip bag. This one's outside. Another one in the selection. And if I go through the ones generated with the enhanced prompt, they're much more specific to what I was trying to prompt for. These are going to be much more in line with a specific aesthetic style, because they're adhering to a much more detailed prompt. That's why detailed prompts matter a lot for making sure you're getting what you're expecting. The man consistently looks the same. His eyes are consistently the same. The hands have that calloused look. The table is wooden. There's a lot of great stuff here in terms of these chips. And these serve as jumping-off images for me to use within my workflows.

We already touched on it briefly, but let's talk about the curated model router. This is on by default, which is this auto model selection. It interprets your prompt, and based on your prompt, it picks the best model in the background, so you don't have to think about which model is best. But at this point, you've been using Brand Studio for a while. Maybe some models are better at certain things than others, or maybe you just like the aesthetic look of another model. You can always turn auto model selection off and pick from any of the models inside Brand Studio. I'm a really big fan of the aesthetics of Cream, so I can go ahead and select Cream, run this same prompt again, and see what the difference looks like. Another thing you can do here is swap through all the different models to compare. So I can say, "Oh, what does this look like with Nano Banana?"
Then I run that same prompt again, and I have a direct comparison point to check what the outputs from each model look like. So you can see I got my results back from Cream. It has much more of this aesthetic look, something I'm much more a fan of. It adheres very well to stylistic prompting; I love it a lot. And you can see that these results are very similar to our previous results; you'd probably believe the previous ones were also from Nano Banana. If you're ever curious what model an image used, say you think, "Hey, I really like this style," you can find that information in the image details. Click into the image, go to this little eye icon and Properties, and it will show the model that was used to create it, as well as the prompt you used, the project it was in, and all the additional information.

Let's talk about some other ways you can get your final image very close to where you need it to be. Generative AI is very much a slot-machine kind of game in a lot of ways when you're doing edits, and the more options you can give yourself, the better off you'll be. So, let's say I've gotten this image. It's 60%, maybe 90% of the way there. It's close to where I want it to be in terms of aesthetic style, but maybe I don't want certain aspects of it. Maybe I don't like this shirt, or some other detail. I can go through and do variations. Maybe I'm not even 100% sure what it is I don't like about this image. If you haven't already played with the variation tool, this is where it can become quite strong. Go to the edit option and hit variations. You can see it has already pre-loaded the description, which is the prompt for the man. But I just want to change how much the result varies from this image.
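The "image details" lookup described above amounts to keeping generation metadata (model, prompt, project) attached to every output so it can be retrieved later. A tiny hedged sketch of that idea; the class and field names are invented and are not Brand Studio's actual schema:

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch of per-image generation metadata, like what the
# eye-icon / Properties view surfaces. Field names are invented.

@dataclass
class GeneratedImage:
    image_id: str
    model: str
    prompt: str
    project: str

class Library:
    """Keeps metadata for every generated image, keyed by image id."""

    def __init__(self) -> None:
        self._images: dict[str, GeneratedImage] = {}

    def record(self, img: GeneratedImage) -> None:
        self._images[img.image_id] = img

    def details(self, image_id: str) -> dict:
        # Everything you'd need to reproduce or trace a result.
        return asdict(self._images[image_id])

lib = Library()
lib.record(GeneratedImage("img-001", "nano-banana",
                          "a man eating chips", "snack-campaign"))
print(lib.details("img-001")["model"])
```

The point of the sketch is simply that model, prompt, and project travel with the image, which is why "I really like this style" is always answerable later.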
Variation strength is how much you want this image to be reimagined. I'm going to keep it at, say, 30%, and again generate as many options as I can. I'm going to hit variations and just see what this outputs. The way this works is that it interprets the style of the image as well as the structure of the image, so my variations will never stray too far from the composition of the image itself; it will always just give me some small changes. So this is my base image, and we can start walking through the changes. This is the next one: you see his face is altered, and the chips are altered a little bit. Next one over: again, the face is altered. We're looking at some really small changes, because the variation strength was low. The crumbs are a little bit different. And to see the contrast, if I turn that variation strength up really high and hit create, the outputs I'm getting really start to diverge from that base output. All the structure is still there, but the man is now significantly changed. We're getting outputs that are much more different, so it's a great way of tweaking even further. The material in the background is starting to change. He has a wedding ring now. Congratulations. There's a lot more you can do with the variation tool, and if you crank this thing all the way up, you can start to get some truly wacky results that are really out there and make you think about things in a different way. And then one of the last ways you can get the most out of your Brand Studio subscription that I'm going to talk about today is how you can do the
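The behavior of the strength slider, where 30% gives small changes and a high value drifts far from the base, can be illustrated numerically. Real diffusion variation operates on model latents, so this is only a toy analogy: it blends a base vector with fresh noise by the given strength.

```python
import random

# Toy analogy for variation strength: interpolate a base "latent" toward
# random noise. Not how Brand Studio actually works internally; it only
# shows why low strength stays close to the base and high strength drifts.

def make_variation(base: list[float], strength: float, seed: int) -> list[float]:
    """Move each value toward fresh noise by `strength` in [0, 1]."""
    rng = random.Random(seed)
    return [(1 - strength) * b + strength * rng.uniform(-1, 1) for b in base]

def distance(a: list[float], b: list[float]) -> float:
    """Mean absolute difference between two vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

base = [0.2, -0.5, 0.9, 0.1]
subtle = make_variation(base, strength=0.3, seed=7)  # small changes
wild = make_variation(base, strength=0.9, seed=7)    # big changes
print(distance(base, subtle), distance(base, wild))
```

With the same noise, the drift from the base scales linearly with strength, which is exactly the "crank it up for wacky results" effect described above.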

### [5:00](https://www.youtube.com/watch?v=9H0L50QzUl0&t=300s) Segment 2 (05:00 - 06:00)

most to ideate. Again, it's no secret that generative AI requires as many swings of that slot machine as possible to get you the best results, and a lot of that can come down to finding the right prompt. Maybe you don't want to spend more credits than you have to, or maybe you're trying to conserve credits so you can do as many spins as possible. Sometimes it's best to use a cheaper model to refine your prompt and make sure it's right before swapping over to a more expensive model for the best-quality results. A great way of doing that: go to your advanced settings, turn off auto model routing, and select the cheapest model in here, which is going to be the Stable Diffusion 3.5 model suite. This one has a really low credit cost and generates very quickly. The idea is that it's very easy to test different directions. So I could say: what does it look like to have a man eating chips next to a tiger? Then I can just load four images and run the thing, without caring too much about aspect ratios. I can use this as a very quick ideation tool, and once I have a prompt that I really like, I can go back through and use different models that might be a better fit. You can see I'm getting some outputs here. Again, lower quality, because they're meant to be fast rather than polished. So, it's a great way to just get stuff out of the tool quickly. All right, great.
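The credit math behind "ideate cheap, finish expensive" is worth making concrete. The per-image costs below are made-up placeholders (check your plan's actual pricing in Brand Studio); the point is only the shape of the comparison:

```python
# Hypothetical credit comparison for the ideation workflow above.
# Costs are assumed placeholders, not real Brand Studio pricing.

CHEAP_COST = 1    # e.g. a fast Stable Diffusion 3.5 draft (assumed)
PREMIUM_COST = 8  # e.g. a top-tier model render (assumed)

def workflow_cost(draft_rounds: int, drafts_per_round: int, final_images: int,
                  draft_cost: int = CHEAP_COST,
                  final_cost: int = PREMIUM_COST) -> int:
    """Total credits: iterate on a cheap model, then render finals on a premium one."""
    return draft_rounds * drafts_per_round * draft_cost + final_images * final_cost

# Five rounds of four cheap drafts to nail the prompt, then four premium finals...
mixed = workflow_cost(draft_rounds=5, drafts_per_round=4, final_images=4)
# ...versus doing all five rounds of four directly on the premium model.
premium_only = 5 * 4 * PREMIUM_COST
print(mixed, premium_only)  # 52 160
```

Under these assumed prices, prompt-hunting on the cheap model first costs roughly a third of iterating on the premium model, while the final renders are identical in quality.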

---
*Source: https://ekstraktznaniy.ru/video/46002*