David is the Head of AI Product at Figma and gave me an exclusive demo of Figma Make, Figma’s new AI prototyping tool. We had a great chat about how Make stands out from other tools and how it works under the hood, including a deep dive on evals.
This episode is brought to you by Merge — Merge gives SaaS companies like Ramp and Drata a single API to launch over 200 product integrations fast. Book a meeting via https://www.merge.dev/peteryang and get a $50 Amazon gift card when you attend.
Timestamps:
(00:00) Building the best design-to-prototype tool in the market
(03:17) Demo: From static image to interactive solar system
(06:05) 3 ways that Make stands out from other prototyping tools
(12:01) How Make actually works behind the scenes
(15:21) 4 types of evals to improve Make's AI prototypes
(17:40) How the "Great Bakeoff" transformed the product
(23:29) The biggest product challenges in building Make
(27:23) Why prototypes are now the gold standard for design
(35:07) How Figma learned from its past AI mistakes
(40:36) Demo: Drawing apps, games, and more with Make
Where to find David:
LinkedIn: https://www.linkedin.com/in/davidkossnick/
Website: https://www.figma.com/make/
Get the takeaways: https://creatoreconomy.so/p/figmas-make-ai-prototyping-tool-is-here
📌 Subscribe to this channel – more interviews coming soon!
Building the best design-to-prototype tool in the market
Figma Make is a prototyping tool that can start with a simple prompt, with a screenshot, or with mockups. We've seen a bunch of really cool cases, like teams building out their specific workflows for their marketing idea campaigns, or using public APIs to figure out the next train near them in a form factor they like most. And so we think we're going to be the best place for starting from designs to get something really high-fidelity that is usable and functional out of the gate. One of the alpha testers for Figma Make is my six-year-old son. He's made like 10 games in Figma Make. He comes home from school with sketches of ideas and he's like, "Can I give this picture to Make and start making the game?" And what was the most challenging part about building this product? Okay, welcome everyone. My guest today is David, the head of product for Figma AI. Figma just launched Make, its brand new AI prototyping tool. So I'm really excited to have David demo how to use Make, discuss what it took to build the product, and share his top lessons after building AI products at Figma, Coda, and Google for a decade. Welcome, David. Thanks, Peter. So great to see you. All right, man. Super exciting. I've been playing with Make, and it's a really amazing product for prototyping. Can you start by showing us how it works and walking through some examples? Sounds great. Figma Make is a prototyping tool that can start with a simple prompt, with a screenshot, or with mockups. It's designed initially for designers, PMs, and engineers to bring ideas to life, to start with concepts that come from Figma and elsewhere, and to get those into something you can play with and feel. So just to give you a quick run-through, we'll start with a text-based prompt here: a modern image gallery where you can tap around on different photos and transition. This will take a couple of minutes, and you'll see it's starting to reason about what the scope might include. So, here's how I'm going to plan to do it.
"Create a gallery component that displays images in a grid. These are the types of things I'll need." And then it describes the work it's about to go do. And as it starts generating code under the hood, you'll see that stream in here. As part of this, Figma built our own native code composer for the first time, complete with syntax highlighting, code autocomplete, an error console, and more. And you'll be able to go in at the end and actually edit that code directly. So here we go. That one was pretty fast. Let's see how it did. So, pretty decent. You have a sense of a starting point for this. That's awesome. Hop on over, look at the code. And actually, one cool feature, I'll call it the pointer here: you can click on a component and then hop over to the corresponding part in the code, and go back and forth to see how it works. I wonder where it's getting the images from. Is this coming from somewhere like Unsplash? Yeah. So there are a number of ways it can generate images. The most common one is via Unsplash. And we found that, for placeholder imagery especially, to get an idea across, that's a really helpful and very fast starting point.
Demo: From static image to interactive solar system
Awesome. Show some more demos, man. What else you got? Cool. So yeah, let's do another one, just starting from an image here. I'm going to take a picture of a solar system and say, make this interactive. This one's a bit more complex, so I'm just going to switch over, because it might take a little bit longer to do this one, to show what happened when I finished running it. So you can see here, again, it went ahead and reasoned through it, started generating, wrote the code, told me what it did, and you have this inline, immediately playable implementation. And so I can try clicking around here. I can see that the planets are rotating. And this is kind of amazing, especially for someone who may not have engineering skills, or even for someone who does, because wiring this all up could take a lot of time. I suddenly have a much better sense of how this is going to look and feel based on just the pure visual, simple prompt. Yeah, this is amazing. I can just teach my kids how a solar system works. It's great. Yeah. And of course... Oh, sorry. Oh, yeah. I was just saying, so that just came from a static image, right? That's right. Yeah, that was just uploading a PNG here. Okay, keep going. Yeah. And of course, you know, we are Figma. Product designers of digital products are our primary persona and scenario, and so of course we've done a lot of work to make that really shine. So if I hop over to a more normal Figma file, you can see here I have a whole bunch of components and layers. I'm just going to copy this thing, come back to a new Make, and paste this in. I'll say, make this interactive, including the dashboards. And again, I'm just going to hop over to what happened after it finished running. And what you can see is, again, I have a pretty functional prototype. Let's try clicking around. Achievements show and hide, all of these, edit profile.
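The wiring that gets saved here is mostly orbital math and state bookkeeping. A toy Python sketch, my own illustration rather than Figma Make's actual React output, of how planet positions on circular orbits can be computed per frame:

```python
import math

def orbit_position(radius: float, period_s: float, t: float) -> tuple[float, float]:
    """Return (x, y) of a body on a circular orbit of the given radius at time t."""
    angle = 2 * math.pi * (t / period_s)  # fraction of one full revolution
    return (radius * math.cos(angle), radius * math.sin(angle))

# Advance a tiny "solar system" to a single point in time.
planets = {"mercury": (50, 8.0), "earth": (120, 30.0)}  # orbit radius px, period s
frame = {name: orbit_position(r, p, t=7.5) for name, (r, p) in planets.items()}
```

Each animation frame just re-evaluates `orbit_position` with the current time; it is exactly this kind of glue code that the tool generates automatically.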
Yeah, that's actually pretty solid. That's amazing. I think you're underselling it, man. This is actually really amazing. So you actually copied the Figma frame, right? Or the artboard, is that what you did? That's right. So under the hood, when you bring things in from Figma, which you can also do over here, pulling things in from your team or from the community, we're not just bringing imagery. We're bringing all the rich structured data that Figma has: all the layers, all the metadata, all the exact details of styling and every configuration. And this really allows the translation from design to code to shine. And so we think we're going to be the best place for starting from designs to get something really high fidelity that is usable and functional out of the gate. Can you
3 ways that Make stands out from other prototyping tools
pull up the image again? This episode is brought to you by Merge. Product leaders cringe when they hear the word integration. They're not fun for you to scope, build, launch, or maintain. And integrations probably aren't what led you to product work in the first place. Luckily, the folks at Merge are obsessed with integrations. Their single API helps SaaS companies launch over 200 product integrations in weeks, not quarters. Think of Merge like Plaid, but for everything B2B SaaS. Organizations like Ramp, Drata, and Electric use Merge to access their customers' accounting data to reconcile bill payments, file storage data to create searchable databases in their product, or HRIS data to auto-provision access for their customers' employees. And yes, if you need AI-ready data for your SaaS product, then Merge is the fastest way to get it. So if you want to solve your company's integration dilemma once and for all, book a meeting at merge.dev/peteryang and receive a $50 Amazon gift card when you attend. Now, back to the episode. Real quick, yeah, dude, it looks exactly like the image, man. Yeah. I mean, look, to set expectations: it's AI. These are non-deterministic systems. You'll definitely have cases where it'll get confused or fall over, and we are really sweating the details on quality. But also, we have a lot of context to bring to this problem, a lot of context about the data set and the data sources, and so we're really excited to make it much easier to start from a design and bring something to life. So I think this is partially the answer already, but I have to ask: there are a lot of prototyping tools out there now with AI. So what are the top three benefits, the top three unique value props, for Make? Yeah, a couple. I mean, first off, we're just getting started here, so this is an early beta. We're really excited to get a lot of feedback, and we're really excited about this space.
But for where we've started, the first is being the best place to start from design. And so again, we know a lot from the Figma platform and from our other products about the context that you're bringing in, and we think it'll be really, really great to start from a mockup or a sketch and go to something that's playable here. I think the second one is that Figma is a platform where all of our products are multiplayer-first, and this is part of why Figma's been so successful. So I'd actually love to show this to you now. If you open up this file, I shared it with you. You can come on in here and make something with me. You'll enter the same file. My avatar is here in the top right. I had a chat. And yeah, you'll be able to see the changes update in real time as I make them. Let me... did you share it with me? Yeah, here, hold on one sec. Let me switch windows. Is there a chat in Riverside as well? Yes. Here, I'll put it in the Riverside chat also. Studio chat. Okay, let me find it. I never use the Riverside chat. Oh, there it is. Okay, let me open this. All right, I'm eagerly waiting for this to load. Okay, so I see your face up there. So one thing you can do is open up this pointing tool. And you'll notice, you know, we are Figma. We're known for direct visual control. You'll be able to go ahead and make changes. So I'd love for you to make some visual changes, and I should see them update in real time as well. Okay, let me... okay, I made it red. Okay, there we go. Nice. And I can prompt the AI on the left? Yes, prompt. So you can start chatting. You'll see your face show up there. And another cool thing is, let's go down to the dashboard here. You'll notice the hover state's a little bit weird, like why is there that kind of gray background? So we can go ahead and select this element. And you'll see this toolbar here.
And there's a part of this toolbar that lets you do a contextual prompt. So you'll see this says div, and div shows up in the prompt box. So I could say, make the hover state for this not change the background color, and instead animate the bars. And you'll see that prompt show up on your screen as well. And then the other really cool multiplayer feature is, if you select an element, you can also hop on over to the code view to see exactly that piece of code. Oh, looks like this one's getting rewritten right now. But once you're in the code view, it's also a multiplayer code editor. So you can go ahead and update that code live, and I'll see it updated on my end as well. Wow, dude. This should be a V1? You're making us look bad, man. This is supposed to be a V1 product, and it's pretty polished and pretty feature-rich already. You're very kind, Peter. Yeah, we're really excited. I think we're just getting started here. Let's see how it did on this. Oh, it updated the static bars. The hover bars grow slightly on hover state. Interesting. So it didn't quite get it. You know, you'll see. The interesting thing about when it does fail is you can often work with it to get to a place where it's better or closer. And of course, you can always go directly into the changes as well. But these things are not foolproof. And part of why we're in beta, too, is we're really eager for people's feedback. So you'll see thumbs up and thumbs down, and we're looking for lots more feedback on which specific example cases work and don't. We can talk more about our eval loop, but that's a really important part of our process on quality as well. Yeah. Let's talk a little bit
about building an AI product like this. You don't have to reveal the secret sauce, but how does it actually work behind the scenes at a high level? When I submit a prompt like "make this interactive," it sounds like there's a planning step, where you prompt it to plan out and make a spec first? Yeah, how does that work? Yeah. So, you know, we're trying to help explain some of what happens in the response from the AI itself. You'll see in the reasoning, the first thing it does is work through for itself what the scope should be. And then, as it comes to a decision there, it starts its execution. It gives you a readout of: all right, I'm going to get started on these things now. And then we've also invested a lot in code streaming. These things can take a while, a couple of minutes for it to generate all the code, so being able to understand what's happening even while it's going is really important to the user experience. And we've invested a lot in the latency and user experience around code streaming, to make it clear that, okay, it's working on one file and generating it in full, or no, actually the file already exists and it seems like it's just editing one part of it. It's making some decisions based on your starting point, which could be an existing codebase, the structured data of a Figma mockup you've brought into that prompt, or an image or whatnot, to decide what to do. Got it. Yeah, because it could take a few minutes, right? So showing that it's doing the work helps the user understand. Yeah, for sure. And do you have any... there are a bunch of vibe coding tips out there. Do you try to bake that stuff into the prompt itself?
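The plan, execute, and stream phases described here can be sketched as a loop over model output chunks. This is a minimal Python illustration with a stubbed generator standing in for the real streaming LLM call; the tag names and pipeline shape are my own assumptions for illustration, not Figma's actual protocol:

```python
from typing import Iterator

def fake_model(prompt: str) -> Iterator[str]:
    """Stand-in for a streaming LLM call: yields chunks as they would arrive."""
    yield "<plan>scope: grid gallery</plan>"
    yield "<code>"
    yield "function Gallery() {...}"
    yield "</code>"

def run_request(user_prompt: str) -> dict:
    """Consume the stream, separating the plan readout from the generated code."""
    plan, code, mode = "", "", None
    for chunk in fake_model(user_prompt):
        if chunk == "<code>":
            mode = "code"            # code generation phase begins
        elif chunk == "</code>":
            mode = None              # code generation phase ends
        elif chunk.startswith("<plan>"):
            plan = chunk.removeprefix("<plan>").removesuffix("</plan>")
        elif mode == "code":
            code += chunk            # stream code to the UI as it arrives
    return {"plan": plan, "code": code}
```

Because the loop processes chunks as they arrive, a UI built on it can show the plan readout, then the code streaming in, before the request finishes.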
Well, I guess one of them is to build a plan first, but do you have some tips, like, hey, use this framework, use this stuff? Yeah, it's funny. I feel like vibe coding has a version of that now, and prompting has had one for a while as well. In the more general prompting version, there have been memes like offering to tip the AI, or telling it to do it faster, or threatening it that if it doesn't do a good job, something bad's going to happen. There's a mini version of that happening in vibe coding as well. I think the reality of which of those are actually meaningful or helpful is moving really quickly. Models are constantly changing, and what's baked in at the product layer is changing a lot. We're doing a ton of iteration on our side with the goal of users not really having to think about it. What we find with most users is that they want to come in with a high-level idea, get some sketches, get some stuff to play with, get a sense of whether it's what they want, and then refine from there. Or they come in with a very specific idea, like a mockup, and they want to feel it and see what corners they haven't thought around and how that should influence the approach. So we don't want users to have to carry the burden of knowing the latest best practices or how to phrase things exactly. We want a system that's a bit more flexible and simple to understand. Okay. So, you know, we've talked about building AI products before. In practice, a lot of it is, like you said, iteration, maybe sitting in some notebook just editing the prompt, and a large part of that is obviously evals, right? Maybe you can give us some idea of the types of evals you have for this product. Yeah, evals have been a really interesting one for this project. We realized pretty early on that there were actually a couple of different axes of quality for users. And so we have two scores today, a design score and a functionality score, and they're somewhat orthogonal. You can have cases where it gets the functionality completely right and it looks totally wrong relative to what the user wanted, and there is still legitimate user value in that. And the other thing we learned is that even though we're targeting designers as the hero persona, there are different sub-personas and different use cases and scenarios where the design score versus the functionality score matters really differently. We've had a testing group, which you've been a part of (thank you for all your feedback), and it has been really interesting to see how many internal-tools cases there are where the functionality really matters, or how many games there are where the feel matters much more than necessarily all the nuances. So that was one early insight: there are two scores, and the bars for those scores vary a lot by persona and scenario. It's not like there's one number you have to hit everywhere. There are different numbers you have to hit on different axes for different demographics and targets. Okay. So you basically just have real people give the scores, like real Figma employees or beta testers? Yeah. We've done a couple of things to get the scores themselves.
So, you know, the project started initially with a prototype, and it was fairly basic, and we did a bunch of internal testing. The question was: how far away is quality from a point of viability? So we had a giant FigJam board. In the very first eval, we had the whole AI design team spend hours just going through and trying prompts and recording: input prompt, here's my file, here's the screenshot, here's the design score, here's the functionality score, and my notes. And it was a score of one to four for design and for functionality. Four being perfect, three being good, two being not that good, one being absolutely terrible. And from there, you know, we started building it. We felt like, okay, there's a path to making this viable. We expanded that same process to the whole company, and it was pretty
wild. I think it was the most aggressive, widespread, pervasive dogfooding effort at Figma. We called it the Great Bakeoff. We had in-person sessions, like 12 of them across time zones for all of our global employees, where people would meet up and jam live in group vibe coding sessions, and use this same kind of FigJam, again fairly early in the project, in a somewhat structured way: here are the exact inputs, here's what worked, here's what didn't. We didn't even have thumbs up and thumbs down built into the product at that point, so we were scrappily using FigJam to get feedback very quickly. And that was so helpful. That became the first thousand items in our eval set. These are the great cases people love; we should not lose these. Here are the most frustrating cases people want improvement on. We took those verbatims, and it was a very scrappy and very helpful process. And then from there, we do have a team of contractors we work with to scale up a bunch of the human judgment piece and get a faster feedback loop. That was really helpful for canonicalizing a bunch of these data sets before we even rolled out to more users. What are some examples of stuff in the eval set? Did it run right? That's one of them. Sorry, what are some examples of the eval criteria? Yeah. There are interesting cases. We've had cases where, say, "make me this dashboard," like I showed you: something would be visually right and nothing would work at all. You'd say, "make it interactive, add the hover states," and literally nothing happens. And you're like, it's nailing the design, but something is really broken in the functionality, and we're not sure why. Got it. So you'd have cases like that.
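The hand-scored process described above, a prompt in and a one-to-four design score, a one-to-four functionality score, and notes out, maps naturally onto a small eval dataset. A hedged Python sketch, with field names that are my own rather than Figma's:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class EvalRecord:
    prompt: str
    design_score: int         # 1 = absolutely terrible ... 4 = perfect
    functionality_score: int  # scored on the same 1-4 scale, independently
    notes: str = ""

def summarize(records: list[EvalRecord]) -> dict[str, float]:
    """Average each axis separately, since the two scores are orthogonal."""
    return {
        "design": mean(r.design_score for r in records),
        "functionality": mean(r.functionality_score for r in records),
    }

records = [
    EvalRecord("make this dashboard interactive", design_score=4,
               functionality_score=1, notes="looks right, nothing clickable"),
    EvalRecord("modern image gallery", design_score=3, functionality_score=4),
]
```

Keeping the two averages separate preserves the insight from the interview: a case can score a 4 on design and a 1 on functionality, and collapsing them into one number would hide that.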
And then you'd also have cases that were super interesting, where you give it a very ambiguous prompt and there are many different right answers with trade-offs between them. That was another interesting piece: having designers and PMs very involved in the eval set is really important. It is the product design process. When you have an ambiguous prompt like this, these are reasonable answers and these are not. There's a lot of human judgment involved. Like, "make me a microsite for my team meetup," and it's like, wow, there are a lot of ways that could go. Yeah. Unfortunately, people don't necessarily give AI specific instructions all the time. For sure, and there's an interesting question about what back and forth helps you understand user intent better as well. So you got the entire company rallied to dogfood this, but do you also have some sort of AI-as-a-judge or synthetic eval, or is it too complicated for this product? Yeah, great question. There are actually four types of evals. I probably should have started with this. One is deterministic evaluation. You know, we also have an AI feature that's been live for a while called AI text tools inside Figma Design. If you have a text node, you can ask AI to write stuff, or ask it to shorten text. And asking AI to shorten text is a deterministic capability: you can look at the output and see whether it shortened it. We've had bugs for a while where, if you run it on a sentence, it'll turn into a paragraph. That is a very measurable, very repeatable logical error. Similarly, there's no... Right. Yeah, exactly. And similarly for code generation: if you generate code that does not compile, that is a logical error.
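Deterministic checks like the two just mentioned need no human judgment at all. A minimal Python sketch, using Python's own `ast` parser as a stand-in for whatever compiler the generated code actually targets:

```python
import ast

def shortened(original: str, output: str) -> bool:
    """Deterministic pass/fail: did the AI actually make the text shorter?"""
    return len(output) < len(original)

def compiles(source: str) -> bool:
    """Deterministic pass/fail: does generated (Python) code at least parse?"""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False
```

Checks like these are the cheapest layer of an eval suite: they run on every generation, with no raters and no AI judge.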
So that's the easiest form of eval: is it correct or not. I think the second type is qualitative. There are humans on your team, as well as a scaled version of that with contractors. You know, before GenAI, I used to work at Google. Search has a 100-page public document describing how search raters should evaluate a search results page, what counts as good or bad quality, and how you should click on each page and look at different things about it. They have thousands, tens of thousands of people trained on this hundred-page book on how to evaluate things with good judgment. And I think every product team right now is developing their own version of that book: okay, if I'm going to evaluate stuff, I only have so much time, so how do we make that go faster? On the one hand, you have humans do it. On the other hand, you could have AI do it. So, is that 100-page book basically a really long prompt now for AI as a judge? I think for some problems that's viable, and yes, you can actually get AI as a judge to work pretty consistently and be fairly aligned with human judgment. For others it's harder. In our domain of design, there are really interesting cases where even designers will disagree on what the right output is. How can you expect an AI to consistently do a good job if a bunch of humans debate what a reasonable behavior is in that case? So I think it can be helpful for order-of-magnitude changes, like something really broke this time, but it's often too nuanced for judging whether something is actually right or going in exactly the right direction. It can't do taste, right? You have to get humans to do that. That's right. Yeah.
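One way to decide whether AI-as-judge is viable on a given axis is to measure how often it agrees with human raters before trusting it. A toy Python sketch of that sanity check, with made-up scores for illustration:

```python
def agreement_rate(human: list[int], judge: list[int], tolerance: int = 0) -> float:
    """Fraction of items where the AI judge is within `tolerance` of the human score."""
    assert len(human) == len(judge)
    hits = sum(abs(h - j) <= tolerance for h, j in zip(human, judge))
    return hits / len(human)

human_scores = [4, 3, 1, 2, 4]  # 1-4 design scores from human raters
judge_scores = [4, 2, 1, 3, 4]  # the same items scored by an LLM judge
```

If exact agreement is low but within-one agreement is high, the judge may still be useful for catching the order-of-magnitude regressions mentioned above, even if it cannot arbitrate taste.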
And then the final form of eval is usage. We're now live with Figma Make, and A/B tests are really helpful for understanding how users behave differently with different approaches. Got it. And what was the
most challenging part about building this product? I mean, you've probably been working on it for a while, given how polished it is. So yeah, there have been layers of challenge, I'd say. The first has been infrastructure. Figma Make came out of a project called Code Layers. We have a whole project called Sites, a new Sites product we're shipping at Config (I guess we just shipped), and Sites has been a long, big effort. It involves totally different rendering technology than Figma Design, and it involves converting Figma nodes into renderable, publishable things. There's a ton of latency and scalability work. So that's been a very long arc of bringing Figma primitives and code closer together. And out of that foundation came Code Layers, which is: well, what if we let you write React on canvas and publish it inside of Sites? What if you could chat with that, and convert designs into code in that context? And then, as we were working on that in a Sites context, we realized this is really good, and there are a lot of cases where you don't want to do this on just one part of a site, you'd like to do it on the whole thing. And that became Figma Make. So we pulled that part out of Sites and turned it into its own form factor and shape. It was able to benefit from a lot of long infrastructure investments, on Sites, on Code Layers, on agentic interaction with code, but in a very different form factor. So that was the first bucket: there's a lot of underlying infrastructure this is built on top of. I think the second challenging area is quality, speed, and cost. The model quality is a really hard hill-climbing grind, and there are a lot of interesting business trade-offs and product experience trade-offs on where you want to be in the quality, speed, and cost space.
And how do you move in a direction that meets user intent and expectations in the right way? Doing good evals, hill climbing well, working through all of those details, having an experiment log: all of that is really detailed work that takes time and a lot of nuance. And then I'd say the third challenge has been the level of non-determinism in AI. We had interesting cases where you'd give an image of a design to the agent and ask it to do something, and it would sometimes do a pretty good job and sometimes fail spectacularly, and it was hard to figure out why. We had one of these cases, and it turned out the image format we were passing to the model was wrong, and the model just couldn't read the image at all. And we were shocked that when it literally couldn't read the image, a bunch of the time it did a pretty good job guessing just based on the prompt, but sometimes it'd spit out something completely unrelated. And it was like, how could we know where the problem was? How do you diagnose that? So being able to have full visibility into all the steps matters. Like, oh, we accidentally truncated message history, so it doesn't know about something earlier when you thought it would. There are so many bugs and edge cases where it's really hard to tell why it's not quite right. Yeah, that's why I think you also try to keep the system as simple as possible, so that if something breaks, you can fix the prompt or the eval. Otherwise, you have too many steps, and it's very hard to find out what's going on. Yeah, for sure.
And, you know, people keep talking about how all the PMs have to become AI-powered now, and so on and so forth. I think your team, given you're building AI products, is probably the most AI-powered team. So how do you actually build this product? I'm assuming it's not a waterfall development process. Yeah, I think you and I have chatted about this in the past, but I think a prototype is worth a thousand bucks. And the great news
Why prototypes are now the gold standard for design
for the industry is that the cost of making prototypes is going down so much: the speed, the expertise, the team size. And so it's been fun, working on a prototyping tool, to be dogfooding Figma Make itself. There have been a whole bunch of ideas for Figma Make that PMs and designers have just gone ahead and prototyped in Figma Make, with no engineer in the loop, and gotten a sense of how they would play and feel. It's like, oh, wow, yeah, that's really helpful. At the design level, I'd say prototypes have been a big thing for a while. Figma's had a concept of noodles and being able to link things, but it's more time-consuming, and it's not quite as high fidelity. And I think we've seen, in the last year, prototypes move from a very helpful thing in a lot of projects to kind of the new gold standard as the design artifact of choice. And I think in the next year we're going to see that change to being the new gold standard for the PM artifact of choice. Your PRD is a good starting point, but why not put your whole PRD in a prompt box and get out something you can play with, something that makes you realize a bunch of your PRD would actually feel kind of awkward and not right? It gives a directional sense to a designer and engineer: this is the shape of the thing, these parts we know are a problem and I need your help figuring out what we should do there, and these parts definitely won't scale, so how would we actually build it? Getting further into the problem and the questions, I think that's a huge industry trend, and it's really exciting, and also personally empowering for me as a product person. And then I'd say the other reason prototypes are so helpful for AI teams in particular is that you don't know if it's going to work until you get it hooked up.
You can do model evals in isolation, but you don't really understand what the expectations are in a flow until you feel it, or users can feel it in some way. And so I sometimes talk to my team about a maze, the maze of choice in AI. You have an idea for a feature, and it might be that you can go straight through the maze and ship, and the whole thing just works and the model supports it. That happens maybe 1% of the time, and the prototype will prove it. But it also might be the case that you can't even get into the maze: it's too slow, it's too expensive, it's too hard a problem, your idea won't work on today's models, and you've got to leave it on the shelf and come back to it. And then there are also cases where you get into the maze and it sort of works in some ways and not in others. This is a fairly common case, and you have a choice to make: do I go out of the maze and wait? Do I add product constraints or narrow the scope and make it easier, so it can work? Or do I take out a sledgehammer in the middle of the maze and try to bash through the walls: fine-tune, build custom models, hard-solve the hard data problem, and get quality to a point where it's actually good? And I think a lot of the art of being a modern-day AI PM is figuring out when to take which turns in the maze: is this actually solvable, or should we change the constraints in some way that simplifies the product and delivers most of the value to users? Yeah, you have to be comfortable with getting all the way to the dogfood stage and finding the model doesn't work properly, so you have to hold off on the launch, or that feature, for a quarter or something until a new model comes out. Fully.
And so how can you do that before you staff engineers on the project, before you make it a full thing? You move prototyping earlier. You get feedback from users on what the quality bar is in this experience. Maybe it works out of the gate, maybe it doesn't, and maybe that frames what's left, what the big hard parts of the project are. So let's say you want to add a new feature to Make. Do designers just prototype it, or do they go think about the design first? That's the key question: which one do they go to? Yeah, I've seen all of the above. I think it sort of depends. We've definitely had cases where, starting with a good existing Make, you can kind of prompt your way to feeling out how a few changes would feel, and be like, okay, yeah, this would feel good. There are also cases where it's hard to explain. And it's funny, to your point earlier in this conversation: sending a one-sentence description of an ambiguous thing to an AI, it's kind of incredible that it can come up with anything good. Imagine sending a software engineer a Slack DM being like, hey, build me this thing. They'd be like, what are you talking about? Why are we doing this? What's the goal? What are the constraints? They'd ask for a whole bunch of thoughts on what we're actually trying to do. I think it's really powerful that you can quickly get to a directional starting point, but specs exist for a reason, and mocks exist for a reason. Being able to be really specific is what Figma is great at. And so being able to go and say, "No, this is exactly how I want it to look, ladder that into this framework, let me try it there," is also really powerful. Yeah.
I imagine you guys probably just have meetings, have big jams, and look at the Makes and talk about them instead of looking at a PRD or an email. It just makes things more fun, right? Who wants to look at a PRD? No one wants to read my PRDs. Yeah. Looking at something from a user-facing point of view is way more fun, and probably better than looking at a doc. Yeah, for sure. And I think it also forces more product thinking. How many PMs have been in a situation where you just didn't think about a bunch of edge cases or other scenarios until an engineer is staffed and working on it, and they're like, "But how should this work on mobile?" And you're like, "Oh, I forgot to think about mobile." When you get to a prototype, you see the things you weren't thinking about that you should have been thinking about a little bit earlier. Yeah. Very rarely do I have a perfect PRD, or even a perfect design, in one go. You have to play with it and make changes. Oh, sorry. Go for it. No, go ahead. Yeah, I was going to say, I was curious: you've been playing with the product a bit, so what were your reactions to it? What did you find it useful, or less useful, for? So I've been playing with all the prototyping tools, and what really impressed me about Make is that it did take a little bit longer than some other tools to generate the prototype, but the prototype was much more beautiful and feature-rich than I expected. So once it generates, it's a very positive magic moment. Yeah.
And because we're talking right now, before the launch, I haven't actually shared it with my designer and other people, but I think the collaboration features you showed me are pretty amazing. I joke that when my designer shares a design with me, I'm like, "Hey, what about this variation? What about that variation?" And now, instead of having to ask her to make a bunch of designs around this stuff, I can just make it myself, for better or for worse, show it to her, and then we can show some users, have a debate, and figure out what to do. It just speeds up the iteration loop a lot. Yeah, that totally resonates. I joke that I'm like an IC 0.1 designer now. Yeah. So let's kind of wrap up by talking about Figma's journey into AI. I think you joined Figma like two years ago, right? About a year ago, actually. Okay, so let's go back to a year
ago. At the previous Config, Figma launched some AI tools, and one of them got a mixed reaction: the text-to-design tool. What did you learn from that experience that you applied to building Make? Yeah, for sure. We launched a feature called Make Designs, and it had a pretty mixed reaction for a bunch of reasons. For me there were two big learnings. One is: be clear on how your system works. With Make Designs, we also released, at the same time, a new data training policy where we could train on user data with their consent. We were very transparent about it and gave users lots of control, but I think people sometimes got confused and thought we had actually trained on their data already, because we didn't say how the feature worked. We didn't say what model was powering it or anything. And so we've been much more direct with Make: you'll see "powered by Claude 3.7 Sonnet" in the prompt box. We think it's totally understandable to want to know how that model was trained and what its biases are, and I think Anthropic has done a fantastic job publishing a lot of detail on the kinds of things customers really want to know. The other big learning was: be clear on what something is for. With Make Designs, some designers perceived that our intent was to replace their job. Even the name, Make Designs, suggests we will make designs instead of a designer making designs. And it was funny, because our internal name for it was First Draft, which we ended up calling it later publicly, and the intent was very much just to save you some time in getting to the starting line. But the message was confusing in the market, and people didn't quite get it. It came across as: we're going to do everything designers do.
And it was like, no, we love designers. We're a tool for designers. We want to save designers time, give you superpowers, empower you. So with this we've really focused on prototyping and starting with mocks, extending what you're able to do beyond your current toolset, and trying to be really clear in our message to designers: we love you, we are you, we want to give you more powers and make you able to take your vision much further, not be limited by your current tools. I love that. Yeah. Even with the prototyping stuff, right, this stuff looks beautiful, but will my engineering manager let me check this stuff into code? I don't know. It's more to get feedback from users, and then maybe you build it properly afterwards. Yeah, for sure. It's more like a co-pilot. Yeah. Cool. And any closing words of advice now that you've been on this journey? I'm sure you feel very proud of your team and yourself, and I'm very genuine when I say it's a great product. It's awesome. What are some closing words of advice for people trying to build similar things? Words of advice for people trying to build other AI products, or trying to get into this stuff? Yeah. I mean, it's an incredible time to work in software and to be a maker. As a company, our mission is to make it much easier to go from idea to product, and it's been personally inspiring to see just how wide a range of things people are making in Figma Make, as well as what people are making in the market with new AI tools.
As an example, one of the alpha testers for Figma Make is my six-year-old son. He started playing iPad games on the weekend, so he started vibe coding with me, and he's made like 10 games in Figma Make, as a six-year-old. I'm like, "Oh my god." I didn't start playing video games till I was 12, and making them till I was like 18. It's such a different generation now, and it's so perspective-changing for him too. He comes home from school with sketches of ideas and he's like, "Can I give this picture to Make and start making the game?" Yeah. And so I think my biggest advice to makers in general, AI teams, professionals, hobbyists, students, is: just start making. You're going to learn by doing. You're going to build empathy. Build for yourself and your friends; you are the user, you have empathy, and it's easy to understand whether it's working or not. Start there, get to something that's playable, and follow that direction. Yeah. And I think one of the most exciting parts as a builder is putting your product in the market. Make has to be best-in-class at going from Figma designs to prototypes, right, which we talked about, but it'll be interesting to see how people use this stuff completely outside of that use case, like your kid making games. Who knows, man. Maybe that'll be the primary use case. Yeah, if we have a minute, actually, I collected a few of my favorite examples that I've seen people make. They'd be fun to run through. Yeah. So, very much to your point, we optimized for professional designers starting from
designs, but it has been amazing to see just how many different personas and scenarios this works for. Here was a case where someone made a very simple spray-painting tool in like five minutes. Imagine making software to make software, making a creative tool yourself. It's pretty incredible. Another one, this is one of my teammates actually, who is a gamer and made a whole 3D first-person shooter. Let me refresh it real quick. The Arcane Realm Awakes. Oh man. Wow. Yeah, this is a full-on 3D first-person shooter. And we did not optimize for this use case or go out of our way to make game creation and 3D awesome, but it's pretty amazing how much you can get. Similarly, internal tools: we've seen a bunch of really cool cases, like teams building out their specific workflows for their marketing idea campaigns. Personal journaling tools configured exactly the way you want. Some artists just playing with vector fields and graphics, like, what is this vibe? People using public APIs to figure out the next train near them, in the form factor they like most. And then, yeah, this was my son's game: feeding cats brunch. Nice. You try to keep them full. Yeah. There you go. Wow. This is great stuff, man. This is awesome. It must bring so much joy as the PM on this to see all the different use cases. Yeah. I'm standing on the shoulders of giants, with an awesome team I feel incredibly privileged to work with, across Figma itself, leveraging infrastructure from Figma to Sites to code layers, and an incredible group of makers who've been extremely passionate, hardworking, and very creative on this project. So, where can people find this feature?
Let's travel forward in time to when this thing's launched. Where can people find this feature? Yeah. So Figma Make is rolling out over the coming weeks to paying customers, and you'll find it as a new file type in your Figma file browser. Awesome, David. Well, thanks so much for your time, man. Let's all go out there and make stuff, thanks to your product. Very exciting. Thank you, Peter.