A Founder's Playbook for Shipping 10x Faster with AI | Yana Welinder

Peter Yang · Dec 14, 2025 · 3,774 views · 74 likes · updated Feb 18, 2026
Video description
Yana is Head of AI at Amplitude and a good friend. We had some real talk about how to stay scrappy inside a big company, including how to avoid decision by committee and endless internal debates. Yana also demoed her favorite AI workflows to triage her emails and aggregate customer feedback. I think you’ll love our banter 😅

Yana and I talked about:
(00:00) "By the time you've debated 2 ideas, I've already shipped 10"
(03:21) Think "how would I run this company?" not "what's my job?"
(05:22) Why we banned decisions by committee
(08:25) Cross-functional now means doing the work, not coordinating it
(11:22) Why frustrated users can become your biggest advocates
(15:19) Live demo: Using ChatGPT Atlas to triage emails and defuse emotional ones
(24:38) AI is still terrible at prompting (and what you can do about it)
(28:48) Live demo: Combining qualitative feedback with quantitative data
(37:38) Why AI analytics is lagging (and what needs to change)

Thanks to our sponsors:
Optimizely: The agent orchestration platform for marketing https://www.optimizely.com/ai/

Get the takeaways: https://creatoreconomy.so/p/a-founders-playbook-for-shipping-10x-faster-with-ai-yana-welinder

Where to find Yana:
X: https://x.com/yanatweets
Website: https://amplitude.com/ai-feedback?utm_campaign=fy25q4-ai-feedback-launch&utm_source=newsletter&utm_medium=organic-social&utm_content=creator-economy

📌 Subscribe to this channel – more interviews coming soon!

Table of contents (9 segments)

  1. 0:00 "By the time you've debated 2 ideas, I've already shipped 10" (706 words)
  2. 3:21 Think "how would I run this company?" not "what's my job?" (434 words)
  3. 5:22 Why we banned decisions by committee (646 words)
  4. 8:25 Cross-functional now means doing the work, not coordinating it (671 words)
  5. 11:22 Why frustrated users can become your biggest advocates (852 words)
  6. 15:19 Live demo: Using ChatGPT Atlas to triage emails and defuse emotional ones (1959 words)
  7. 24:38 AI is still terrible at prompting (and what you can do about it) (849 words)
  8. 28:48 Live demo: Combining qualitative feedback with quantitative data (1706 words)
  9. 37:38 Why AI analytics is lagging (and what needs to change) (863 words)
0:00

"By the time you've debated 2 ideas, I've already shipped 10"

There's this fear that, oh, if I ship something that's too rough, people are just going to make up their minds and then not want to use the product later when it gets better. The only way that would happen is if you're super slow. Whereas if you really follow up with the user immediately and they're excited to try it again, then you don't really have that problem. — Anything that trades off speed needs to be really carefully thought through. — By the time folks are done debating two good options, I will probably have executed on 10. "Do I have any emails I need to respond to today?" And it will review my mailbox and identify anything that's urgent to respond to today. I've just gotten like 100% more effective in my work because I now have this. — Okay, welcome everyone. I'm really excited to have back on the podcast Yana, my friend and now Head of AI at Amplitude. I think Yana is a top 1% ChatGPT power user, so I'm really excited to talk to her about how she uses AI for work, and finally about what product leaders can learn from founders, since she's worn both hats. So welcome, Yana. — Thanks so much for having me. Super excited to chat. — Yeah, why don't we start with the most recent topic. Over the weekend I tweeted something about how Cursor scaled to a billion-dollar valuation without any PMs, and it became a big viral post about how PMs are not valuable anymore. So you've been both a founder and now a product leader at a large company. What is different about these two roles, and what can product leaders learn from founders? — As a product leader generally, which is what I was before starting Kraftful, you do have to hone in on your product sense, have good product strategy, manage PMs, and all that. But it's a much more isolated role, I would say, within the organization. — Mhm. — Whereas as a founder, obviously it's your baby, right?
Like Franchesci talks about, we're the biological parents of the company. So it's just a completely different role, where you have intuition around what's going to be good for the company in a completely different way. Not just from a product perspective, just generally, in every possible way. And coming in now into another organization where I am not the biological parent of this company, I am still brought in to be as much of a founder as I can be. Whenever larger companies acquire startups, they do it for a reason, and in our case we were acquired for the product, but we were also acquired for the team. So in some sense my task is to try to maintain as much of my founder perspective as possible. And I'll say, after being a founder, it's very easy to do that; it's actually really hard to try to fit into an organization and have any other kind of role. So you come in and you can't help thinking, how would I want this company to run? If this was my company, how would I want it to run? And then you just try to act on that. — Okay. So there's a lot of talk about how ICs have to wear multiple hats. You can't just be a designer or a PM anymore; you've got to learn how to prototype, do a little bit of everything. So I guess even at the product leader or exec level, you should also — well, probably even more, right? You should probably be aware of what's going on with marketing and engineering and everything else. — Absolutely. I think that's actually the interesting thing about product:
3:21

Think "how would I run this company?" not "what's my job?"

at any stage in product, whether you're a product leader or a PM IC, you really have a super cross-functional role, even though as PMs we often, at least historically, have been very much "here's what defines my role at my organization," because PM at every organization is different. Now you absolutely need to be super cross-functional. And "cross-functional" makes it sound like you're coordinating with other people, whereas actually you need to do that work yourself: you have to design, you have to do user research, you have to try to write code as much as possible, particularly now that there are so many tools where you can ship code. So I think it has become pretty inevitable to be all the things. And I don't think this is a bad thing; it actually gets everyone moving much faster, and it makes sure people feel much more ownership, because you're never in the mode of "this is my role, and everything else is someone else's job." — Yeah, that is the key, because I was going to follow up with you on this cross-functional thing. A lot of PMs at larger companies feel like a glorified cross-functional secretary, trying to align like 10 different stakeholders, trying to align the leadership, and then they have all these internal debates and document writing, trying to get everyone to agree on the same decision on a path forward. — Yeah. — I don't think that's what you're talking about, right? — No. Absolutely not. I'm definitely not talking about that. I think I had forgotten just how much of that there is, and now, being back at a bigger company, I realize I am most certainly not the debate girl, and I'm not going to be writing any docs to coordinate stakeholders.
And I think that, you know, by the time folks are done debating two good options, I will probably have executed on 10. And that's how founders move; every PM should really do that, given how fast AI moves. — But how do you get the freedom or the agency to just ship stuff? — You know, actually, now at Amplitude,
5:22

Why we banned decisions by committee

Spenser has recently banned decisions by committee, specifically to enable PMs and engineers to be owners and to move fast and ship fast. We've been able to ship much faster as a result, particularly with the AI products, but really with any product. And I think it's important that leaders enable their teams to move fast and have that ownership. It's kind of hard at the IC level to say, "actually, my company is now going to be AI native — and I'm going to be AI native, and I'm not going to talk to anyone, I'm just going to go and ship." That is of course incredibly difficult. You need support to be able to do that. So it does need to come from the top, and if it's not coming from the founder CEO, it's definitely something product leaders need to advocate for, so their teams can ship at a pace that's relevant today and don't become outdated. — What led him to ban decisions by committee? Was it moving too slow, or... — Yeah, it definitely was a desire to be AI native, and that's why he acquired essentially multiple startups at the same time. The idea was to bring in more AI talent and folks with experience moving at that speed, and then get the whole organization to move the same way. So it was a very big strategic change that was just one piece of the bigger puzzle. — You know, Amplitude is a very successful company, right? It's worth billions of dollars. But I bet Spenser probably looks back fondly on the early days, when he could just ship stuff and move fast, and he probably just wants to get the company back to that state. — I think that's part of the story.
I think the other piece is that he's very cognizant of how the world is changing and which companies are doing well and which are not. There are bigger companies, including public companies, that are adopting an AI-native approach and really changing how their teams move and ship. It just looks different from what public companies were doing two years ago. And I think it's been very strategic, making sure that Amplitude is in that boat, as opposed to among the companies that will need to be catching up now. — I mean, we keep saying AI native, but it's really a culture thing, right? This new breed of companies, like Cursor and OpenAI and Anthropic, are basically trying to keep headcount as low as possible and just empower everyone to work with AI, figure stuff out, and ship, because if you don't ship fast in these companies, the company is going to die. — Exactly. — So you kind of have to do it. Yeah. — There are no guarantees right now, right? You have to move as fast as AI startups are moving, because nothing is taken for granted. — Got it. So then, you probably have a bunch of PMs or other folks reporting to you. How do you encourage them to move fast, maybe make some mistakes, and not feel bad about it?
8:25

Cross-functional now means doing the work, not coordinating it

— I think the best way to encourage folks to move fast is to lead by example and show: okay, here's a decision we need to make. How would we make this decision in a startup versus at a big company? I can primarily show how I would approach it. I've done a lot of that recently: okay, we have this launch, how would I do this launch? And I'm going to do this launch the way I would do it, so that other people can see it can be done a certain way, and it's going to be faster and better, and it will leverage AI in like 90% more places than you would in a bigger company. Folks get to see that, and they think, okay, next time we're doing some other launch, we can do that too. So it's a lot of showing instead of trying to win an argument, because otherwise the default is to debate things, right? And there's no time to debate things. So the only way you can do it is to show that something works, and then get people excited about doing it too. — Yeah, that's a good point. And I also feel like there should be a higher tolerance for failure, right? Because if you move fast, some things are not going to work out, or there will be some mistakes, and then people who are used to process say, "hey, I told you so, let's add five more steps to this process." But anything that trades off speed needs to be really carefully thought through, basically. — I completely agree. One thing that's really helpful to show, particularly around failure, is that oftentimes when you fail, it's an opportunity to do even better, particularly when you fail with customers and users and you really act fast on the failure.
So one thing is, if you ship something and there are some rough edges, you can work them out with the users quickly as they're reporting them. The reaction of a user who had some initial frustration, reported it to you, and had it fixed within 15 minutes is going to be so much better than that of a user who was simply happy it shipped right the first time. The impression is, "I came in, I said I needed this to be better, and the company actually went and did that, and they followed up with me." Those users tend to be your strongest allies. So that's another way of showing that failing is fine, as long as you're constantly readjusting and acting fast. — Yeah. If you're co-creating the product with your users, listening to them, and fixing all their bugs and feedback, they're probably even stronger allies than if you'd shipped a perfect product right off the bat. — Yes. — I don't know why more companies don't do this: just share all the stuff with users along the way, and ship to wider concentric circles as the product gets better. Even OpenAI does this, right? They're super successful, and the features they launch are pretty MVP, and then they iterate and get better along the way. — Yeah. And I think that the
11:22

Why frustrated users can become your biggest advocates

thing is, there's this fear that, oh, if I ship something that's too rough and ready, people are just going to make up their minds and then not want to use the product later when it gets better. And I think the key is to make sure you don't lose their interest along the way. The only way that would happen is if you're super slow, whereas if you really follow up with the user immediately, make sure they get an update, and they're excited to try it again, then you don't really have that problem. And that's what OpenAI has done well, right? They ship things, and if something doesn't work the way you want, the fast follows happen the next day, right? Or even the same day. That's really important, and they're really vocal about it and make sure everyone knows. That's not how companies used to adjust for failure, so the expectation is just different. — Yeah, that's a good point. It's like, if you sell me something and it's crappy, I'm like, "hey, this sucks." — Yeah. — But then you actually listen to me and make it better. Then I'm like, "Oh, okay. She actually cares." So I'll stick around a little bit longer. — Yeah. In particular, if I move fast, then you understand that the reason it sucked a little bit to begin with was because I was moving fast. And you start appreciating that: "Okay, this is cool. It's moving very fast. Imagine how good it's going to be in a year if it keeps moving like this," right? It just sets a very different tone and expectation around everything, versus slow-moving things that just polish toward perfection. By the time they're done polishing, it's not even the thing you wanted. — You ship something and people react to it, versus you just wait for a press release or something.
It's like — Yeah. Yeah. — Yeah. You've got to be active. I think the issue is that if you're optimizing for avoiding failure at any cost, you're really optimizing for avoiding any kind of discoverability or adoption, because ultimately, if you polish for too long and you're too careful, that's not how you win. It's really hard to win like that. — Yeah. I mean, in defense of polishing stuff: I feel like you can roll out a really crappy product to 10 users, get their feedback, polish a little more, roll it out to 100, and slowly expand. You don't have to roll out a super crappy product to all the users right away. — No, no. Sorry, I should have been clearer. I think you should definitely have the product be as polished as you can at the speed you want to be moving at, but you can't obsess over it being absolutely perfect before you roll it out to any users. That's the problem. — I think there's a difference between polishing with real customers, even just 10 customers, versus polishing internally through debates, trying to get everyone aligned, and ending up with a super compromised, crappy product. — Yes. Exactly. — Yeah. And that's such a great point, because there is this feeling internally that "oh, we're making progress, we're aligning all of these people." — But really it's just a bunch of people who love being right, so they're all trying to be right, and they get some amount of satisfaction over how much they were right in that conversation, but there's actually zero progress. — Yeah. Because you never know. You're not the customer. — Yeah.
This episode is brought to you by Optimizely. The problem in marketing usually isn't a lack of ideas; it's a lack of time. If your to-do list keeps getting away from you, take a look at Optimizely Opal, an AI agent platform built specifically for marketing. With Opal, you can use AI agents for SEO and GEO recommendations, A/B testing, website analysis, and much more. Opal knows your brand inside and out and plugs into your existing tools and data systems, so you can save time on labor-intensive feature testing, reporting, and more. See what it can take off your plate at optimizely.com/ai. Now back to our episode.
15:19

Live demo: Using ChatGPT Atlas to triage emails and defuse emotional ones

— You're an OpenAI and ChatGPT power user. So let's talk about how you use this stuff to do your job. — I do a lot of AI prototyping, which isn't particularly new; lots of folks do that. I will say that what may be a little unique in how I do it is that usually when I'm doing an AI prototype, other than when I'm doing it just for myself to see what something would look like, I'm doing it to show it to customers. I'll often have a prototype, jump on a customer call, get lots of feedback from that call, and then immediately take that feedback, iterate on the prototype, and incorporate all the feedback before my next call. Then on the next customer call, they're looking at a completely different prototype that's been adjusted based on the first customer, and I keep having those iteration cycles. That's something that was completely impossible before. You would always have just the one prototype, and you would show that prototype to 10 customers, and you would never know whether what the first customer told you actually resonates with other people. Then you have to reconcile all the feedback you've been getting, and it's really hard to know whether it's actually impactful. Now you can move much faster with those iteration cycles. So that's been a really good one for me. Another has been to use Sora a lot for marketing collateral. We just did the launch of AI feedback, and we ended up using a lot of Sora videos, and also non-Sora videos.
We had actual full-on recorded demos of me showing the product, which I've done a lot in the past and have always had in launches, and then we had these really fun Sora videos, and it was really interesting to see how the engagement on social differed between those two formats. For any kind of video on social (I'm sure you've looked at this too), when you have a video snippet, very few people watch it through to the end; the drop-off is kind of crazy on Twitter. But for the Sora videos, it was actually around 50%, sometimes above 50%, watching the whole video through to the end, which was incredible to see. I've never seen data like that before. — I actually had someone comment on Twitter: "This is an incredible post. I came for the video and stayed for the copy." And I was like, what was it about the video? Right at that time I was looking at the analytics, thinking, that's really fascinating, I'm so curious why this is different. And he said, well, I kept watching the video, thinking: is this AI? Is it not AI? Definitely AI. Actually, maybe not AI. He kept having that debate in his head about what he was watching. I just thought it was a really interesting, completely different way of engaging, of being hooked. — What kind of Sora videos do you make for the product? It's not a product demo, right? It's people using the product, or... — No, they're just funny videos about related topics. This particular one was a video of me at a cemetery, leaving flowers on a grave for NPS. It was a stone that said something like "RIP NPS, survived by real user feedback."
And it looks very realistic, of course, because it's a Sora video. And then I had the whole copy on how NPS never worked and why user feedback is a much better way of learning. — Yeah. — Maybe I should make some Sora videos for memes like that. That's good. Some PM memes. — So that was a really good one. And then the thing I use every day is the ChatGPT Atlas browser. I use that for writing everything, summarizing, interacting with my mailbox. I use agent mode quite a bit. So lots and lots of different use cases, where I feel like I've gotten like 100% more effective in my work because I now have this. — Maybe you can show some of that, because, okay, to be fair, I only tried Atlas for like 10 minutes, but when I tried it, I was like, what? This is just prompting ChatGPT in the browser URL bar. So maybe you can show me some cool use cases. — So one thing is, I may just do something as simple as: "Do I have any emails I need to respond to today?" — Mhm. — And it will review my mailbox and identify anything that's urgent to respond to today. So yes, it triages: it pulls out what needs to be responded to, nice-to-haves, and FYI emails that don't really need a response. And then I can go to one of my emails (let's pull up something that's clearly not something I'd necessarily want to respond to, but let's do it just for the sake of it) and say, "respond to this email." It politely rejects. — Okay, I see. — Awesome. No em dashes. — Yeah, maybe. Got it. — Yeah. And so I think this is great. It will write my email.
I could do that, and I can also do this with agent mode at scale with a bunch of emails: have it go through and write responses, and then I can review them later and send them off. But where I've found this really helpful is emails that are super emotional, things I don't actually want to read because it's going to be annoying; someone got too emotional for some reason in a way that's not appropriate. Then I can have ChatGPT summarize the email in three bullets so I know the substance of it, and I don't actually have to read someone overreacting. And then I can respond to the content, and ask ChatGPT to politely respond. It will usually include things like "I hear you" or "I really appreciate your perspective." It will do the things you need to do without the emotional overload, which is really great. — Interesting. You get those kinds of emails? Is it when you move too fast and ship something and somebody's like, "what? You didn't check this box"? Is that what happens? — We had a lot of customers who really wanted Kraftful to come out as quickly as possible. So that was a good use case, where it was just, I can't handle everyone's emotions right now, I'm trying to ship this product as quickly as possible, but I do want to engage with everyone and feel like I'm really present. — Got it. — So that's a good example. — Got it. Okay, that makes sense. And I think you also use agent mode to unsubscribe from spam and stuff like that? I never try that stuff. — Yeah, exactly. You can do that. And one of my favorite things is to interact with things inline.
So once you have written something, you can ask: is this clear and well written? — If I actually did want to write something myself, which I sometimes do, then we can do that, and we can update or replace the copy with whatever it produces. — There are a lot of different ways. — It's funny, whenever I get my guests to do these demos, they always try to fix their spelling mistakes, but I don't think ChatGPT cares about your spelling mistakes. It doesn't matter; it understands through all the spelling. — That's true. Yes. And the thing is, I bet this is true for all of your guests, because it's certainly true for me: folks who use AI a lot have a lot of spelling mistakes, because the more you use AI, the less good you become at spelling. — Yeah. — So I don't normally correct my spelling when I'm chatting with ChatGPT, but on a podcast I feel like I actually have to correct myself. — Yeah. I kind of worry about mine, because I think I'm a pretty good writer overall, but I've gotten super lazy. I just dictate to AI, like, "hey, can you cut this copy by 20%?" And if you asked me to write something from scratch again, I don't know if I could do it or not. — Absolutely. The other piece I use a lot is voice mode, and I just dictate things. I have an eight-month-old, so whenever I'm with her, this is just such a handy thing; you can get through so many things that I otherwise wouldn't be able to, because now I'm hands-free. But yeah, absolutely, as a result I cannot type and I cannot spell. — There are so many things that, like — I'm outsourcing all of that to AI. — The only thing I can do is order my little ChatGPT around, like, "hey, do this and do that." — Yes. — Yeah. That's great.
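Yana's triage flow ("do I have any emails I need to respond to today?", plus a three-bullet summary of emotional ones) can be sketched outside Atlas with any LLM API. Everything below is illustrative, not Atlas's or Amplitude's actual implementation: the prompt, the JSON schema, and the bucket names are assumptions, and the live model call is replaced by a canned response so only the deterministic triage logic is shown.

```python
import json

# Hypothetical system prompt you might send to an LLM along with the inbox.
TRIAGE_PROMPT = """For each email, return a JSON array of objects:
{"id": ..., "bucket": "urgent" | "nice_to_have" | "fyi", "summary": "<= 3 bullets"}
Urgent = needs a reply today. FYI = no reply needed."""

def triage(model_response: str) -> dict:
    """Group email ids by the bucket the model assigned.

    `model_response` is the raw JSON the model returned; in a real
    workflow it would come from an API call using TRIAGE_PROMPT.
    """
    buckets = {"urgent": [], "nice_to_have": [], "fyi": []}
    for item in json.loads(model_response):
        buckets[item["bucket"]].append(item["id"])
    return buckets

# A canned response stands in for the live model call:
fake_response = json.dumps([
    {"id": "e1", "bucket": "urgent", "summary": "Customer blocked on launch"},
    {"id": "e2", "bucket": "fyi", "summary": "Weekly metrics digest"},
    {"id": "e3", "bucket": "urgent", "summary": "Contract needs signature today"},
])
print(triage(fake_response))  # e1 and e3 land in "urgent"
```

The useful property of this shape is that the model only classifies and summarizes; which emails you actually answer, and in what order, stays a plain, reviewable data structure.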
Okay, let's talk about this. We all love AI, but what is something you still do manually, that AI is not good at yet? — Yeah, great question. I think one thing that, to me, I've seen show up in a few
24:38

AI is still terrible at prompting (and what you can do about it)

different ways, is that prompt engineering and writing evals is not something that AI can do well at all yet. — And I thought that was kind of interesting. There's some controversy around this, which I didn't expect, because every time a new model comes out, I try to have it do all the things, including prompt engineering, and I'm always like, okay, not this one, not this one, maybe the next one. And then when GPT-5 came out and I was reading the OpenAI prompt engineering guide for GPT-5, they do have a suggestion in there that you should use GPT-5 to refine your prompts. — And maybe that was in the eval guide; it was in one of the guides. So I ended up trying it again, and I found that the prompt I got was actually workable. Finally it could do some things; it could actually write prompts, but it was still a step worse than human-written prompts. I had this prompt that I used specifically for one step in our analysis pipeline, where we use GPT-4.1, a non-reasoning model, for speed, because we just can't have it go off and start reasoning. And I could get something that worked as well as that prompt, but only with GPT-5. So GPT-5 could write a good-enough prompt that worked as well as a human-written prompt for GPT-4.1, but only GPT-5 could.
So still clearly worse, but it was getting there; it was getting closer. And one thing I've learned is that folks use AI a lot for writing prompts, which is not a great idea. I think engineers in particular tend to do it, because they use Cursor for a lot of things, so as a result they also use Cursor to write prompts, and they often end up using much worse models than GPT-5 for writing prompts, and those absolutely cannot do it. It's not a good idea. — And when you say writing prompts, do you mean writing from scratch, or editing a prompt that you have? — Both. They can't do either. — Interesting. — Well, yes. — For my little creator prompts, sometimes I have a back-and-forth dialogue with it, make a bunch of changes, and then at the end I'll say, "hey, can you go update the original prompt to include all this feedback?" — Yeah. — And it does do that. I do have to manually review it to make sure it's actually good. — Yes. — But I guess that kind of saves me some time. — Yes. Well, yeah. I do that too. And what I found is that when I then run evals on different components of it, maybe 10% of the improvements worked, and they still had to be tweaked by a human. — And 90% made it actually worse. So you can't just take the AI-generated improvement to the prompt and run with it. There's a lot of massaging; sometimes it can propose something that could actually work, but you still need to do lots of work on it to make it actually perform better. — Interesting. Okay. I guess it is kind of like an intern. You've got to supervise what it does. — Yes, that's right. — Yeah. — It's getting there. Eventually, we will be the interns for it, but we're not there yet.
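Her finding that roughly 10% of AI-proposed prompt edits help and 90% hurt implies a simple guardrail: never accept a model-rewritten prompt without scoring it against the current one on an eval set. A minimal, model-agnostic sketch of that gate follows; the `run` callable and the eval cases are toy stand-ins (in practice `run` would be an LLM call plus a grader), not Kraftful's or Amplitude's pipeline.

```python
def accept_rewrite(baseline_prompt, candidate_prompt, eval_set, run):
    """Keep the candidate prompt only if it beats the baseline on the eval set.

    `run(prompt, case)` executes one eval case and returns True on pass.
    Returns the winning prompt and its pass rate.
    """
    def score(prompt):
        return sum(run(prompt, case) for case in eval_set) / len(eval_set)
    base, cand = score(baseline_prompt), score(candidate_prompt)
    return (candidate_prompt, cand) if cand > base else (baseline_prompt, base)

# Toy grader: a case "passes" if the prompt mentions its keyword.
eval_set = [{"kw": "json"}, {"kw": "concise"}, {"kw": "steps"}]
run = lambda prompt, case: case["kw"] in prompt

kept, acc = accept_rewrite(
    "Reply in json. Be concise.",             # human baseline: passes 2/3
    "Reply in json. Be concise. Use steps.",  # AI rewrite: passes 3/3
    eval_set, run)
print(kept, acc)
```

The point is the asymmetry she describes: with a gate like this, the 90% of rewrites that regress are rejected automatically, and only the rare genuine improvement replaces the human-written prompt.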
— I feel like I'm going to be a much lazier intern than AI, so I don't know how that's going to work out. — Me too. I will be fired immediately. — Yeah. Okay, cool. Well, let's talk about Amplitude now. So you just shipped a big update. One thing I feel about Amplitude, and any data product, is that you can't just have the data; the quantitative stuff alone doesn't give you the full picture. You've got to have the user feedback, the qualitative stuff. It's kind of like what Jeff Bezos said, right? If the data doesn't match the anecdotes, trust the anecdotes. — Yes. — So I'm really glad that you actually shipped the qualitative stuff as part of M2. I think that was a huge gap. Do you want to show the product? — Yeah, absolutely. And I completely agree. Obviously I will agree, because
28:48

Live demo: Combining qualitative feedback with quantitative data

you know, that was my baby. So what I ended up shipping: we've been integrating Craftful into Amplitude, so Amplitude Feedback is essentially Craftful inside Amplitude. What's different, and I'll show you, is how we've integrated it to show both quant and qual in one place, since Craftful was only qual. At a high level, you start by connecting your sources of feedback. You can connect support ticket sources, or public data like App Store reviews or Google Play. One thing customers are reacting to quite a bit is just how easy it is to connect things: if you want to connect app reviews, you just enter the name of your app and it pulls the data in, and it collects that data every day. Then it gives you these prioritized lists, like your top feature requests from across the different sources. In this case I've connected a bunch of public data about Slack: iOS and Android reviews for the Slack app, YouTube reviews of Slack, and I think Twitter mentions of Slack. So you get this list of top feature requests, and the top one, mentioned 242 times across all the sources, is notification preferences and controls. You can click into that and see what users actually said about it, and then look at deep dives that tell you, within those 242 mentions, what the common topics were, and use those as filters. You can do the same thing with top complaints about the product, which of course is again too many notifications, but then pricing and communication and things like that. And you can look at the opposite: what are some things people actually like about the product?
That's usually helpful for strategic planning, to see what we should double down on. Or what brands came up frequently, which is usually helpful to see how my customers talk about my competitors; that's often what comes up there, and maybe some big integrations also show up. But then, to go back to feature requests: the cool thing is that because this now lives in Amplitude, we can take these 242 users and create a cohort, look at how those users are using the product, or look at related session replays of how they're interacting with their notifications, and see both what they did and what they're saying in one place. Or we can create a survey, ask more questions about notifications, and that data feeds back into AI Feedback and gets analyzed alongside the rest. So it makes your feedback richer and richer. — Yeah. — Those are the new parts: what we didn't have in Craftful but can now do in Amplitude, because we have both pieces in one place. And then obviously, sometimes you'll just want to see what people are saying on Twitter, or about your iOS app, so you can filter it in different ways, or look at just the last day. — I think combining that feedback with the metrics is actually pretty amazing. I don't think there's another product that does this, right? Is there? — No, I don't think there is. We're the first. We had to be the first, because that's essentially the biggest reason behind the acquisition: to bring this all together in one place and really paint the whole picture. Which is really cool. — That's awesome.
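The aggregation step in the demo, grouping classified feedback by topic, counting mentions, and collecting the user IDs behind the top request so they can become a cohort, can be sketched as follows. All data and field names here are invented for illustration; in the real product an LLM does the topic classification, while here the items arrive pre-labeled.

```python
from collections import Counter, defaultdict

# Toy feedback items already classified by topic (hand-labeled here;
# an LLM would assign these topics in a real pipeline).
feedback = [
    {"user": "u1", "source": "ios_reviews",     "topic": "notification_controls"},
    {"user": "u2", "source": "android_reviews", "topic": "notification_controls"},
    {"user": "u3", "source": "twitter",         "topic": "pricing"},
    {"user": "u4", "source": "ios_reviews",     "topic": "notification_controls"},
]

# Count mentions per topic and remember which users raised each topic.
counts = Counter(item["topic"] for item in feedback)
users_by_topic = defaultdict(set)
for item in feedback:
    users_by_topic[item["topic"]].add(item["user"])

# The top request and the user cohort behind it.
top_topic, mentions = counts.most_common(1)[0]
cohort = sorted(users_by_topic[top_topic])
print(top_topic, mentions, cohort)
# notification_controls 3 ['u1', 'u2', 'u4']
```

The cohort list is what makes the quant-plus-qual link possible: once you have the concrete user IDs behind a request, you can join them against behavioral data, funnels, or session replays.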
So I can make a cohort from any of these users and then look at the funnel metrics or whatever I want for them. — Yeah, exactly, and study how they're doing. You could set up a funnel for notification management and look at that. And then some other ways you can interact with this: maybe you want to ask some very specific questions. Because it's Slack and we all use it, we know there's a bunch of feedback about notifications across feature requests, complaints, and various topics. But we could also just ask, "What do users say about notifications?" — Mhm. — So it pulls up all of that feedback and gives us a summary of what users are saying about notifications. Then you can take it one step further and say, "Using all this data about notifications, now write a PRD for me." And it writes a PRD that you can go ahead and edit in the product. Oh, there it is. Okay, so you can see critical issues, usability frustrations, and then some positive notes, and then you can act on it and do more. — That's great. — Yeah. So there are a few different ways to interact with this. What I just demoed is also in our MCP, so you can do this in Amplitude, but you can also just do it in Claude or ChatGPT and pull that data into other places, or schedule notifications using agents and things like that. There's more you can do with it beyond what happens automatically. — Okay. So I guess this saves a bunch of time. Number one, it saves the time of manually copying and pasting feedback to get AI to summarize it; that's all done for you already. — Yes. — Yeah.
— Trying to classify it into different categories, that's done for you too. — Yes. — And then even the PRD step; you just have to get the cross-functional alignment done and then it's the perfect product. — Yeah, exactly. We just need to replace all those humans with other AIs and then it's going to be so great. — This cross-functional alignment. Like a likelihood-of-getting-cross-functional-alignment score or something. No, I'm just joking. — That would be great. That would be awesome. Yeah. And the cool thing is that it does gather all this data every day. So you can filter it by just the last day and look at what people said after your last launch, or what they said in the past few weeks when you're doing sprint planning or quarterly planning. So it really pulls the data daily and updates your lists based on what's now showing up in the data. — Yeah. I can see a lot of potential to expand this too. I feel like eventually it could even write replies to the customers or something. — Yes. — Yeah, we had that on our roadmap at Craftful, so it's now come over to Amplitude on our migrated roadmap. — Okay. — We definitely want to be able to close the loop with customers. — And it is so easy, right? Because we know the 242 users that requested this, so it becomes really easy to close the loop. I built a much simpler version of this too, summarizing feedback, and I feel like AI is arguably even better for the qualitative use case than the quant stuff — because the qualitative stuff is a lot of copy, and it's really good at summarizing copy and extracting trends and insights — whereas with numbers, it's not really good at math, right? It's not. — No. Well, it's getting better. It's getting much better at math.
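The recency filtering Yana mentions, slicing the daily-collected feedback down to "since the last launch" or "the last sprint," is in essence a cutoff over dated items. A minimal sketch, with invented dates and text and a fixed `today` so the example is deterministic:

```python
from datetime import date, timedelta

# Toy dated feedback items; all dates and text are invented.
items = [
    {"date": date(2025, 12, 1),  "text": "too many notifications"},
    {"date": date(2025, 12, 13), "text": "love the new huddles"},
    {"date": date(2025, 12, 14), "text": "mute per channel please"},
]

def since(items, days, today=date(2025, 12, 14)):
    """Return the text of items no older than `days` days before `today`."""
    cutoff = today - timedelta(days=days)
    return [i["text"] for i in items if i["date"] >= cutoff]

print(since(items, 1))   # since yesterday: the two most recent items
print(since(items, 30))  # roughly a month: all three items
```

Because the pipeline re-pulls sources daily, the same cutoff rerun each morning gives a rolling view of post-launch reaction without any manual re-export.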
But you're absolutely right, it got better at summarization before math. Having built the first prototype of this in early 2020, and having seen the evolution of LLMs from the early days: it was terrible at text summarization back then. It was really good at text generation, but terrible at summarization. The unlock came probably early 2023, late 2022, I'd say around the davinci model right before GPT-3.5, the original ChatGPT model. That's when summarization use cases started somewhat working, though it was still really hard; you had to do a ton of prompt engineering to get it to do anything. And with math, I feel like we're just at the cusp of it actually getting better. Coding was probably last
37:38

Why AI analytics is lagging (and what needs to change)

year, when all the coding use cases started to get unlocked. We're on this path of model capability going from AI not working at all to AGI, and with every new model that comes out there's always some capability unlock, which is really cool to see. — Yeah, that's actually good, because my last question for you was going to be about why we haven't seen, say, a $29 billion AI analyst company yet, or maybe we have. But maybe the answer is just that if the models catch up, it will unlock a whole bunch more use cases. — Exactly. I think that's exactly it, or half of the answer: the models that will enable real AI analytics have really just started to come out. Those use cases are just starting to get unlocked; this is the time to build that. And we're seeing it more and more with the stuff we're building now at Amplitude: things are just starting to work in ways that I know a year ago certainly couldn't have worked. But that's only part of the problem. The other piece is user adoption, because the persona that tends to use analytics has built up a lot of workflows in a certain way. It's harder to replace and change those workflows than it is to replace a workflow that just deals with document editing or writing copy. Workflows where folks are writing blog posts are such an easy thing to come in and replace, whereas folks have built up so much context around how they do their data analysis.
— I think that's a much harder user adoption problem. Again, we're starting to see early AI adopters within those spaces thinking about how they would use AI for this, but it's just a more difficult puzzle to get adoption there. — I feel like a lot of data scientists get asked by their annoying PMs, hey, what about this data, or can you run this query, and hopefully that kind of stuff can get automated first so they can — Yes. — do more interesting work. — Totally, yes, exactly. But for it to be a really big solution, it needs to be able to do all those things for all those people. So that's partly why that really big disruption hasn't happened; I think that's the reason. — Yeah. And another reason, I think, is that you just can't get the data wrong, right? You've got to get the data correct almost 100% of the time. — Yeah, exactly. The quality bar is super high. It's really important that it's actually correct. You can't say, "Oh, your daily active users were off by just a few digits, but it didn't really matter," right? It's so important. — Yeah. It's like the AI companies claiming they have $100 million ARR when maybe it's actually $10 million ARR. — Yeah, exactly. — They're probably just using AI. It's not their fault. Okay, cool. All right. Well, Yana, you're really busy, but where can people find you and your product? — Yeah, right now we've gone all AI native and we're on Twitter all the time. You can follow Amplitude HQ on Twitter, and I'm there as @yanatweets. I need to update my handle to something like Yana X, but that sounds so bad. — So I wouldn't do that.
And yeah, we're all trying to be on Twitter quite a bit now. Our founder and CEO Spencer Skates is on Twitter too; you can follow him, his handle is just Spencer Skates. — Yeah, I heard he really wants more followers. So maybe everyone watching this should go follow him and mention this episode. — Yeah, then he'll compliment Yana for sure. — Cool. — Love it. Thanks, Skates. It's exactly what I needed. — Okay, cool. All right. Well, thanks so much for your time. This was an awesome conversation. — Really enjoyed it.
