The Business of AI


OpenAI · 13.11.2023 · 49,647 views · 892 likes


Video description
Learn the surprising things that leaders at Shopify, Typeform, and Salesforce focused on to build and launch successful AI products (spoiler: it's not just picking the right LLM).

Speakers:
Aliisa Rosenthal - Head of Sales - OpenAI
Oji Udezue - Chief Product Officer - Typeform
Kathy Baxter - Principal Architect, Responsible AI & Tech - Salesforce
Miqdad Jaffer - Director of Product - Shopify

Contents (12 segments)

Introduction

All right. Hello everyone. My name is Aliisa Rosenthal. I'm the head of sales here at OpenAI, which means I have the privilege of working with our customers and partners every day all day to help them figure out how to integrate AI into their products for their end users and within their own organizations. I have a few esteemed customers joining me up on stage today. First up, we have Kathy Baxter, principal architect of ethical AI practice at Salesforce, where she develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. Salesforce is, of course, a close partner to OpenAI, and we've been collaborating together on a handful of initiatives. Welcome, Kathy. Next up we have Oji Udezue, the chief product officer at Typeform. Oji has a stellar background in product management with chapters leading product at Twitter, Calendly, and Atlassian among others. Typeform launched Formless earlier this year, totally rethinking the generic form experience with AI at its core. Welcome, Oji. Finally, we have Miqdad Jaffer, director of product at Shopify. Miqdad is responsible for the groups integrating AI across the Shopify product and platform, including Sidekick. Welcome, Miqdad. We're so grateful the three of you could join us today. Today's conversation is going to be about the business of AI. That is, all things about integrating AI outside of the actual coding. On the sales side, we get to work with our customers all day, and we hear several different themes that tend to come up over and over again around customer management, experience, pricing, and go-to-market that you really need to consider when building a durable revenue-producing product or business. I'm excited to dig in. Let's start off with a one-minute pitch from each of you. We will start maybe with Miqdad and work our way over to Kathy. First question is, what is the hardest part of building a product with AI?

Hardest part of building a product with AI

I think the hardest part for us has been trying to figure out what that final mile looks like. I think it's very easy to get to a 70% product. I also think that the way that you develop AI products is very different from the traditional software development process. In that process, there is a linear flow; there are expectations around when things will happen and how they will get to a good state. With AI products, it's non-deterministic. I think you start out with: this is the goal that you have for the product. In our case, it was how do we use AI to accelerate entrepreneurship, and how do we then integrate it into the various parts of the product and make sure that there is alignment across the org on these are the right use cases to worry about, this is the right way to approach it, and this is what good will look like. As you work towards that, there's going to be a part in which the product might not meet that final mile. How do you account for that as you build, and how do you make sure that your users still manage and maintain control as you build? The hardest thing. [chuckles] Look, there's a lot of work it takes to build an actual product, a lot of the technical stuff, the stitching, the prompting, and so on and so forth. Maybe the hardest thing for us is the idea that we want to stay true to our mission, our vision for the customer, that we want to make sure that AI isn't just some random layer over what we want to do, but that it's really prescient about the workflows of the customer. For us at Typeform, our goal is to make the web more conversational, more human. We feel like we need to really use AI to enable humans to do better work or to give them superpowers in very human ways. I've used human many times, so I feel bad about that [laughs], but we want it to wrap around our workload, we want it to matter. We want people to come back to it 80% of the time because they love it.
We have all these testimonials as we've done Formless, and also as we brought AI into Typeform, about how people feel like it feels natural to them. That's the biggest accolade because we sweated that a lot. Kathy. I think, in my opinion, one of the most challenging things is developing at the speed of trust. As this technology is advancing so rapidly, a lot of the things that we have identified to bake in trust and ethics and responsible AI from the beginning, we learn over time that maybe they're not as efficient, they're not as effective. There was a recent study that found that with fine-tuning models, sometimes it can unintentionally undo some of those safety or human alignment elements that you are adding in. Constantly doing evaluations, staying abreast of all of the research that's being done, and ensuring that as you are developing these tools, you are always keeping trust at the core of what you are developing. Moving at the speed of trust. I like that. Oji, question for you. When you launched Formless, that was a pretty big move. You launched it as a standalone product versus integrating it into your core function. Can you talk a little bit about how you weighed the trade-offs of that decision? Just going back, so our goal - Has anyone here used Typeform before? Typeform makes beautiful web forms that you can use for zero-party data collection, lead generation, and all of that stuff. Part of the inspiration: by the way, the co-founder who created Typeform is right there. David, how are you doing? The core of that is conversational. There are all kinds of forms on the web. Even the Google thing is a form. The chat thing on ChatGPT is really part of a form, but we wanted to make it feel human. Now, when GPT came out, David has been working with GPT since 2.0, but when GPT-3 came out, something clicked for us. We just realized that we could transform this entire vision ultimately. Now, we had two choices.
We could build it into Typeform, 150,000 customers, a lot of revenue, but it was going to be slow. We're going to be doing retrofits. What we decided to do was instead disrupt ourselves, create a whole brand new product that was optimized for two things, speed and learning. That's what animated us. Look, we're going to have to make decisions later down the line about how those things come together. Actually, we are building AI into the original product, but we are trying to build a race car that allows us to go fast, allows us to learn as fast as possible, allows us to experiment unfettered, allows us to imagine-- I can't tell you how much code we've destroyed because it didn't work on Formless, but that's why we did it that way. Wow. It takes a lot of courage to disrupt your own product. I'm curious Miqdad, did you all weigh any of the same trade-offs as you thought about launching some of your AI initiatives?

Trade-offs of launching AI initiatives

Yes. For those that aren't aware, actually, maybe you are: has anyone bought anything using a purple button on a Shopify store somewhere, show of hands? That little purple button is Shop, and we actually started there. We started with the shop.ai marketplace. This is where the buyer can go on and search for any given product, or do it by an event. We said, "Okay, let's put semantic search in here, and let's try and go from the perspective, as Oji said, of the speed to learn as fast as we possibly could." We put it in front of buyers and tried to figure out what they would ask and how they would ask it. In some cases, it was help me plan a dinner party. Or it's I'm trying to build an outfit, what are the right things to do? Using that to build an initial search. When we started getting a little bit of progress there, we said this is something that we want to be able to put everywhere. We have a tenet within our annual planning, or even longer than that: we have this notion that we will bring technology to our merchants as early as possible in the cycle. It already fit within the ethos of Shopify. Now the next step of it was how do we get this everywhere? What are the right places to put this? Where will this utility be realized? For us to deal with some of the safety concerns, we just had a four-word solution: human in the loop. We just made sure that no matter what we generated, no matter how we produced something, we always put it in front of the user to be able to interact with and respond to. Our principle is that the merchant controls their message to their buyer, and generative AI is a way to augment their ability to get there faster. We started with the areas that we thought would be the biggest need and saw how users would interact. They asked for more of it, so we tried to put it everywhere. Then we've moved into the phase of, how do we rethink the entire company?
How do we think about how a user interacts, from the previous imperative model, where you're clicking and choosing and filling out forms, to a declarative one in which you just simply state what you want? We knew that it wasn't going to be possible given where the technology was then, but we trusted that the technology was going to move fast enough that by the time we got there, we'd either be behind the technology or we'd be right there with it. It was always a matter of keeping up and making sure that our merchants' use cases were being solved as we went. You mentioned using AI everywhere. What is an example of a way that you're using AI that might not be obvious to the outside observer? Sure. We have one feature for using generative AI that's just auto-write. Anywhere there's a field or a form, we allow users to add a couple of keywords and declare the voice and any kind of special instructions they might want, and they can generate any sort of content. That one's probably a little bit more obvious. Another place we've done it is send-time optimization on an email, where you might not know when to send a thing for the highest open rate or highest click-through rate, and so we take care of that for you. Craft a subject line so that it'll get a higher open rate. We have things from a Sidekick perspective. For those that don't know, Sidekick is an AI assistant that's across the admin. What we've used it for is both curating help as well as directing the back office, so you can do things like, say, "Change my theme to make it look more like summer," and having the semantic understanding of what that design would be, as well as the understanding of the tool itself, to be able to generate the code necessary to make that happen. Those ones are a little bit more like using AI as a design aid, but we want to move into a stage where it's a strategic coach. I think that's where things start to differentiate.
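The "human in the loop" pattern Miqdad describes, generate a draft but always put it in front of the merchant before anything ships, can be sketched in a few lines. This is an illustrative sketch, not Shopify's implementation: every name in it (`Draft`, `generate_description`, `publish`) is hypothetical, and the stub generator stands in for a real LLM call.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Draft:
    """AI-generated content awaiting merchant review."""
    text: str
    approved: bool = False
    edited_text: Optional[str] = None

def generate_description(keywords: list[str], voice: str) -> Draft:
    # Stand-in for a real LLM call; a production version would prompt
    # a model with the keywords and the merchant's requested voice.
    return Draft(text=f"({voice}) product featuring {', '.join(keywords)}")

def publish(draft: Draft, review: Callable[[str], Optional[str]]) -> str:
    # The merchant always sees the draft before anything goes live.
    # `review` returns the (possibly edited) text to approve, or None to reject.
    result = review(draft.text)
    if result is None:
        raise ValueError("merchant rejected the draft; nothing is published")
    draft.approved = True
    draft.edited_text = result
    return result

draft = generate_description(["linen", "summer", "lightweight"], voice="playful")
published = publish(draft, review=lambda text: text.replace("product", "shirt"))
```

The design choice worth noting is that `publish` cannot run without the reviewer's explicit return value, so the merchant's edit or rejection is structurally unavoidable rather than a UI afterthought.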

Balancing the need for innovation with ensuring that AI is safe

Great. Kathy, question for you. Salesforce has really been at the forefront of building AI products for business workers, for the information worker. I'm curious how you balance the need for innovation with ensuring that AI is safe. Absolutely. We've been working on AI for several years but really started focusing on creating AI ethically and responsibly since 2016. It was just a natural flow from our core values of trust, customer success, innovation, equality, and sustainability. It wasn't a large leap to go from that to more specific trusted AI principles, which we published in 2019. At the beginning of this year, as we started putting generative AI into our products, we recognized that we needed more specific guidelines to help all of our teams think about how do they put this technology into B2B products? Then we came up with a set of five guidelines for responsible, generative AI. Accuracy was the first one that we really had to prioritize. Everybody as a consumer wants to ensure that their search results or maps or anything else that they're getting, that those answers are correct. In a business setting, if you get the content wrong, it can have material impacts on the business, it can have brand impact, legal impact, safety impact. Really focusing on that and then ensuring that it's safe, honest, empowering our users, and then also, of course, sustainable. All of this has driven every decision that we are making from within the product, how we build our own models, how we leverage OpenAI in our products, as well as the UI, the design, giving our customers the tools to empower them to know, is this content that you're getting, is it accurate? Is it trustworthy? -Great. -Can I just jump on the [? ]? Please. You sparked something in my brain, which is, I remember I used to work at Twitter before Typeform. 
When we were building new things, if you make $100 million or $1 billion from something, you're just so careful about it, you craft all these rules around it. This is not advocacy, by the way, but I think that we're in an inflection point with AI where two things matter, right? Your ability to be creative and your ability to understand your customer. The combination of those two things is what unlocks value. Because we always underestimate the absorption rates of our customers. One is creativity, one helps you with the absorption rate. Ways to learn quickly, to apply fewer rules so that you can learn, and then integrate that into the main body of development, is so critical. It's so critical. Optimize for learning, optimize for fewer rules, then don't be unethical. I'm not saying that. [laughs] I hope not. It's really important to do that, and that animates our approach at Typeform. Yes, understanding your user is so core to that. We talk a lot about mindful friction. We want to slow the users down enough that there isn't just this mindless trust like, "Oh, this must be right," and just submit, but to actually check it. We don't want to put in so much friction that it's viewed as a speed bump. It's an annoyance. It becomes like banner blindness. It's not useful at that point. But creating the mindful friction, creating the signals for a customer support rep in a call center, where some actually have a countdown timer to get those customers off the call as quickly as possible: that is a very different experience from a sales rep generating an email for a cold call, or a marketing campaign manager generating a campaign, or developers writing code. Understanding your customers deeply, to understand what they need and how you can best empower them from the beginning, is so critical to getting it right. Not doing that, just launching, moving fast, and breaking things: that doesn't work, especially in a B2B scenario. Yes, I like the mindful friction.
Miqdad, anything to add on the enterprise side of the equation?

Putting AI in front of people

Yes. I'm going to go on the opposite side. I tend to believe that putting this product out in front of people and seeing how they interact with it and how they break it is actually how we can learn to make it more ethical and apply that appropriate friction. The customer service example is great. We actually produced a chatbot for our help center. It was previously just search for articles, find the article, see what makes sense. We introduced a chatbot on top of it, took all of our data, made it into a set of QA pairs, and created embeddings from that. Then we said, "Let's see how people interact with this." The first thing that happens is someone comes in and, whatever their normal path of search was going to be, they'll just start typing, and then we'll form it into a conversation and return the reference results as they go. Then they will ask for clarifications, they'll hit a wall somewhere, and then we will add a human into the loop and let a customer support person take over. The goal is to solve the problem. It's just that sometimes the way that the individual is searching, or the way that they are identifying what they're looking for, isn't great. That's where an LLM is great at interpreting the semantic meaning of what they want and then getting to something else. Is it the best experience? Is it always right? Probably not. I think the fun part about this is, how many people's problems does this now solve? I think that number continues to improve. When we get into metrics later, I'm sure we will. We'll talk about how we show that this is effective. For us, it's pairing an LLM with really good UX and really good engineering. It's got to be both. The engineering side of it is very critical, and I think probably the majority of the work. The beauty of the way that OpenAI keeps doing things (and my whole world map just changed) is that they give you the framework around the LLM, and that box keeps getting bigger.
The part that doesn't get bigger is your interactions from the LLM to your application and to your user. The UX is critical to forming the right basis. I do not want an entrepreneur to have to worry about AI. -That's not the goal. -That's right. The goal is for the entrepreneur to get better at what they are doing, and that's it. They don't need to know what's going on behind the scenes, they don't need to know AI. That's my problem, not the merchant's problem. I'm going to say amen to that. That's the way that we focus, and we continue to push the boundaries of what's right on the UX side of it and the engineering side of it. On the LLM side of it, we just plan for every error possible. Where we don't have an error, we put a human there to deal with it. That's what's really important, that human backstop. It's like, if it fails, then the user is left struggling? No. That's where the human steps in, and that's really critical. You're going to do experimentation; always having that safety net to catch your users is so important. 100%. It keeps us honest as we go through, and a merchant can always turn it off if they want to. For us, it's important because we don't know some of the use cases that will get solved. I'll give you a very trivial anecdote. We had a Polish entrepreneur reach out to us and they said, "We're really excited about your product descriptions product." All it does is generate a product description, 50 to 100 words, based on a couple of keywords. Initially, it launched only in English, and then we expanded out to other languages. They said a thing I wasn't expecting. They said, we were scared to sell to English markets because we worried that the way that we wrote English wasn't good, but you solved that for us. I was like, "If we didn't try, then we don't learn that one lesson." That anecdote is out there, and there are hundreds and thousands of others like it. If it helps 1 more entrepreneur, 10 more entrepreneurs, it's a worthwhile effort. I love that example.
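The help-center flow Miqdad outlines, articles flattened into QA pairs, embedded, matched against the user's query, with a human taking over when confidence is low, can be sketched as follows. This is a toy illustration: the bag-of-words "embedding" stands in for a real embeddings API, and the articles, answers, and threshold are invented.

```python
import math
from collections import Counter

# Toy stand-in for a real embedding model: bag-of-words vectors.
# In production these would come from an embeddings API.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Help-center articles flattened into question/answer pairs (made up here).
QA_PAIRS = [
    ("how do I refund an order", "Open the order and choose Refund."),
    ("how do I change my store theme", "Go to Online Store > Themes."),
]
INDEX = [(embed(q), a) for q, a in QA_PAIRS]

def answer(query: str, threshold: float = 0.3):
    """Return the best-matching answer, or None to hand off to a human."""
    score, best = max((cosine(embed(query), vec), ans) for vec, ans in INDEX)
    return best if score >= threshold else None  # None => support person takes over
```

In a production system, the `None` branch is exactly where the customer support person steps into the conversation, the human backstop the panel keeps returning to.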

Pricing

I want to switch to a hot topic that we get a lot on the sales side, which is pricing. As we know, GPT-4 is not cheap, though it just got a lot cheaper. Hooray. We get a lot of our builders wondering, "How do I think about the pricing? Should I bill it back to my users? Is it an add-on? Should I eat the cost? Do I build a standalone product?" I'm really curious to hear, Oji and Miqdad, how you weighed the trade-offs here, what decision you ultimately made, and how to price this in your products. -Should I start? -Go for it. GPT-4, we were excited like the rest of the world when it launched, and it was better on all the dimensions that are public, but it was expensive. [laughs] I think fundamentally, the way we think about AI is time to value. Because it's time to value, shrinking our customer's workflow that we already know, by 50%, 100%, whatever that is, is what we needed it to be for our customers. How we thought about pricing, first of all, was to say, "Okay, this is better, and better is good for our customers, but it's really expensive." We x-rayed the functionality we were trying to deliver and tried to find out what was really important to do with GPT-4 and what was not. Then we spent a lot of time manhandling GPT-3.5 to be really, really good at the things that we didn't need GPT-4 for. That allowed us to do more experimentation with pricing. First of all, AI is baked into the product. Even at a base level, we are B2B SaaS, PLG. People just walk in the door and sign up, but we still want to make sure that their first taste of that power, that acceleration, is GPT-powered. But then if you really want to do things that require GPT-4 exclusively, then our pricing gets a little more dear for the customer. The other thing I'm going to just leave you with is, sometimes we think of pricing as sacrosanct, as something where, if you change it a little bit, people will be upset. I think that's a dirty lie.
We do a lot of price experimentation in Formless, but also in Typeform, because the value proposition to the customer is always changing. Some use cases are more valued, some are not. The question is, "How do we adjust that? How do we impedance-match that to the customer's perception?" Experimentation in this age of AI, especially when models-- When GPT-5 comes out, it'll be really expensive. This is an evergreen topic. How do you price, how do you experiment with different ways to price? Even now we're doing some experimentation on pricing. That's the way we think about it: just reach deeper than the cost of the LLM or whatever version you're using. That's us. I think you guys are a little bit further along on the pricing side of things. What I'll do is walk you through just how we've been thinking about it. Shopify has three principles by which it operates. Principle one is "do whatever it takes to make the merchant successful." Principle two is "make money doing it so you can do more of it." Principle three is "never swap the order of one and two." We'll always start with the notion of what is the problem that we're solving and what is the best way to solve it. With this, what you do is really line up the incentives. The products that we release, the features that we actually green-light, are the ones that are going to move the merchant in the right direction towards success. When they are successful, we are successful. However, we need to make sure that these aren't unsustainable costs for us. In some cases, GPT-4 can get expensive, and GPT-5, I'm told, is going to be cheaper. Is that right? I'm just kidding. We can hope and dream. I think the idea is that we did the same thing that Oji's talking about. We looked at use cases for GPT-4 versus use cases for GPT-3.5. When GPT-3.5 Turbo came out and it was significantly cheaper, we were like, "Okay, how much can we lean into this? Because this will be a really good place for us."
Ultimately, what we're trying to do right now is give everything to everybody. We're starting with everyone gets it, and we'll start to see what some of the patterns of usage are. Where there are potentially problematic instances, we'll look at adding UX for it, to add friction where necessary or where appropriate. The other part of it is that we will likely introduce something, but what that is right now is not clear. I think we're still sorting through what the specifics are. As the usage patterns become a little bit more obvious, we'll work with that. Great. All right, another question. It seems like everyone is building chatbots. That has definitely been the product du jour. I'm curious, what are some novel ways your teams are integrating AI? Maybe we start with Kathy and work our way across.
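The cost strategy both panelists describe, x-ray the features, send only the genuinely hard tasks to GPT-4 and default everything else to GPT-3.5, boils down to a routing table. A minimal sketch; the task names, model choices, and per-token prices below are illustrative placeholders, not either company's real configuration or OpenAI's current rates.

```python
# Hypothetical routing table mapping product tasks to models.
ROUTES = {
    "summarize_response": "gpt-3.5-turbo",  # simple, high-volume: cheap model
    "generate_form": "gpt-4",               # needs stronger reasoning
    "draft_subject_line": "gpt-3.5-turbo",
}

PRICE_PER_1K_TOKENS = {"gpt-3.5-turbo": 0.002, "gpt-4": 0.06}  # illustrative rates

def pick_model(task: str) -> str:
    # Default to the cheap model; only explicitly listed tasks get GPT-4.
    return ROUTES.get(task, "gpt-3.5-turbo")

def estimated_cost(task: str, tokens: int) -> float:
    # Rough per-request cost estimate for capacity/pricing planning.
    return PRICE_PER_1K_TOKENS[pick_model(task)] * tokens / 1000
```

Keeping the routing in one table makes the "x-ray" auditable: you can see at a glance which features carry GPT-4 costs, and re-route them as cheaper models catch up.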

Integrating AI

We have such a broad range of industries that we support: not only our clouds (sales, service, marketing) and, of course, our developer tools, but also our individual industries. We are developing bespoke solutions for each one of those industries. At our Dreamforce event, we announced Copilot. Creating assistants that are bespoke to each one of those kinds of applications. We've also talked about, on our AI research science team, developing large action models. Being able to create a whole series of models with an orchestrator that just goes off and does individual, smaller tasks for you, but they're coordinated together, and being able to identify the specific actions where we want that human in the loop to make sure that particular piece is reviewed and approved, but all of it can work together seamlessly. That's one of the areas we're working on right now: creating those bespoke kinds of solutions.

AI user experience

I can't wait to get my sales team on your Salesforce Copilot. That sounds great. Yes, I have so many thoughts on this. I would just say that AI user experience is a thing. Maybe I'll say something even spicier here. I don't know that I believe in a text-chat interface for all of AI, basically. Obviously, I use ChatGPT every day. But if this is truly to be pervasive, we have to get really innovative about how people interact with AI. It feels like everyone is saying it's chat. I don't believe it is. We actually know a lot about chat. If you're old enough, you understand command-line interfaces. There's a reason Steve Jobs and Bill Gates created mice and interactions and so on: it's just fewer calories, less cognitive load. Customers don't have to be as creative. I think AI/UX is important and is the thing that's going to differentiate from chatbots. I'm very impressed with the Assistants API, the demo app. It feels new and fresh. People will have to keep innovating in that space. That's what I really believe. Because you have to lower the bar of how people interact. People don't want to think. For them, technology is now utility level, and so it just has to work. The other thing I would say, I've said it before, but I'll repeat it because I think it's important: study your customers' workflows. If you're a product person or a developer, then you've heard about specs and use cases. No, forget those. Workflows. What do they do? How do they actually accomplish their tasks? Wrap the AI user experience around that, okay? If you can do that, you will unlock more value. There's a lot of great ideas in AI. We've done them, we've released them, we've talked about them here. Our industry forgets about absorption rate. It depends on how you wrap it around. I'll stop there without being specific, but there's still more to be done on how AI interacts with humans.
Typeform is about making technology for humans, not humans for technology. We believe in innovating in that space. I like that. I might steal that from you one day. I'll give the continuum of why I think chatbots are the thing right now. I would say that most people have been building applications from the traditional sense of, "I want to solve a problem, here's how it works, here's what makes it work better, here's how I add features to make it more operational." I think the switch now is, "Oh, AI is easier to get to." It's much easier to throw a large language model in there and tell it to do a classification problem or prediction problem and see how it does. I think we're on a continuum right now. We're starting at understanding the user's intent; then it's predicting the user's action. I think that's the continuum we're on right now. Chatbots come in to explicitly get the user's intent. "What would you like to do? Tell me what the weather in San Francisco is and whether I should bring a jacket." Cool. I'm starting to understand, but now I have the context of when that intent was made. It's like, "Oh, they were here. This is what the situation was, and this is what they asked." Now I have the potential to predict what the intent was and why it was there. Then you go into the next phase of it, of potentially predicting the intent and going right into the application layer of bringing it forward: here's an action you could take based on where you are and what you are currently doing. Then it's a matter of automating that same intent. It's like, "Okay, well, we know what you're doing. We know what the right way to do it is. Here's a potential path that will get you to the final stage." I think that's the continuum we're on right now. A lot of people are in the how-do-I-make-chat-do-a-thing stage.
I think it's useful, and I think it's really helpful for people to feel they have much more control over what the set of actions is, and potentially have a better way to describe what they want when the UX falls apart. The thing I want to caution people on as you build: do not let AI be a substitute for bad UX. Make the UX great. Make the AI orchestrate the good UX. If you have a bad UX problem, solve the bad UX problem. Don't just put AI right on top, band-aid it on top of it, to solve for that problem. I think that's where people get a little bit tripped up, and it's not one or the other. It's both at all times as you move forward. Can I compliment that demo app? It says create, and then, what was that toggle? It was create and then configure. You start with a little bit of chat, and then the configuration is fine-tuning it, and then it had all the UI on the side. I thought that was really good, a good sequence. It walked the walk of customers. I'm sure if I thought about it, maybe there are things I would do. I thought that was very smart. Thanks. You guys have some good UX people out here. Great. Just to hit one of the potential novel cases: I think there's a lot of here's the customer, here's the solve, here's the things that we can do. It's very user-facing. There are a lot of internal use cases that are interesting and very compelling. One example is, we have a massive migration to do. In order to do that, we have to take one form of code and switch it into a kind of object-oriented, extensible platform. We have hundreds of thousands of variations of how people have done things with code, and we have to figure out what the right extensions to create are. We just wrote a classifier that takes all of the code and identifies all the opportunities for extensibility that exist today versus ones that we need to build. That gives us a hit list of here's what we have to build.
We put it as part of our internal process too, just to accelerate. You have a classifier at your fingertips, you have a generator at your fingertips, and a large language model can be used in many, many places. For us, it's about accelerating even our own business flows. It's not just about the external user; it's also about the internal user, making things faster.
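The internal migration classifier Miqdad mentions could be approximated like this. In the real system an LLM does the classification over code variations; here a regex stands in for it, and the hook names and code snippets are invented for illustration.

```python
import re

# Toy stand-in for the LLM classifier described above: scan code
# variations and flag which ones an existing extension point covers.
# The extension names and snippets below are hypothetical.
EXISTING_EXTENSIONS = {"discount_hook", "shipping_hook"}

def classify(snippet: str) -> dict:
    # Find every *_hook(...) call site in the snippet.
    hooks = set(re.findall(r"(\w+_hook)\(", snippet))
    return {
        "covered": hooks & EXISTING_EXTENSIONS,   # already supported
        "to_build": hooks - EXISTING_EXTENSIONS,  # needs a new extension point
    }

codebase = [
    "total = discount_hook(cart)",
    "rate = tax_hook(region)",
]
needed = set()
for snippet in codebase:
    needed |= classify(snippet)["to_build"]
# `needed` is the hit list of extension points still to build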

Using AI internally

Yes. I'm curious, are either of you using AI internally right now to help with your own jobs that you want to share? Yes. I'm a product person. I'm an engineer even further back, but usually, I'm in the product and design realm. One of the things that I use a lot is the copilot tools, but also the native code interpreter inside ChatGPT because it just allows me to very quickly make myself more productive by writing a small tool. I wrote a Chrome extension the other day that I've been plotting for like a year, but it took me 30 minutes to do it. ChatGPT feels like a Swiss Army knife right now in many ways. I use it in creative ways.

Rapid Fire Questions

I do a lot of writing and I give a lot of talks. Sometimes I hit writer's block. I remember there was a talk that I had agreed to do like three months in advance, and I kept kicking it down the road, down the road, because it was so far away, and then suddenly it was the next week. Just that writer's block, that panic, hit me in that moment. Being able to pull up ChatGPT and just start playing with it, getting inspiration and talking about some of these topics that I've talked about for a while but in a different way, it was such a relief just having that inspiration to get me started. It was quite a lifesaver in that moment. ChatGPT might have helped write some of these questions today. I want to do a little bit of rapid-fire questions: a couple of questions, just answer in a sentence or two. We'll start with Kathy and work our way across. What is one AI myth you'd like to debunk?

Is AI neutral?

I think there are still some individuals who believe that AI is neutral, that it doesn't come with bias or opinions, but it very much does. We have to actively work to ensure that it is safe and representative for everyone.

I think people believe that all these announcements, all these advancements in AI, will automatically guarantee products that make a lot of money. The advancements are amazing and the world will change, but there's a lot of work to be done at the product and design layer to unlock actually getting this into the hands of the people whose world you want to change. It's not just prompting; there's a lot of design in it. That's a narrative I want to change.

I'll go the opposite way from my usual self. I don't think AI is a silver bullet, and I don't think the level of confidence it projects is warranted. That's something we still have to manage.

Next question: what is your go-to metric to measure the success of your AI products or initiatives?

Time to Value

I absolutely appreciate user feedback. For every piece of AI-generated content, we have that very familiar thumbs-up, thumbs-down. We also have a drop-down list where, if the user selects thumbs-down, we ask what the reason was: inaccurate or wrong, voice and tone, toxic or unsafe, or other. Being able to get that feedback from users and know where the miss was, to help us continually improve our products, is incredibly powerful.

There are two that I think about, but I already know that Miqdad-- You'll steal my thunder. No, I won't, because one leads to the other, so you can talk about the second one. I won't take your thunder. The first one is time to value. For things that are conventional, using the technology of today, pre-AI or pre-generative AI, how can you compress it? How can you make the time to value even shorter? If it took 30 minutes to activate, how can you make it five minutes? With Typeform, for example, it can take 20 minutes to make a really good form. How can I take that down to a minute and have it still be as good? That's superpowers, "cold dead hands" territory, "holy shit" territory. Time to value is a huge one, and then you'll talk about what it leads up to, I guess.

I think whenever you build products, it always breaks down to two core needs: we're saving money or we're saving time. You hit saving time. For us, one of the big things is sustained adoption. People don't necessarily have to pay for a feature to let you know it's valuable; people pay with their time. If people are using it consistently and getting value from it, they will show you that through sustained usage. Regardless of how good you think your product is, if your users don't use it, then it does not matter. That's why sustained usage is a pretty critical metric for us.
I think it's very easy to say that these things are going to be hot and popular and people will try them, but if your curve spikes and then goes to zero, what do you really have? That's right.

Great. I want to end with final thoughts or takeaways from all of you on the future of AI product development. Why don't we start with you, Kathy, and work our way across?

This technology is an amazing opportunity for businesses to become even more efficient and productive in what they are able to do, and then pass that value on to customers. Ensuring that all of us understand what it means to do that responsibly is critical. You can't outsource your ethical responsibility to a single team or a single individual on your team. Every individual who is building the product, who is responsible for the marketing of it, the sales of it, everyone needs to understand what it means to create, implement, and use this technology responsibly.

On one hand you can be confident; on the other, we don't even know what's going to happen. I do think that AI will infuse every single thing, every single tool we use to build products today, maybe in non-obvious ways. That's number one. Two, it feels like a reset for the toolset of creativity. Every dev tool, every coordination tool, every communication tool, the things that let people work in teams, I believe will completely change in the next five years, or even sooner. I can't predict exactly how they will change, but I am confident that your collaboration and innovation workflows will change completely. Before I hand it over to Miqdad, I'd say: expect change, anticipate it, maybe lead it if you can. Think about ways these things can change your workflow. Maybe do a startup to let teams collaborate better in this environment.
We have Typeform Labs, and we're constantly imagining that, because we have the luxury of doing some of that; that's how we created Formless. Those are my thoughts.

I swear we didn't know each other before this. I think that past research is an indicator of future acceleration. If you look at past research on machine learning, you can see the start of a spike around 2020, and it climbed in an exponential curve toward 2023, even more so now. And what do you know, ChatGPT shows up on the scene at the end of 2022, and we get acceleration after acceleration. If you look at the amount of research being done right now in machine learning, there are feeds tracking the daily number of papers being introduced. It's no longer a matter of a paper a week; it's seven a day, and it's difficult to keep up.

What we need to understand is what that means for us: all of the problems of today will be gone very quickly. Whatever you think is a problem right now, "Oh, this is very expensive," then someone like OpenAI comes along and reduces the cost by 10x overnight. Then, "Oh, the latency on this is not tenable for my users," and all of a sudden GPT-3.5 Turbo comes out and everything is quick. "The context window is too small," and a 128,000-token context comes out. Every problem you see today is going to be solved, whether next week, I don't know when your next conference is, or thereafter. [unintelligible] a little break. [laughs]

What we need to understand is that if you miss the boat right now and do not start building and experimenting, there is an exponential curve, and it will take you so much more effort to catch up later on. Build now, experiment now, get your users in front of it, and you will learn much faster than if you wait for everything to be perfect. It'll get there, I assure you, probably before you will. Great.
Well, on that note, I hope you all enjoyed the session and feel inspired to go build AI into your products and organizations, and to build enduring, high-revenue, wildly successful businesses. Thank you.
