# AMA: Scaling AI Applications into the Enterprise

## Metadata

- **Channel:** OpenAI
- **YouTube:** https://www.youtube.com/watch?v=WrANK9oFfHw
- **Date:** 08.10.2025
- **Duration:** 27:09
- **Views:** 4,868
- **Source:** https://ekstraktznaniy.ru/video/11218

## Description

Join a live AMA with Decagon & Clay founders and an Andreessen Horowitz investor on cracking enterprise AI adoption—why many enterprise AI pilots fail, what founders get wrong, and how they scaled to become unicorns.

## Transcript

### Segment 1 (00:00 - 05:00) [0:00]

Hi, my name is Kimberly Tan. I'm an investing partner at Andreessen Horowitz, and I'm very excited to be moderating this session today on AI applications in the enterprise and how you actually make them work. I'm joined today by two of the leading enterprise AI companies, Decagon and Clay. I'm sure a lot of you in the audience have heard about these two companies already, but in case you haven't: Decagon is an AI agent company that builds AI agents for customer support and has really rethought how you do support from the ground up. A lot of people have historically viewed support as a cost center, and Decagon decided that with AI you can actually deliver a concierge customer experience for all your customers. Today they serve enterprises like Hertz, Chime, and Duolingo, just to name a few. Clay, for those of you who don't know, is an AI go-to-market company that helps people turn their growth ideas into reality by turning data enrichment and intent signals into automated go-to-market actions. Their initial product was actually founded pre-AI, but they're one of the companies that has leveraged AI the best in the new wave, and they serve companies like HubSpot, Canva, and OpenAI itself. I've had the privilege of knowing both founders for years. Andreessen Horowitz led the seed round of Decagon a little over two years ago, and I've known Varun, the co-founder of Clay, for over four years, so I'm very happy to call them both friends and very excited to welcome them on stage today. So, welcome Jesse and Varun, the co-founders of Decagon and Clay. — Oh, we can go. — Oh, wow. Okay. — We've been friends for four years. — Great. Well, maybe just to kick it off, I'd love to hear from both of you, maybe Varun first, then Jesse, about the initial idea behind your companies and how you thought about using AI to build them.
— Well, the original intent for Clay was actually: how can you give the power of programming to an order of magnitude more people? Initially there was a much more horizontal vision, and the original use case of Clay that gathered commercial traction was data enrichment for cold email marketing agencies. If anyone here is starting a company, I wouldn't recommend pitching Kimberly on that; that probably wouldn't get funded. And then over time we were able to evolve that into the go-to-market platform we have right now. Ex post facto it's very easy to see the things that changed; in the moment we were just taking one step at a time. But I think AI has actually changed a couple of things in our market. One, it's enabled usage-based pricing, which means that in go-to-market you don't have to charge by seat, so you can cater to a more technical, elevated audience like RevOps or growth, who can build systems around go-to-market. And then I think AI has also changed it so that sales is no longer one-to-one; it's now one-to-many. So you can now build systems for growth using Clay. I think AI has made these changes that have helped Clay really grow. — Yeah. Our founding story was relatively straightforward. I started a company before this that was a consumer company, also an a16z company, and we grew it fairly quickly, but we didn't have any revenue at the time. It was just pure users, and we had a lot of users, and of course when you have a lot of users, they write in: there are issues with this, or they have questions, and so on.
So I had some empathy and some experience with the customer support problem, but I would say that alone didn't really inform this company. When we started this company, we really just did customer discovery, so we talked to a bunch of customers; we didn't come in with any preconceived notion of what we wanted to do. When we talked to a lot of customers (these were generally larger companies, mostly tech companies to start out), we just explored a bunch of use cases with them. Support kept coming up, because we were trying to think about how GenAI agents could be helpful and how you would measure the ROI, and it turns out that the nature of customer service as a problem is very well suited for GenAI, and you can also measure the impact it's having very easily. So we found a lot of pull there, and that's when Ashwin and I decided to commit to this idea. And luckily this time, as we started building, everything has gone fairly smoothly, and we've been able to get more and more customers. And then we partnered with Kimberly, and then, yeah, things went downhill from there.

### Segment 2 (05:00 - 10:00) [5:00]

Awesome. — Well, backstage they were joking, because Kimberly got married and Jesse's co-founder was the officiant, so he was knocked out for a full week. — I did get him sick. I feel very bad about that. — So my co-founder was gone for a week because he had to go to Kimberly's wedding, and then afterwards he was deathly ill for another week. — Yeah. — This is a value-add investor right here. — On that note, I'm going to start with a couple of questions, just to seed the ground a little bit, but obviously this is an AMA, so please use the open-for-questions thread in this session to make this a true AMA. — By the way, the OpenAI people said no one is asking questions in the Discord, so they have canned questions for when people don't ask. So please ask questions in the Discord, so we can have a lively, good conversation. — Awesome. So maybe just to get started: I know one of the big challenges with building AI products today is the pace at which the market moves, with new models coming out, et cetera. So how do you actually assess the capabilities of new models in your companies, and know what you want to leverage for your own products, while also keeping your infrastructure flexible enough to adapt to new models and capabilities? Maybe Jesse, do you want to go first this time? — Yeah. So you can think of our agent's goal as doing everything a human support agent would do, and ideally more: it can be proactive, it can talk to more people, and so on. But there are a lot of different surface areas, points where you call models, and what you want to do is come up with a set of evaluations for every single one of those use cases. An example might be: as part of the agent, there's one model call that detects if the conversation is going off the rails, or going off topic, and so on.
And so that could be a small model call. If a new model comes out, then we want to be able to very quickly run it through our tests and see whether it's performing better or worse than the current benchmark. And the same thing for everything, really. The one nuance for us is that the evals can change from customer to customer, so as part of our product we actually allow our customers to define their own evals. They have their own test sets, which are almost like unit tests, to make sure their common use cases are still passing correctly. So generally when a new model comes out, our team goes in and runs a bunch of these evals and draws some conclusion about whether it's working well or not. If it is, then we'll roll it out as an A/B test. So our customers have the ability to say, oh, okay, I want to try GPT-5, for example; all right, I'm going to roll that out to 10% of my audience first, and then, okay, that's looking good, let's roll it out to 50%, and so on. — Yeah, that makes sense. For us, on the infrastructure piece and how to keep it flexible: from the very beginning my co-founder Kareem built it so that the integrations we create and the models we add are almost separate from most of the codebase and the core product team. That allows us to be super fast in how we iterate and how we add new integrations, because those teams don't have to consult with the core product or the codebase. That was a decision we made seven or eight years ago when we built the product, and it allows us to move this quickly in terms of staying up to date with all these models. As for how we think about it: first of all, I'm not doing too much of that myself.
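The process Jesse describes, running per-use-case evals against a candidate model and then gating traffic through a percentage-based A/B rollout, could be sketched roughly like this. This is a minimal illustration, not Decagon's actual harness; every function name and test case here is invented for the example.

```python
import zlib


def run_evals(model_fn, test_cases):
    """Score a candidate model against a customer-defined test set.

    Each case is (input, expected output): the per-customer "unit tests"
    described above. Returns the pass rate in [0, 1].
    """
    passed = sum(1 for prompt, expected in test_cases
                 if model_fn(prompt) == expected)
    return passed / len(test_cases)


def use_new_model(conversation_id, rollout_percent):
    """Deterministically bucket a conversation into the new-model arm
    of an A/B test (e.g. 10% of traffic first, then 50%, then 100%)."""
    bucket = zlib.crc32(conversation_id.encode()) % 100  # stable 0..99
    return bucket < rollout_percent


# Toy stand-in for one small model call, e.g. an intent router:
def candidate_model(prompt):
    return "refund_flow" if "refund" in prompt else "general_flow"


cases = [("I want a refund", "refund_flow"),
         ("Where is my order?", "general_flow")]
pass_rate = run_evals(candidate_model, cases)  # 1.0 for this toy set
```

A real harness would call the model provider's API and keep a separate eval suite per customer; the CRC-based bucketing (rather than Python's salted `hash()`) keeps a given conversation in the same arm for the whole rollout.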
Actually, Mark and Jeff over there, Mark's the one with the mane of hair, he'll let it down for you if you ask nicely, they lead a bunch of our AI teams. Basically what they and a bunch of other engineers do is, look, we're all watching the keynotes here; we're reading the same news that all of you are. But I think we're also looking at what our users do. So for example, we launched Claygent, our web research agent, two or three years ago, and now it has billions of runs. At that moment the ReAct paper had come out, so maybe the technology was possible, but they were also looking at what our users were doing, right? Our users were running these agent loops: they were doing ChatGPT and GPT-3 calls, they were doing Google searches, they were scraping websites. And it was like, okay, well, maybe we can sequence that together and turn it into a web agent. So that's kind of how we iterate. And today, well, a couple of weeks ago, we launched Sculptor, which is our conversational AI co-pilot, and we're hoping that the reasoning improvements in the models continue to make it better. We made that product bet the same way we made Claygent. — I know one of the main issues when you build AI for the enterprise is that enterprises need guardrails, right? So how do you think CIOs, CTOs, any exec should balance the desire to experiment with AI, and the importance of doing so, with actually having proper guardrails in place? — I can go. So it is a very non-trivial problem, because when you're

### Segment 3 (10:00 - 15:00) [10:00]

selling to the enterprise, different stakeholders at the company care about different things. They might be accountable for different things, and one of those people is probably accountable for the AI not messing up. In our use case we are customer-facing, so that is quite important: you never want to say something you're not supposed to, and so on. So there are a couple of different points I would make on this. The first is that as you're building, it is very important to make sure that you can enforce guardrails, set those guardrails, and customize them. One of our big differentiators, and sort of the positioning we've had in the market, is that we make it really easy for non-technical people to self-serve their customization of the agent, build new things, iterate, and so on. Part of doing that is being able to enforce guardrails and let customers add their own. So we created a format that we call AOPs, agent operating procedures. They're kind of like SOPs, but for the AI, and that is helpful because now we don't have to be responsible for figuring out what all the guardrails are and making sure we do all the discovery; customers get to put in some of the guardrails themselves. The other point I would make is that there's a bit of a transition happening. When AI agents were first released in the enterprise, there was a bit of the Waymo syndrome: if Waymo makes just one mistake, it becomes newsworthy and that's really bad, but in reality, of course, it's way safer than human drivers. So I think with AI there's a shift now where people, instead of looking for one-off errors, are thinking more about error rates, and the error rates you'll see should be dramatically less than the human error rates. That's also one way to quantify risk for these companies.
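The shift from one-off errors to error rates can be made concrete with a simple two-proportion comparison. This is a generic statistical sketch, not anything Decagon ships; the sample counts below are invented.

```python
import math


def error_rate_z(agent_errors, agent_total, human_errors, human_total):
    """Compare an agent's error rate to a human baseline with a
    two-proportion z-test. A z-score well below zero means the agent's
    rate is lower than sampling noise alone would explain."""
    p_agent = agent_errors / agent_total
    p_human = human_errors / human_total
    pooled = (agent_errors + human_errors) / (agent_total + human_total)
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / agent_total + 1 / human_total))
    return p_agent, p_human, (p_agent - p_human) / se


# Invented numbers: 20 agent errors in 10,000 tickets vs. a human
# baseline of 300 errors in 10,000 tickets.
p_agent, p_human, z = error_rate_z(20, 10_000, 300, 10_000)
```

Framed this way, a single newsworthy mistake matters less than whether `p_agent` sits reliably below `p_human` at a meaningful sample size.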
— Yeah, I think we do something similar with operating procedures and giving the responsibility to users in our product as well. Look, a lot of companies have shared publicly how they try to keep their core features away from AI coding, but obviously things are changing so much in this space that you want to be able to iterate quickly and adapt quickly internally. Actually, some of our best features have come from vibe coding, have come from hackathons, so you want to create space for that. But every company is going to have a different balance, and what's appropriate for Clay and Decagon may not be appropriate for you. For example, Decagon caters to much more enterprise customers than we do. They have different types of agreements that they've made, and different conditions in their contracts that they have to be compliant with, than we do. So I think it's just a balance, and then you keep adding governance and risk controls as you scale, get bigger customers, and make more contractual commitments. — Awesome. And, you know, I'm sure a lot of the time, as you're trying to enter these enterprises, you do pilots or initial deployments. I know there's a statistic, or a meme, out there that 95% of AI deployments in the enterprise today fail. I'm curious if you have any ideas about what the main points of failure are when somebody is trying to experiment with AI, and how you think people should actually make AI deployments successful.
Yeah, I think what we see in go-to-market is that traditional orgs operate in silos, right, with SDRs prospecting, AEs closing deals, and RevOps maintaining systems. What we see the most cutting-edge teams do is operate their go-to-market teams like product teams. So you have these go-to-market engineers who are building systems on behalf of all the reps and scaling their most creative ideas. What does that mean in practice? I'll give you an example. One of our customers, Canva, which Kimberly mentioned: one of their products helps you keep all your brand content consistent across lots of different social channels. So someone had an idea: it turns out you can use Clay to do social listening at scale, use AI to analyze those social posts, determine what's off-brand and on-brand based on brand guidelines, and then message the head of brand and say, "Hey, this is off-brand and you should use our product." That's timely, it comes with a solution, it's very clever, and you can scale it across thousands of customers. Another approach is to improve the data that you have and then build workflows on top of that. Plaid, for example, categorized all their companies into 75 very precise industries and subcategories and then used that to develop agentic workflows and marketing campaigns. So those are different approaches to how you can go about it in go-to-market. The last point, just to be very tactical about things that companies do and don't do with AI deployments: I think the best ones treat it like product launches, right? Start small, do betas, expand from there. They're smart about where they get their data from and what's publicly available. And they make sure they have a human in the loop first. — Yeah. I mean, we see something similar.
I often talk about two big criteria that we, or any AI agent, have to satisfy when selling to the enterprise. The first is that the ROI has to be very quantifiable. If we get to the end

### Segment 4 (15:00 - 20:00) [15:00]

of a pilot and people are unclear about what actually happened, or how much money they're going to save or how much more they're going to make, then it's going to be a really tough sale, because they have a bunch of other priorities and probably a bunch of other ideas for how to leverage AI. So for us, even if they test it and they're like, "Oh, this is really cool, we can definitely answer more things than we could before," if they can't quantify it, it's a tough business case. So all of our deployments are centered around showing that we are now resolving way more tickets than before, that you need fewer humans as a result, and that, on top of that, your customer satisfaction or your NPS score has gone up a ton. That is basically proof that this is doing well. The other criterion, I would say, is more of a flexibility thing: if you have to be perfect before you can go live, if you have to test a bunch of things, get accuracy near 100%, and roll out to all users at once, then it's also very tough to go live. But if you can break it down into smaller problems and go live for a portion of users first (and customer service has this nice property that you can always escalate to a human), that helps a lot with people feeling comfortable going live in production. So if you have the flexibility to get into production faster, that's also very helpful. — So one question from the audience that I'd be curious for both of your thoughts on: AI is a very crowded market today, and there are lots of people who do very similar-sounding things. How do you both think about proving that your product is better and that it actually stands out relative to the competition? — Varun, do you want to go first this time? — Sure.
So go-to-market is this funny space where it's the most crowded in some ways, because it's the biggest market in some ways. But we actually don't find too much competition, because we're going down a unique lane. In fact, Jesse and I were just talking about this backstage, in terms of how we think about our competition. For us, we start with data as the wedge when we go sell into enterprises. What's nice about that is it's very quantitative; it's very clear, because you can run a data test to determine who's better. There's a subset of competitors for that, and we win because we aggregate all the vendors, and we use AI in creative ways to find net new data points that you can't get elsewhere and to clean the data programmatically. So we do that, and then you can compare very clearly. And once we win on that basis, we use Clay to automate a lot of interesting workflows. So that's a wedge we use to differentiate in a very clear way that's numerical. — Yeah, I would say something similar. Customer service is one of the most obvious use cases of AI, so there's a lot of noise in the market, I would say. When we first started, we were almost like, oh, maybe we shouldn't work on this, because there must be a reason why it hasn't been built yet, or there are going to be a ton of incumbents, and so on. And it always looks noisier from the outside than when you're actually competing. For us there are of course a lot of big platforms, right?
Like Google and Salesforce and the labs, and so on. They all do things that are related to the space, and generally the mindset, for us at least, is that you have to win on product at the end of the day. If you can't out-compete the big companies on product, then you don't really deserve to win anyway. If you can, then it's a heavy execution game. So we do a lot to differentiate our product, and a big part of it is what I mentioned before: really being able to uplevel the non-technical users and democratize the ability to work with these agents. But yeah, maybe in six months it'll be something else, right? It's really just about execution and pace right now. — Just to build on what Jesse was saying, by the way: if you're not differentiating in a real way, then he's right, it's just a rat race. It's just execution, and you don't want to get into that. And I think the way not to get into that is to think about the problem differently. You have to have a different philosophy and a different mindset on how you think about the space and the market.
So take our market, for example. There are different philosophies on how you can go about this, right? You could say, hey, we want to incrementally improve every rep with AI. You could say, hey, we want to use AI to automate the entire life cycle. What we say is, hey, we want to use AI but cater to one set of people, GTM engineers and RevOps people, give them these primitives, and help them use AI on behalf of the rest of the company, to scale the most creative ideas and scale away the manual work. So I think if you have a different way of looking at the market, that can also really help you differentiate. — And in Decagon's case, is that differentiated point of view the ability to empower non-technical builders, or would you describe it slightly differently? — Yeah. So if you look at the previous generation of SaaS, Salesforce is very successful at this, and a lot of companies are; it's more about, hey, let's invent our own configuration language

### Segment 5 (20:00 - 25:00) [20:00]

and build an ecosystem of technical people who can come in and build that configuration. In the context of customer service, this usually looks like building a big decision tree: if this happens, do this. It's quite powerful, but as you can imagine, it's very slow and expensive to build and maintain. Our whole viewpoint is that because GenAI models are very good now, it unlocks almost a different way of building software, where you can use natural language more, be more instruction-based, and configure things just by describing them and writing these agent operating procedures, in our case. It creates a bit of a different paradigm, and the end result is that all these business users who previously would have been waiting on engineering resources can now go in and do a lot themselves. They can configure the AI, they can run A/B tests, they can analyze the results, they can iterate on different types of models, for example, as we were talking about before. That's been the unlock for us. — I think one other thing that's quite unique about both Clay and Decagon is that, unlike a lot of AI companies today, you actually serve a large range of different types of industries; you're much more horizontal in that sense. So I'm curious how you think about going deep in one vertical or one industry and deepening the product there, versus creating things that are broadly applicable across industries. Maybe Varun, do you want to start? — Sure. Yeah. We get this question all the time, actually, mostly from investors, where they're like, "Oh, you should do other industries." And then it's this funny catch-22 where they're like, "Oh, you want to do more industries?" But then if you do more industries, they're like, "Oh, there's not enough depth in this one thing."
Right? So I think for us, we see a lot of opportunity just in technology alone, to start with. Now we have customers across lots of other industries; recently even Waste Management signed up, and the State Department uses Clay. So there's a lot of diversity, but let's start with the lowest-hanging fruit: we know we can help B2B technology companies the most, so let's start with that. After we really penetrate that market very deeply and feel really good, then go all the way up, right? Because you start small: you start with SMBs, with mid-market startups, with later-stage companies, then you go to enterprises, then non-digitally-native enterprises, and then you can move to other industries, which for us would be finance or healthcare or other things like that. So that's kind of how we approach it. — I think the horizontal-versus-vertical question is kind of product dependent. Our view, for our space at least, is that the winners will be horizontal, because there are nuances between industries, but not that many, and fundamentally what people want is roughly the same: can you automate these conversations for me, and can you make the conversations higher quality? That's the same across verticals. So if that's the case, maybe in the short term you'll see some vertical players, but eventually they'll get consolidated into horizontal ones. And it just so happens that we've been able to get customers across different verticals, and we haven't really struggled servicing their individual needs, so that's just the route we've taken. I don't think one is worse than the other. — Got it. Another question from the audience: like all growth-stage companies, you have resource prioritization questions.
How do you think about it: if you had an additional dollar to spend at the moment, would you spend it on product improvements, go-to-market, general community building? How do you think about incremental resource allocation, Jesse? — Yeah, I would say for us, we've always been pretty go-to-market driven, and in the early days I think you have to be really go-to-market driven. In some ways you're also short-term thinking, because you're just optimizing to get to the next step; if you start thinking too long-term in the early days, you just waste a bunch of time, because you could be optimizing for the wrong thing. And then as you grow, I think now we're feeling like it's a bit more balanced. Maybe in the future it'll be more skewed towards product, but right now we have a big initial customer base, there's a ton of customers we still want to go after, and we're talking to them, but we need to start thinking a little more long-term, because that allows our product to scale and allows things to move faster. So I think it just changes over time: we started a little more go-to-market focused, now we're a little more balanced, and who knows what will happen in the future, depending on the bottlenecks. — Yeah, I think there's a direct answer and a more philosophical answer here. The direct answer is we do have incremental dollars; we haven't touched our fundraising for a while, and we don't plan to spend that incremental dollar, for a bunch of reasons. But more philosophically, we've always been a product-driven company. We started with a product-led growth motion, and we will remain a product-driven company. I think now that we have a sales motion that's thriving, there's this temptation to scale go-to-market really aggressively.
And I would just temper that and add caution, because it's fairly easy to scale a go-to-market motion in terms of hiring, since it's just easier to hire go-to-market people than engineers in this

### Segment 6 (25:00 - 27:00) [25:00]

competitive market. And I think you run into risks there, where you can paper over real problems in the business, problems that product should be solving, with people, and that is really tough to resolve. So for us, as we scale go-to-market, we're trying to scale engineering one to one. — Got it. So we're basically out of time. I'm going to try to squeeze in one last question that hopefully you can give a pithy answer to. Someone in the audience wants to know: if you were to build an enterprise AI company today, what's the top piece of advice you would give someone? — Hm. Yeah, I have kind of a strong view, just from my past experience, that the most important thing is to not over-index on anyone's advice and to try to understand yourself a bit more. I really fell into this trap in my first company, because, well, I was a new grad. I was super inspired by all these stories other entrepreneurs were telling, and you read a bunch of news articles or podcasts, and it's super easy to be like, oh cool, that's a cool learning, let me try to apply that to what I'm doing. And it ends up being a waste of time, or you go in the wrong direction because of it, because everyone's situation is different. Every founder has different strengths; people are in different spaces and at different stages. So yeah, probably the number one thing is just to do a little more introspection, figure out where your competitive advantages are, what you're strong at and what you're weak at, and go from there. — Yeah, I'd echo a lot of what Jesse said, and just add: really focus on your natural curiosity, focus on what you find interesting, and don't try to plan too far ahead. Don't be like, "Hey, I want to build an AI enterprise app in support," and reverse engineer every step that you think needs to happen to do that. That's not how great companies are built.
Just follow what you're curious about, take each step from there, and follow the interestingness from there. That's what I would probably recommend. — Awesome. Well, if everyone can give a hand to Jesse and Varun. Thank you guys so much for joining us.
