How OpenAI's Head of Business Products Uses ChatGPT to Save Time at Work | Nate Gonzalez
45:56


Peter Yang · 15.06.2025 · 12,996 views · 245 likes · updated 18.02.2026
Video description
Today, I want to share a new episode with Nate Gonzalez. Nate leads ChatGPT for Work, which is now used by 92% of Fortune 500 companies. In our chat, he reveals how OpenAI runs with fewer than 30 PMs, what they look for in new hires, how he personally uses ChatGPT to save time at work, and more.

Timestamps:
(00:00) 92% of Fortune 500 companies already use ChatGPT
(02:06) OpenAI's latest features for ChatGPT at work
(14:52) Why OpenAI has fewer than 30 PMs for 5,000 employees
(15:58) What traits OpenAI looks for when hiring PMs
(18:31) The 10-minute AI hack that changed how Nate works
(25:56) The most surprising thing about working at OpenAI
(29:48) The biggest barriers to AI adoption and how to overcome them
(38:36) Using ChatGPT roleplay to prep for important meetings
(41:04) ChatGPT's future: From assistant to trusted coworker
(43:06) The specific skill that will keep your job safe in the AI era

Get the takeaways: https://creatoreconomy.so/p/how-openais-head-of-business-products-uses-chatgpt-at-work-nate-gonzalez

Where to find Nate:
LinkedIn: https://www.linkedin.com/in/nate-gonzalez/
Website: https://openai.com/chatgpt/enterprise/

📌 Subscribe to this channel – more interviews coming soon!

Table of contents (10 segments)

  1. 0:00 92% of Fortune 500 companies already use ChatGPT (402 words)
  2. 2:06 OpenAI's latest features for ChatGPT at work (2632 words)
  3. 14:52 Why OpenAI has fewer than 30 PMs for 5,000 employees (209 words)
  4. 15:58 What traits OpenAI looks for when hiring PMs (493 words)
  5. 18:31 The 10-minute AI hack that changed how Nate works (1546 words)
  6. 25:56 The most surprising thing about working at OpenAI (784 words)
  7. 29:48 The biggest barriers to AI adoption and how to overcome them (1736 words)
  8. 38:36 Using ChatGPT roleplay to prep for important meetings (532 words)
  9. 41:04 ChatGPT's future: From assistant to trusted coworker (430 words)
  10. 43:06 The specific skill that will keep your job safe in the AI era (558 words)
0:00

92% of Fortune 500 companies already use ChatGPT

What does it look like to build a company on top of AI? We serve 92% of the Fortune 500. Nate Gonzalez, head of business products at OpenAI. And what kind of traits do you think the OpenAI PMs that you hire have, or what kind of traits do you look for? Entrepreneurialism. Do they have very high grit and determination, and are they willing to work on really hard problems? And there's this quote that I love: never underestimate what you can get done in 10 minutes' worth of time. It's really just stuck in my brain: what can I do with the model to make myself go faster, get these things done, and just keep progressing forward? You're at the top 1% of using AI to improve your job, right? So maybe you can name three of your favorite AI workflows that you personally use. Okay, welcome everyone. I'm really excited to have here with me Nate Gonzalez, head of business products at OpenAI, and over 92% of Fortune 500 companies already use ChatGPT Enterprise. So I'm super excited to talk to Nate about how OpenAI builds products, what traits he looks for in PMs, and how he uses ChatGPT personally to save time at work. Welcome, Nate. Hey, thank you, Peter. I really appreciate you having me, and I'm excited to dig into this. Another thing, framing-wise: yes, 92% of Fortune 500s leverage our enterprise product, but we also have millions of smaller companies and mid-market companies that leverage our team product as well. So there's an interesting motion that we have that's a mix of self-serve and a sales-managed deployment process, so that with larger, more complex customers our sales team is working directly with them to get value out of the product. So we cover both ends of that spectrum. That's awesome.
And I'm sure there are also millions of employees just using ChatGPT at work on their personal accounts. You know, there is plenty of that as well. Yes. Okay. So you just had a big
2:06

OpenAI's latest features for ChatGPT at work

launch this week, right? You launched connectors and record mode. Maybe you can talk about what these features are. Yeah, exactly. So first let me talk about what it is, and we can get into the genesis of it if that's interesting. To start out, what we launched is the ability for companies to connect to their internal knowledge sources. Tangibly, what that means is, let's take something like Google Drive or SharePoint. Either at the company level, with a kind of service account, or at the individual level via OAuth, the user is able to connect directly to that underlying data source. We respect the permissions that live inside of the organization, and the model is then able to pull and read from that data source, so that if I ask a question that would really benefit from ChatGPT having knowledge of something going on inside of my company, I'm able to get that information. That's in addition to all the pre-trained data that ChatGPT leverages and the ability to search the public web. So you get the large fact base of human history, with recency from the web, and now you're adding in the private knowledge side. That's the large part around connectors. Two pieces of that: we've launched four different connectors to the biggest internal knowledge stores, so your Google Drive, SharePoint, Box, and Dropbox, available in ChatGPT directly, and then I think about 12 connectors in deep research, so that you can combine the ability for deep research to search the web with the number one requested feature enhancement from our deep research users, which is: this is awesome, I now want to be able to search over my internal data as well. So I'll pause there, and then we can see if there are any follow-ups and we can get into record mode.
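Nate doesn't go into implementation detail, but the permission model he describes, connecting a source and letting the model read only what the asking user is allowed to see, can be sketched roughly like this. All names and shapes here are hypothetical, not OpenAI's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_users: set = field(default_factory=set)  # ACL mirrored from the source system

@dataclass
class Connector:
    """A stand-in for a knowledge source like Google Drive or SharePoint."""
    name: str
    documents: list

    def search(self, query: str, user: str) -> list:
        # Respect the permissions that already live in the organization:
        # a user only ever retrieves documents their ACL grants them.
        visible = [d for d in self.documents if user in d.allowed_users]
        return [d for d in visible if query.lower() in d.text.lower()]

def answer_with_context(query: str, user: str, connectors: list) -> str:
    # Pull private context from every connected source; in the real product
    # the model would combine this with pre-trained and web knowledge.
    hits = [d for c in connectors for d in c.search(query, user)]
    if not hits:
        return "No internal context found."
    return " | ".join(d.text for d in hits)

drive = Connector("gdrive", [
    Document("d1", "Q3 roadmap: ship connectors", {"alice"}),
    Document("d2", "Public holiday calendar", {"alice", "bob"}),
])
print(answer_with_context("roadmap", "bob", [drive]))    # bob lacks access to d1
print(answer_with_context("roadmap", "alice", [drive]))  # alice can see it
```

The point of the sketch is the ordering: filter by the user's permissions first, then match content, so the model never sees documents the asking user couldn't open themselves.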
Yeah, maybe you can talk about, I mean, launching 13 connectors is a lot, right? Maybe talk about how this process went from ideation to launch, how you built this. Yeah, sure thing. The ideation on this started a while ago. I mean, it's not an unreasonable thing to think out of the gate: the models all get much smarter with more context, right? And so to be contextually relevant for your business, being able to access data sources and have knowledge of what's inside your business is really important, and even more so it's a foundational building block for us to get agents to a place where they have very high accuracy and fidelity in the actions that they can take. So be those read actions, where you're asking an agent to go do a whole bunch of things for you (ChatGPT is a good example of that agentic capability: going to do deep research, pulling internal knowledge for you and bringing it back), or eventually write actions, where agents can execute on your behalf; Operator is a good example of that type of capability. Both of which benefit from this context: now I'm asking a question, I'm directing the model to go do something, and the model has the internal context to operate on. So that was the general idea and framing.
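The read-versus-write distinction Nate draws (agents that fetch information versus agents that act on your behalf) is often modeled by gating write actions behind explicit confirmation. A toy sketch with made-up names, not the actual product's logic:

```python
from enum import Enum

class ActionKind(Enum):
    READ = "read"    # e.g. search internal docs, fetch a file
    WRITE = "write"  # e.g. send an email, update a record

class Agent:
    def __init__(self):
        self.log = []

    def run(self, action: str, kind: ActionKind, confirmed: bool = False) -> str:
        # Read actions are safe to execute autonomously; write actions
        # mutate the world, so require an explicit user confirmation.
        if kind is ActionKind.WRITE and not confirmed:
            return f"BLOCKED: '{action}' needs user confirmation"
        self.log.append(action)
        return f"DONE: {action}"

agent = Agent()
print(agent.run("search Q3 docs", ActionKind.READ))
print(agent.run("email the summary", ActionKind.WRITE))
print(agent.run("email the summary", ActionKind.WRITE, confirmed=True))
```

The design choice this illustrates is asymmetry of risk: a bad read wastes time, while a bad write is hard to undo, so autonomy is granted per action kind rather than per agent.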
Now, when this came about, this was prior to the launch of reasoning models, I think around November of last year. So what we really had was the 4o paradigm, which is very much based on a very quick call-and-response, where the model is calling on a lot of pre-trained and post-trained knowledge. And so that was how we were initially thinking through the connectors: great, okay, we really need to think about how you're syncing and indexing repositories. That becomes a really important aspect, so that you can drive against latency and make sure you have a really low-latency, really high-quality experience. The interesting thing, the shift with the reasoning models, is that the latency constraint is relaxed to a degree, because the model has multiple turns to get you the right answer before it brings back that information. So reasoning generally works like this: you ask a question, the model goes out, it forms a hypothesis, it looks at several different variants of that hypothesis in tandem, and it pulls them all back together. Wow. And so this is how we evolved to be able to scale more quickly on the connectors, in addition to adopting MCP. So I'll pause there and see if that gives you broad context on how we got to where we are. Yeah, that's super helpful. I mean, every company has a wide range of internal knowledge databases, but the quality of some of this data could be pretty questionable; some of this stuff is out of date. So there's probably a lot of thought that goes into, if I ask a question, which files does it pull, and which part of the files? Yeah, exactly. So we did a lot of post-training on that specifically, leveraging our own data, leveraging synthetic data, where we look at that notion of recency and also, say, seniority of authorship.
So there's the notion of a social graph in there as well, to be able to surface the most relevant content. Just think about somebody who's new and onboarding into, say, OpenAI; they don't have any deeper context on what's going on, and there might be a hundred documents that have been written about a specific subject. So we spent a ton of time working to make sure that the most relevant documents are being pulled forward by the model, and you're not just getting a generalized return of, here are the 30 different documents written on the topic, go figure it out. Got it. And yeah, I haven't played with it myself, but I saw the video, and I think it's amazing; it unlocks so much. You should! Tell us what you think; we'd love the feedback. For us, we've been playing with it a lot internally, for months, as we've been in beta, dogfooding it and really trying to drive the experience. We use it every day, not just, oh, how do we dogfood the experience, but to drive our own workflows, and I think that's the really important thing for us internally. There's this book called Thinking, Fast and Slow by Daniel Kahneman, and maybe it corresponds to the reasoning models versus 4o. How do you decide what to use with this product, or when do you want to think fast versus slow? Yeah, that decision boundary does exist, right? And some of this is post-training on understanding the intent of the user's question, so that we can figure out, okay, great, what's the depth of searching that we need to do to get the right answer to this question. So the generalized paradigm is: 4o is thinking fast. It's got a whole bunch of knowledge.
If you ask me a question on a topic that I know, I can give you a really quick summary answer. If you ask me a deeper question, like, hey, do a really deep-dive analysis on this industry, I probably want to think for a minute, structure that, and bring it back to you. And so that's a natural decision boundary that exists right now, which we push into the connectors product and the connectors experiences. Got it. And some of the most relevant and recent information in your company is just the meetings that happen, right, and the notes you take during meetings. So maybe you can talk about record mode a little bit. So record mode: you could think about them as two distinct products, but part of the reason that we launched these and talk about these together is that it actually rounds out that knowledge picture. If we go back to what I talked about earlier, you have the pre-trained knowledge that the model has, which you then post-train against for specific use cases to make sure that you're really good at answering those questions; that's the fast-thinking paradigm. You have the reasoning paradigm that allows you to go deeper, which benefits from public search and also now benefits from connectors to your internal knowledge stores, and that's all the information that's written down. And the big other source of information to round out the picture of institutional knowledge is what we are doing right now: what is actually spoken in meetings, which doesn't necessarily get recorded, and doesn't necessarily even get transcribed accurately. So you have shorthand notes that you're pushing out to colleagues, and you're trying to define action items and the like. And so what we've done is, yes, there are many companies that have built an AI version of recording meetings.
There's a ton of fun stuff to go into there that we're going to keep driving into, to make that much more feature-rich and make meetings much more first-class in how we think about this. But we also wanted to have a generalized capability for anybody to record, because the purpose here is actually to take that information and model it just like internal knowledge that you might have with, say, Google Drive. And so in the future I can recall from that. I can actually say, great, what did Peter and I talk about two weeks ago? I can pull back that summary, and we timestamp that summary, so that from the individual action items that are listed I can go to the underlying transcript, and I'm able to get the richer context to the degree that I need, traversing between those layers. Yeah, I think there's an advantage here, because meeting notes are just another knowledge source that a company has. So if you have a pure-play AI meeting product, it's not as comprehensive as what this is, right? It's another part of knowledge you can get. Yeah, that's right. And maybe go a little deeper: evaluations are really important for these AI products. So how do you guys decide whether this thing is good enough to launch, or what kind of stuff do you look at to determine how good the meeting notes are? Yeah. The evaluations are really important, and we need to set that initial bar. It's like any metric that you establish: the first step is to align on what the north-star metric is. You then have to be able to measure it, get a baseline, and start to hill-climb on quality against that baseline. That's the process in play there in terms of assessing what that bar looks like. A lot of that is working internally in our beta process, where we are testing this product every single day.
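The recall flow Nate describes for record mode, a summary whose action items carry timestamps that link back into the raw transcript, suggests a data shape like the following. This is a sketch with made-up names, not the product's actual schema:

```python
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class TranscriptSegment:
    start_sec: float
    text: str

@dataclass
class ActionItem:
    summary: str
    timestamp_sec: float  # anchor back into the raw transcript

class Meeting:
    def __init__(self, segments):
        self.segments = sorted(segments, key=lambda s: s.start_sec)
        self._starts = [s.start_sec for s in self.segments]

    def segment_at(self, t: float) -> TranscriptSegment:
        # Traverse from the summary layer down to the underlying transcript:
        # find the last segment starting at or before the given timestamp.
        i = bisect_right(self._starts, t) - 1
        return self.segments[max(i, 0)]

meeting = Meeting([
    TranscriptSegment(0.0, "Intro and agenda"),
    TranscriptSegment(95.0, "Peter: let's ship the connector beta next week"),
    TranscriptSegment(210.0, "Wrap-up"),
])
item = ActionItem("Ship connector beta next week", timestamp_sec=97.0)
print(meeting.segment_at(item.timestamp_sec).text)
```

Keeping the timestamp on each action item is what lets a reader jump from the condensed summary to the exact spoken context, the "traverse between those layers" idea above.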
We use this internally all the time. So there's very much this notion of, great, what does the eval look like, and does this actually work in practice as we play with it? There's the combination of those two elements, and then also the external alpha and beta testing that we've done with customers, where we take that feedback and understand whether customers are actually getting to value. Because there is a version of the eval world where you want to get to perfect before you launch, and that's not the general ethos of what we go for. We want to set a really high quality bar, but we want to get as quickly as possible to user signal, because that's where your evals matter in a very deep sense: once we get into product land, making sure that users like yourself and others are getting value out of the experience. And is it kind of just looking at stuff like getting the user to tell you how accurate the transcription is, how useful it is, that kind of stuff? Yeah, we try to track it. Got it. Yeah, in terms of evals, there's accuracy of the transcription that we look for, and there are different ways and signals that we use for that. We'll get qualitative feedback directly from users. We're able to collect whether or not it was a good or bad summary and the like, which starts to drive things quantitatively. There are other signals that we then look for, in terms of how people ask follow-up questions and the like, and whether or not we're actually being clear or returning the right information. Got it. Yeah, I think my theory is that evals just require a lot of manual involvement from a lot of people; you've just gotta get the feedback loop going. I think initially, yes. And I think it's a process that the industry generally is working to get much better at.
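The loop Nate outlines, pick a north-star metric, baseline it, then hill-climb candidate changes against that baseline, reduces to something like the sketch below. The eval cases and scoring here are made up for illustration; real evals would also fold in the live user signal he mentions:

```python
def evaluate(model, eval_set) -> float:
    """Score a model as the fraction of eval cases it gets right."""
    return sum(model(case["input"]) == case["expected"] for case in eval_set) / len(eval_set)

def hill_climb(baseline_model, candidates, eval_set):
    # Keep a candidate only if it beats the current best on the eval set.
    best_model, best_score = baseline_model, evaluate(baseline_model, eval_set)
    for cand in candidates:
        score = evaluate(cand, eval_set)
        if score > best_score:
            best_model, best_score = cand, score
    return best_model, best_score

# Toy "models": the task is to double the input.
eval_set = [{"input": x, "expected": x * 2} for x in range(10)]
baseline = lambda x: x       # only correct for x = 0
improved = lambda x: x * 2   # correct everywhere
model, score = hill_climb(baseline, [improved], eval_set)
print(score)  # 1.0
```

The point of measuring a baseline first is that "good enough to launch" becomes a comparison, not a guess: a change ships only when it measurably beats what you already have.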
I actually just realized how important it is to build good products in this space, given that we need AI to be not just useful but much more reliable, particularly in a work context, where the focus is: great, if I need AI to drive workflows inside of my company, I need to make sure that it is reliable and accurate. Yeah, it can't hallucinate a bunch of meeting notes; that's not good. Yeah. Okay. So let's switch gears a little bit. I want to talk about the product org at OpenAI a little bit more. And I found out, I think from Kevin's interview with Lenny, that there are actually 30 PMs or fewer
14:52

Why OpenAI has fewer than 30 PMs for 5,000 employees

at OpenAI. So I'm just curious, why is the PM team so lean? I think there are like 5,000 people now, right? Yeah, there are a couple of things at play here. I would say the most important one is that we very much want to be the model of what it looks like to build a company on top of AI, and that means: how do we extend every single one of our employees? So yes, our PM team is lean. Our engineering team, relatively speaking, if you think about the size and scope of the business, is also relatively lean. And that's because we are leaning in, every single day, to what we can do better and faster working with the models directly. And so it allows myself and my teammates to extend our own capabilities as PMs and cover more ground. Got it. So basically every PM has AI copilots, ChatGPT or maybe even agents, helping them do the work, right? That's kind of the idea. And what kind of traits do you think the OpenAI PMs that you hire have, or what
15:58

What traits OpenAI looks for when hiring PMs

kind of traits do you look for? Yeah, there are a couple that we look for pretty consistently here. One is entrepreneurialism: do they have very high grit and determination, and are they willing to work on really hard problems? I'd say most important is product sense: the ability to deeply understand user needs, creatively brainstorm, and justify solutions, and then balance between the user level (is the product you're building fit for purpose for the problem statement you crafted?) and the business considerations: how are we thinking through what this actually means for our ability to serve consistently? And then there's always that rubric of how you tie it back to the impact, back to the mission. That is the thing that we want PMs to be principally focused on. We also screen very heavily for execution capabilities. And then, too, we want people who are very curious. So yes, it's great if they've got prior or deep experience in ML; it's also great if they just have super high user empathy and are able to dive in and understand what's happening in the ecosystem, how it's evolving, and how to use it to better solve user problems. Yeah. Maybe you want people who, I mean, personally I'm always looking to use AI to be more lazy, or to save time. So you want people who are trying to integrate AI into their workflows, right? Yeah, though I wouldn't frame it as lazy so much as: what can I get done faster? I'll give you an example of how I like to use AI, and some interesting framings there. One element is internal research. Say I'm trying to get up to speed on what's happening on a specific project within our research organization, or on the technical implementation of a specific system within our engineering teams.
This is now something that I can get up to speed on much faster, without an endless number of meetings to sit down and understand it, where you're pulling time from those other teams. So that's another element that lets us all run faster: you're able to onboard your own context. Not onboarding to a new company, but, at the pace that the industry is evolving, being able to consistently almost re-onboard yourself: oh great, here's the next topic, how do I understand it, how might I apply it to actual product work. And the other part is, man, there's this quote that I loved. This is probably corny, but it's from the Acquired podcast, the
18:31

The 10-minute AI hack that changed how Nate works

IKEA one, where at one point the IKEA founder said something like: never underestimate what you can get done in 10 minutes' worth of time. And it's really just stuck in my brain. Anytime I'm like, okay, it's 10 minutes before a meeting, take a breath, grab a snack or whatever, it's: what can you get done in this 10-minute period? I've got this laundry list of things that I need to do every single day; I've got that punch list. What can I do with the model to make myself go faster, get these things done, and just keep progressing forward? So I think it's less lazy and much more focused: how do I become as productive as possible at leveraging models? Yeah, I usually try to get as far as possible with AI, you know, working on a strategy document or doing research, and then only when I get stuck do I go consult my teammates, and then kind of regroup that way. You can get quite far. Yeah. Speaking of researchers, you probably work very closely with the researchers. How do you plan out roadmaps when you don't really know what's coming, or maybe you do, a couple months down the line, on the model side? Yeah, a couple of ways. One, you take the traditional product operating model of three in a box, with PM, design, and engineering, and obviously our data science colleagues; we all scrum together in a pod, as you might expect. And I think the additional part here is the research element.
We work extremely closely with the research teams to understand what underlying research is happening and its relative maturation, and we work upstream with them to develop the products that we are putting out. So there's kind of a staged process: for research that is more nascent, it's having knowledge of it, so we might think about what kind of product experiences we could craft there. And a lot of those ideas can come from multiple different places; they'll come from PMs, engineers, and individual researchers: what type of product capabilities are emergent, and what could you now build to further drive the impact of our mission forward? So there's a lot of bottom-up flow of ideas that comes out of that, which we then want to drive into our product road-mapping process. That's the main way that we end up working there. Yeah, I'll pause. And how far out do you guys plan? Maybe on the enterprise side you plan a little bit further, but this stuff is changing every three or six months, right? How far do you go on the roadmap side? I mean, we run a quarterly planning process. The reality of that is that as soon as you wrap the plan, it's out of date, and you're really using it as a trade-off framework: here are the things that we believe are most important to drive forward, how we are delivering value to users, obviously on the consumer side and also on the business side, and, based on that, what's highest impact. So as new things roll out, we're constantly reassessing that list. Got it. And how do you balance between, I think one of my pet peeves is that product orgs just spend a lot of time on internal planning and internal reviews, and they talk to customers
I don't think. What kind of advice do you give your team on finding the right balance? We try to minimize the former as much as possible, the actual process around road-mapping, because again, it's such a fluid, constant process of iteration and development. It's not, oh, we have no idea what we might go do next quarter; it's that there are so many things we could go do that we have pretty high conviction in. How do we actually focus on the right ones that have the highest impact? That then necessitates the latter part of what you just said: have we been talking to and engaging with customers? Do we deeply understand the analytics around our products, where they are good and where they are deficient, and where we need to really lean in to make sure that we are getting the right level of impact out of them? Yeah. I think especially in enterprise, maybe you just talk to five or ten customers, and that's kind of how you build products, right? They probably all complain about the same things. I mean, we talk to enterprise customers; I probably end up having four or five conversations directly a week. Some of it is letting folks know what we are doing and what we're planning, and getting their feedback very directly. And yes, very specific themes start to emerge, so you do end up getting that direct line to customers that our product team has. But we also have a very close partnership with our go-to-market team, who are sitting with customers every single day, and we have a bunch of feedback loops to get that information back into product.
And so then you have multiple different sources that we consider when we're going into planning and when we're building: direct user feedback, quantitative research, and the feedback that's coming in from go-to-market. So we have a bunch of different data and signals that we use. And can you give an example of what you mentioned, how ideas sometimes come from the bottom up? Do you have an example of that from your team or anyone else? Yeah, a good example here is the canvas project. An IC, I believe on the research team, pitched the canvas idea; I think it was somewhere around the July 4th break, something like that. Her manager agreed immediately and staffed five to six engineers, and that team just formed. They were kind of team-forming really quickly: hey, there's a really interesting idea, we think it's super high leverage, these are the reasons why, who's up for working on it, who wants to join and drive this forward? It's not like conscription, where we need to go pull these people; a lot of people say, I want to go work on that problem, and they're able to gravitate to: here's a really important problem, let's dive in and solve it now. And so canvas became our first major UI update to what had been the initial ChatGPT release. You go from a place where you have a very basic chat interface to a much richer experience that has a whole bunch of different applications, both now and in the future, solely from a single individual; this was not part of her product roadmap or a specific area she owned, necessarily. The best idea wins; that's what we drive for, and it kind of doesn't matter where it comes from within the company. Got it.
So it's all about being high-agency, taking the initiative, making things happen yourself, right? Yeah. And I guess in the interview process you're probably not looking for some 10-year fan; you want people who have seen the failures and have the grit, you know? Yeah, exactly. Got it. Okay. And the company moves so fast, man. Just from the outside, I see you guys shipping like every other week; it's pretty wild. But how do you balance that speed with other stuff? What's kind of the biggest misconception about working at OpenAI? Sure. I mean, I think one big misconception that I see
25:56

The most surprising thing about working at OpenAI

is that moving quickly involves cutting corners, particularly around, say, safety. I think the thing to be 100% grounded on is that our team is principally, deeply mission-oriented. So when we go back to several questions you've had about what a successful PM looks like at OpenAI: the successful PM at OpenAI is 100% focused on what impact they're coming in to drive, and the reason they're here is because they want to drive that impact. That question is front of mind as people think through what we are delivering and shipping, and it pushes both the urgency to ship quickly and the urgency to ship responsibly. There are a whole bunch of things that we pause on, that are not ready to go out, because we feel like, okay, there's actually a bunch of safety eval work that we need to make sure we are hitting the bar on before this ships, so we will hold. So the interesting thing to consider is that there's a whole bunch more we could just ship, but we are deliberately holding that bar to make sure it's of the quality that's necessary. But again, there's a culture around urgency, how we use our tools, and how we can drive that impact, and that's what drives the underlying speed and execution velocity. So those are things you could think of as opposing constructs, but in reality they're not. It's good to have that tension, right? Like, I want to have the urgency, but at the same time you want to uphold the quality bar. Yeah, absolutely. And just building AI products myself, a lot of times I find, you know, this thing would be much better if only we had a better model; it only works like 60% of the time. So let's just wait around a few months until OpenAI ships a better model.
I don't know if that's what you guys do. Does that happen internally too? Like, "if only we had something better, just pause until the model..." No, I think it's that we have line of sight, or belief, or conviction on what the better model that's coming will be. So let's understand that, and understand what's the right first step to take on that pathway with what we have right now. Or drive even more focus and urgency on how we accelerate our efforts to get signal into that model, and develop it to a better place where it interplays really well with our products. Got it. Let me ask a quick question about impact. When you say impact, is it mostly business impact, metric impact, or are there multiple ways to measure impact here? The main way we think about measuring impact is how many customers we're serving, right? On the consumer side, the business side. Within organizations, how many customers have gone wall-to-wall with us? Are we the blanket, default, useful tool inside of businesses? Utility is the really important thing for us to understand inside of businesses, because going back to the mission, the reason businesses are critical for us is that's where most of the very valuable economic work globally happens, right? Yes. That's the scale vector for how you drive global impact and global change: working directly with businesses to help them transform their own industries, get to drug discovery faster.
Elements like that have global impact that's super important for us, so that's the high-level way we think about impact. That's true. If ChatGPT succeeds in the enterprise, it'll make the whole economy more productive, hopefully. Yeah. So let's talk about that a little more. What are some barriers to
29:48

The biggest barriers to AI adoption and how to overcome them

getting employees to use AI more in companies, and what tactics have been useful in overcoming them? Yeah, if you rewind to even last year, I think the general vibe was a ton of experimentation within companies. A whole bunch of AI products had just launched, and it wasn't immediately clear to folks who weren't playing with these every single day what and how to apply them within the context of their business. What we've seen going into 2025 is the shift into actual full deployment of these tools: where are the value-driving use cases coming from, how do we find those, how do we lean in and focus our organization around those? That's the big shift we've started to see. One trend driving that shift is the internal rise of AI champions. Not "okay, let's get AI to everybody," but "this is how I can help transform my business leveraging AI and OpenAI, and we're going to partner with you to make that happen." So it becomes the combination of the product work my team drives and the go-to-market work, where we're sitting jointly with customers to understand and define that, and make sure they're getting the right level of deployment. Specific examples are companies like Fanatics, Moderna, and Morgan Stanley. On the Morgan Stanley side, they're leveraging a lot of our work that's now embedded into their wealth management services. They said, cool, here's the problem we think is really interesting and very high-leverage for us to solve. We'll do a lot of things with you, but we're going to focus most of our attention on this high-leverage use case. Crack that, and then we'll move on to the next one. Got it. And these internal champions aren't necessarily the CEO or the execs.
They can just be some IC or some employee, right? Yeah, exactly. There'll be heads of AI, sometimes heads of different product divisions, often CIOs of companies who want to drive transformational change. It's a mix of different champions, and I think that's the interesting thing too: it's not uniform, not dissimilar to what we talked about with OpenAI. It can be a little more bottoms-up, where there's an exec who feels passionately about the change we can drive; they're banging the table inside their company, and we're working with them directly to map to that. Got it. Okay, so if I was some employee or some executive thinking, hey, I really want my employees to adopt AI as soon as possible, do you have some steps I should go through? Yeah, a lot of it is, okay, let's actually define it. Broad adoption, yes, great, but to what end, right? We can drive broad adoption; there's a whole bunch we can do around getting people to understand and getting to initial value. There's a lot of product work we can do to drive people to value: if you're a data scientist versus an engineer, how do we think about the in-product work that drives you toward value quicker? Understanding your use case, what jobs you're trying to get done, and therefore how we curate the experience for you. That's a big area we dive into on the product side, which answers the general question. But the specific question you asked is then: great, what use cases inside your business, or what business processes, are highest-leverage to focus on?
And then, wrapping back: with the tools we have and are building out, and at the speed we're advancing, what are the one or two bets we're going to take internally and dig in on, grafting AI into those workflows? Because those one or two vectors of initial change drive outsized value for those organizations. So the general advice, if I recap, is: broad deployment. Get this in the hands of every single employee so they get the familiarity, because you want them to be fluent in these tools; that's how you drive some of the bottoms-up culture we have at OpenAI into any business. In addition, you also want to find the couple of use cases you think are going to drive outsized value for workflows you have inside your company today, and then focus on those. That's where a ton of our go-to-market attention is specifically focused. And I guess some of these use cases are what you prioritize, right? Like looking up internal wiki sources, streamlining meetings, that kind of stuff? I would say no, not necessarily; those are all tools. Let's go back to the Morgan Stanley example, right? We have a wealth management product. We have internal metrics for what success looks like there, and internal aspirations for how big we could grow that market to be. Based on that, how do we structure a product experience we think is going to be better? How do we evaluate it and drive progress against that specific goal? Something like connectors, or something like record mode, are then building blocks that help you drive that outcome. These are tools to drive an outcome; the purpose is not tools for tools' sake.
It's making sure you can identify the highest-leverage product opportunity within your own business and then push that forward. And have you seen employees at these companies build their own custom GPTs or AI workflows that get broad adoption, kind of from the bottom up? Yeah, exactly. Moderna is a really good example of that. I'm trying to remember, it's thousands of GPTs, if I'm not mistaken, that have been deployed internally. Again, it really speaks to that bottoms-up adoption culture: the ability to create a GPT and share GPTs with colleagues, so they get the collective benefit of the knowledge work they're doing being exposed and extended through GPTs. That's a really common pattern we've found, and it's another one that was driven by an internal champion within Moderna, then pushed down: "hey, here's an opportunity for you all to go build," and then many different flowers bloom within the company. Yeah, I think that's one thing OpenAI really has: being able to share these GPTs. I have all these great projects that I want to share with my colleagues, but some of the other providers don't let you share easily, so it's such a pain. GPTs make it so easy. Yeah, they certainly do help quite a bit. Okay, so I want to talk about... I mean, you're in the top 1% of using AI to improve your job, right? So maybe you can name three of your favorite AI workflows that you personally use to save time at work. Yeah, there's the one I mentioned of doing internal research to get myself smarter, more fluent, up to date. Adjacent to that is external research. I'm going to be chatting with a customer; you know, we serve 92% of the Fortune 500.
If you think across the industries that spans, it's: great, how do I understand their company and their context in a way that, while I understand our products and our product context, we can start to find that mapping and get to value much more quickly? And then there's the day-to-day productivity side. Not just drafting emails, memos, and Slack messages and making sure I'm cranking through those processes much faster, but there's also a lot of internal data analysis I'm able to do with the tool directly, and the ability to understand code I'm seeing from engineers and extend my own skill set. Those are the areas I really focus on. You're looking at PRs and stuff? Yep, at least I can say, great, let's understand what went through there. Yeah, I'll tell you, personally I use this stuff too. I don't know if you do this, but I go to a conference room and start talking to ChatGPT. I'm like, "Hey, I got this feedback on my document. What do you think? Here's some context." And I work with it back and forth. There's that. One thing that I do with voice mode
38:36

Using ChatGPT roleplay to prep for important meetings

specifically is similar, but it's a lot of roleplay. So, great: there's a specific candidate I'm really keen on that I want to talk to; let's think through the question and answer. Or there's a customer interaction coming up that's really critical; let's work through that. Or there's a podcast with a guy named Peter coming up; play that role and let's work through how it goes. There's a whole bunch of work you can do to have the model assume another personality, so you can hone your own craft and skills and get really crisp on your messaging. Okay, so you basically set up a project and upload a bunch of context, so it can be like Peter or whoever you're talking to? You can do that, yes. You're uploading the context. This is where the ability, if you go back to the top of the conversation, to pull things like connectors, or record-mode summaries of past meetings, into a project matters: I want voice mode to have that background context. So now it's able to play this role or be a thought partner. I will often have it critique something I've either written or am meant to speak about: here's my initial draft, what am I missing, where could this be stronger, what are the weakest parts of this argument? So you get beyond drafting to the actual quality of the output. Yeah, that's exactly what I do with it. I have a coach project, and I do this specifically in ChatGPT because ChatGPT has memory. So I tell it about my life, and every quarter I check in with it and get some advice.
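The roleplay-and-critique workflow described above can also be sketched programmatically. This is a minimal, hypothetical example: the persona, context notes, and prompt strings are all illustrative assumptions, not anything from the interview, and the sketch only builds the chat payload (it does not call the API), so it runs offline.

```python
# Sketch of a roleplay-prep prompt, modeled on the voice-mode workflow
# described above. All strings here are illustrative assumptions.

def build_roleplay_messages(persona: str, context_notes: str, opener: str) -> list:
    """Assemble a chat payload asking the model to stay in character
    and critique the user's answers after each exchange."""
    system = (
        f"Roleplay as {persona}. Stay in character, push back with hard "
        f"questions, and use this background context:\n{context_notes}\n"
        "After each exchange, critique my answer: point out weak arguments, "
        "key assumptions, and logical fallacies."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": opener},
    ]

messages = build_roleplay_messages(
    persona="a podcast host interviewing me about AI adoption at work",
    context_notes="Hypothetical meeting summary: launch recap, Q3 goals.",
    opener="Here's my draft answer on AI adoption. What am I missing?",
)

# With an API key configured, this payload could be sent via the official
# OpenAI Python SDK, e.g.:
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
```

Feeding each revised draft back as a new user turn gives the "quality improvement loop" Nate describes, rather than a one-shot critique.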
It's more patient than my wife is, you know, so it's good. Okay, let's wrap up by talking about the future of product teams. We're clearly seeing this trend where AI is going from co-pilots, where you have a thought partner, to a point where you can delegate tasks to AI, go watch Netflix for an hour, and come back to find the task finished, right? So how is that going to change the PM role? Do you think every PM will have multiple AI agents that they manage? The goal for us is extending everybody's productivity, and that's a lot of what we talked about with what PMs do inside of OpenAI today, and so
41:04

ChatGPT's future: From assistant to trusted coworker

it was really thinking about ChatGPT as this extension of yourself and of the team, like a virtual coworker. You can imagine waking up in the morning and sitting down at ChatGPT, a more personalized interface with a list of interesting tasks that have come in, and being able to say, great, these are the ones I want to delegate out to you; bring them back to me when they're done. I'm going to take one through three on this list, and have the model working against the others. Behind the scenes that could mean orchestration with a lot of different agents, but focusing the interaction with ChatGPT, employee by employee, becomes really beneficial to reduce cognitive load, and to have the model do a lot of the "okay, now I need to use deep research as a tool, now I need to use Operator as a tool." There are a lot of interesting ideas in there. Got it. So it's almost like a hub where you have some interns working for you, and they synthesize information and do some of the grunt work for you. I mean, interns who can do PhD-level math and deeply understand any code that's been written, so I'd say more than interns, right? It probably felt much more like an intern in 2023 and parts of 2024. As we've gotten into reasoning models, and better UI and UX paradigms with those reasoning models, I'd say it's much more than an intern: it's actually a coworker you're working with and trust to get work done. Got it. That's very exciting. Yeah, I can see the UI evolving beyond just a simple "what do you want to chat about today" interface. Really excited for that. So, last question.
You know, I think there's some angst about whether we're all going to have jobs, white-collar jobs, in a year or so. For people who want to level up their AI skills or maybe join OpenAI, what
43:06

The specific skill that will keep your job safe in the AI era

what's your advice? A lot of advice is just "go try the tools," right? Maybe you have something more specific. Yeah, it's not just go try the tools. It's: how do you make these tools an extension of the way that you do work? That's the important thing. It's not just "okay, great, I kind of know what these things do." It's: how am I using these tools every single day to extend my own capabilities? Again, it's not just how do I write emails faster. It's both: faster, and how do I write better emails? How do I actually think about the content of the work I'm doing and improve the quality there? That gets back to the fluency of understanding. If I ask it to write me a draft email, and then ask it to critique it, to point out weak points in the argument, key assumptions, or logical fallacies it might find, it actually allows me to improve. Those are the loops you want to find. Not just productivity loops, but actual quality-improvement loops in your own thinking, your own process, and your own output. Got it. So it's like PM 101: start with the customer. What are your own problems? What do you spend your week on, and what can AI help with? That kind of stuff. And if I'm inspired by this conversation and want to use ChatGPT officially at work, where do I go? Yep, you can get started really quickly with ChatGPT Team. We made that easier for a lot of businesses: SSO was part of the announcement on Wednesday. That's kind of the front door; come in and you'll be able to self-serve and get started really quickly. As you want and need more advanced compliance features, you work with the go-to-market teams directly to find and map use cases to value; ChatGPT Enterprise is the really strong product there.
Both of them have the same privacy guarantees. We never train on your data; the data in the workspace, for either Team or Enterprise, never makes it into training pipelines. It's your data, and it stays inside the workspace. You then have a lot of control, and particularly on the Enterprise side, a lot of advanced security and compliance controls to make sure you're deploying AI both safely and responsibly, relative to your information security but also the compliance regime you might be working in across various industries. I really love how your team doesn't just say, "here's access, go use ChatGPT." It's "what specific problems does your company have? Let's solve them together." I really love that attitude. That's exactly it. Cool. All right, Nate, thanks so much, man. I learned a lot from this conversation. Good deal. I really appreciate the time. Take care, Peter.
