Today, I want to share a new episode with Nan Yu (Linear’s Head of Product).
Despite only having 2 PMs, Linear has scaled to over 15,000 companies including OpenAI and Ramp. In our chat, Nan and I did a fun exercise to rank what PM skills still matter in the AI era and he also demoed Linear’s new AI agents that can write code, triage tickets, and analyze customer feedback.
Nan and I talked about:
(00:00) Why Linear is a billion dollar company with just 2 PMs
(03:22) Ranking PM skills: What survives vs gets disrupted by AI
(11:10) Why strategy is actually being disrupted by AI
(21:51) The new skills PMs must build: context and agent management
(26:57) Craft vs speed: Linear's 10% rule for shipping fast
(38:09) Live demo: Using MCP to analyze customer feedback instantly
(53:04) Delegating entire backlogs to AI agents with one command
(57:12) Linear's future as an operating system for human-AI collaboration
Get the takeaways: https://creatoreconomy.so/p/ranking-every-pm-skill-what-still-matters-gets-disrupted-by-ai-nan-yu-linear
Where to find Nan:
X: https://x.com/thenanyu
Website: https://linear.app/
📌 Subscribe to this channel – more interviews coming soon!
Why Linear is a billion dollar company with just 2 PMs
So everything you write — the main audience is not really people anymore, because even the people who are going to read your stuff are reading it using an AI. You can literally just go command-A, select all, and then delegate it to codegen. You can spawn off a hundred of these simultaneously if that's what you want to do. A lot of what we call strategic thinking and prioritization, where we have to draw a bunch of 2x2s and really figure out how to logic our way through it — that is something they are excellent at. Say what you want about AI, they're not really quite good at the emotional stuff yet. — You and I made a slide before the interview. The point is to list all the skills that are still relevant, skills that are being disrupted or maybe already disrupted, and new skills that PMs should build. Why don't we just talk about them one by one? Okay, welcome everyone. My guest today is Nan, Head of Product at Linear. Linear only has two PMs, but it's scaled to over 15,000 companies, including OpenAI, Ramp, Mercury, and other great companies. Today, I'm going to ask Nan to do a fun exercise with me to rank what PM skills will still matter in the AI era, and he will also demo his favorite AI workflows and Linear's new AI agents. So, welcome. — Yeah, great to be here. — So, let's start with this question: why do you only have two PMs, and do you think we'll see far fewer PM openings across the industry? — I think the nature of the job is going to change quite a lot. We have very few PMs for a variety of reasons, some of which are, you know, kind of luck, right?
Like we have very skilled engineers and designers who can do a lot of what's traditionally been done by PMs in terms of discovery and thinking through business cases and things like that. But we also wanted to hire very slowly, on purpose. A lot of times at a high-growth company, people hire really quickly on the PM side because there's just too much stuff to keep track of, right? It's not necessarily even to build. You just need more hands to maintain things: there are too many PRDs to maintain, too many updates to write — these very mechanical types of needs that always end up falling into the laps of PMs, because we're the glue, right, that holds things together. And we didn't want to hire people specifically because of that need, because even three years ago we saw that type of work being largely overtaken by AI — and we didn't want to be stuck in a position where we had a bunch of people to do this stuff, but now this stuff is basically free, and we'd have to find new things for them to do. We'd rather hire people for things we think they'd be able to own and grow with for a long period of time. — Got it. So let's get very practical here. Let's pull up that slide we made. — Yeah, sure. So you and I made a slide before the interview, and the point is to list all the skills that are still relevant, skills that are being disrupted or maybe already disrupted, and new skills that PMs should build. Why don't we just talk about them one by
Ranking PM skills: What survives vs gets disrupted by AI
one. — Right. So let's start with product taste. What does product taste actually mean? — What is product taste? Yeah, I know it's weird, because if you start describing it, you go into all this detail and try to figure out all the minutiae of it. The real meaning behind it is: can you get a feeling for how good or bad specific products or interactions feel? And does the feeling you receive from it map to a large enough audience that you can either be representative of them, or at least run a simulation in your head — okay, here's how this person would feel in this situation. A lot of it is very lizard-brain. There's not a lot of reasoning around it. You're just trying to understand what kinds of emotions different types of interactions and features conjure up within you. — Is it kind of like product sense? To me, product sense is: you just have to understand the customers. You have to have seen enough products that you have a taste for what is good or bad, that kind of stuff. — Yeah. And a lot of times you'll have the feeling before you know why, right? You'll see a thing and go: oh, they're going to hate that, or they're going to love that.
You're going to have that feeling, and then you have to work backwards from it and figure out: okay, what is the actual reasoning I can explain to somebody about why they would have such a strong emotional reaction? — Okay, got it. Number two is branding. Is that personal branding, or what do you mean by brand? — I think it's both. It's certainly product branding — even things like choosing the name of the thing in a way that communicates all of the information in a really compact package. People will form an impression of your product from the very smallest piece of information. So part of your job is to figure out how to leverage those small touch points to get people primed to receive your product's message, or what you're trying to do. But also personal brand, right? We see all these companies now where somebody's almost influencer-like profile leads all of the actual product-market fit. You look at Cluely, you look at all these companies, especially in the AI space — there's so much brand building, whether in a commercial or personal context, before there's even a product. And I think a lot of that is important because the really low-hanging stuff has been picked. So you almost want to work backwards again: build the emotional resonance before you even have a value offering. — Yeah, it's interesting you mention Cluely, because I'm not sure they actually have a viable product, but their branding and their marketing is great. — Yeah. I don't know a single person who's used the product, but everyone I know knows about the product. — Knows about it.
Yeah. Okay. And the third one is ownership and risk appetite. So is that just having agency, or — Yeah, I think these two things are really paired, because people talk about ownership — what does ownership mean? It means you're responsible for the outcomes — and that's only half of the equation, right? The other half is that you're going to try to get the best outcome possible — right? And there's some risk involved in that, because you can't get the best outcome possible without incurring risk. If you go for the safest outcome, you might not completely fall flat, but it also means your upside is really capped. So you have to have the right amount of risk appetite and also own the result, which means sometimes you're going to have failures. You're going to get knocked down and then have to maneuver your way back out, which is super annoying, especially in a corporate context where you have political capital or whatever it happens to be. That's the game, and you have to really be ready and willing to play it. — Yeah, you've got to manage expectations. I feel like one skill that's relevant — maybe it's not even a skill — is that you just have to give a damn about the product, man. You have to do whatever it takes to make it good. — Yeah. You have to care, and you have to be ready for people to look to you for whatever the result ends up being. Like when we build a feature, before we launch, I know that all of the credit and the hard work happened on the engineering team and the design team who built all this stuff, but if everyone hates it, I'm the one they're going to be yelling at. So I just have to understand that that's going to happen.
— And number four is pretty interesting, because stakeholder management is kind of what a PM does all the time at a big company, right? Like 80% of the job. So you think that's still very important, because people still have to work with each other? — Yeah. Especially in larger organizations, you have to get enough budget and enough runway to let your ideas actually happen. You have to manage resources and all that stuff. And it kind of goes back to risk appetite, which also means you have to know how much leeway you have: how much can I piss these people off, such that if the result is good, everyone's going to smile and laugh and have a beer at the end, right? You have to be able to make that calculation and go for it. — And I think that's actually important even at a smaller company. Even at Linear, you want to learn how Karri thinks, and you want to make it easy to collaborate with other folks. — Oh yeah, for sure. We're not going to agree on 100% of things, and there's going to be some stuff where it's: look, we're going to take a bit of a risk here, and you're going to have to come along with me a little bit. And sometimes, if one of the founders — Karri or Tuomas — has a very strong point of view, it's like: okay, let's try your way. Even if I think my way has a higher chance of success, it's not like your idea's chance is zero, so we can just do things in the opposite order as well. — So I guess four ties directly to five, which is EQ, or emotional intelligence. I guess it's both with customers and internally. Is that what you're thinking? — Yeah.
I mean, I think all of these things really tie to emotional intelligence, right? The branding stuff — you're basically getting all of the emotional framing and benefit before any of the actual physical value. Product taste is the same thing, and stakeholder management, a big part of that. — Okay. I don't know how to build emotional intelligence. It's hard to build, man. — Yeah, it is. I think the only successful way I've found to practice it is to make a concerted effort to say: if I were doing the job my user is doing — I'm in B2B, right? — so if I were in their role and their position, with their pressures and stakeholders and things like that, what would be motivating me? What are my motivations, what are my fears? Do that kind of analysis, and then you can start really building it and start intuiting it. — Okay, so it's kind of like empathy: putting yourself in your customers' and stakeholders' shoes, that kind of thing. — Yeah. And you can be kind of technical about it. You can literally just do the economist thing and say: here are the actual incentives in front of me — if I were acting according to these, what would I do? — Got it. Okay, that makes sense. Let's talk about the second category, which is probably more fun. The first one is kind of a hot take, right? You're saying strategy and prioritization are being disrupted. — Oh yeah.
— I think if you ask people, this is the trope, right? AI is going to disrupt everyone's job except for mine. That's everyone's point of view. Or the slightly better, but still kind of motivated, version is: it'll disrupt all the parts of my job that are not important; the more important parts, those it can't touch. And there's no physics about why that's true. The stuff we said is going to continue to be relevant is very emotion-oriented — say what you want about AI, they're not really quite good at the emotional stuff yet. — But at reasoning, at thinking through things in an organized fashion, pulling in all the evidence and considering a really broad scope — they're very good at that. And a lot of what we call strategic thinking and prioritization, where we have to draw a bunch of 2x2s and really figure out how to logic our way through it — that is something they are excellent at. Which is great, right? Because we're still responsible for that, and right now it's a tool that really helps us. But it's not going to be the thing that necessarily differentiates one person from another. — Yeah. I always make my strategy now by setting up an AI project, asking it to do deep research, uploading a bunch of documents, and then talking to it about the strategy: hey, does this make sense? Does that make sense? And then, why not? — Yeah, it's a great thought partner there. It'll be rigorous, it'll be honest — unless it's a weird model you're working with right now.
Most of the time, it'll do the mental process that you wish you could do — really fast, and all the time. So if everyone has access to that, then that's not going to be the thing that makes one PM a lot better than another. — Yeah, the trope where you go off to a room and think hard by yourself doesn't work anymore. — Yeah, it doesn't work. — Okay, so the next one is data analysis and synthesis. This is like getting AI to write SQL queries and find the insights, right? — Yeah. With this kind of stuff, there's a lot of boilerplate — the same logic, no matter what business you're at, applied to the same problems. Anything that has that shape means there's a ton of training data out there, and it means the actions are pretty reproducible in multiple places. So these things are invariably going to be disrupted. The speed at which they're affected is not going to be equal, right? We talk about AI having a ragged edge: it's not going to be equally good at everything simultaneously. But everything that is very logical, step-by-step, and reasoning-oriented, it's going to end up touching sooner or later. — Yeah. It's not very good at simple math yet, but for writing SQL queries, if you give it the table names and stuff, it's great. — Yeah, it's kind of weird. — And that's what it is right now, right? It's a thought partner, in that way. Yeah.
And when we talk about being disrupted, here's what that means. If you rewind the clock a little bit, you have two PMs. One of them, quote, "knows SQL," which means they can get the database to say anything at all to them. The other one is like: I have to have an analyst, otherwise there's no way I can get any kind of result. That very basic difference has evaporated. So now you have to move to a higher level — interpretation and those kinds of things — which the AI is not necessarily the best at yet, but it's clearly on that path. — That's great, man, because I don't want to learn SQL. It's not a fun thing to learn. — Yeah, I know. My joke is that I never learned regular expressions, because I just never got around to it. It was always going to be: oh, I'll learn it next year, or something like that. And now we're at a point where I never need to learn regex ever again. — Yeah. Okay, and the next one is market research. So I get these emails from Berkeley kids — okay, if you're listening to this from Berkeley, don't take it the wrong way — but there's this Berkeley consulting club, or some college consulting club, and they're like: "Hey Peter, I can do market research for you for this industry, over like three months." — Yeah. — And I'm like: dude, let me just use deep research to do it. Why do I need you? So I feel like a lot of this is gone, you know? — Yeah. The research side is interesting, right? The cases you just talked about — even when it's called deep research, it's kind of shallow, right?
A first-pass type of research, where you're going in naive and learning all the low-hanging stuff — that part has — been completely overtaken. — And that's usually what these kinds of consulting clubs are offering. The research that still exists is this: if you're an analyst, and you have a couple of things you follow over a very long period of time, and you're having dinners with the executives and really understanding their personal emotions and all that kind of stuff — that's still alive. That kind of research, great. But that's a very special, high-touch type of research. — Yeah, it's somewhere between primary and secondary research, right? You're actually getting new information. — Yeah. — And then project management — I guess that's what Linear is also trying to serve, right? What do you mean by project management? Like Jira tickets and stuff, or what are you thinking? — Yeah, project management is a really wide space, and it's not being evenly attacked. But a lot of it is keeping track of details. Think about it: hey, name the best project manager you ever worked with, and tell me some good qualities about them. Invariably someone will say: oh, they're very detail-oriented, they notice every little thing, they read every single document, they have all the details in their head, all loaded in, and they always know the current state of affairs. That sounds like a computer to me, right? What you're describing — you might even say, "Oh yeah, they were like a machine." That's the metaphor you'd use.
And computers used to be bad at this because they couldn't interpret a lot of these things. If someone wrote in a document, "This is an emergency, everyone, pull your hair out," the computer wouldn't understand the severity of that. — Whereas now it kind of can. So the little bit of intelligence we have right now — what it's really unlocked is a whole bunch of diligence, because computers are super duper diligent. They're super detail-oriented, they're always online as long as you plug them into the wall. They just weren't intelligent at all before. Now that they have a little bit of intelligence, it's unlocked all of the diligence they already come with. — But I feel like there's an obsession-with-detail part of product management: pointing out little details, little inaccuracies in the product. I guess you can use AI for that too, but you also just need to care enough about the stuff. So maybe that's a product taste thing. — Yeah, right. AI can probably do things like say, "Oh, there was a visual regression here," or whatever. But it can't really say: hey, I think this design is incongruent, or the vertical rhythm of this page doesn't make sense, or this button is ugly, or whatever it is. Those are still matters of opinion. — Okay, that's great. And — this is kind of random — but you tweeted that we've finally stopped talking about "empowered PMs." Remember that? — Yeah. Dude, I was never into that whole thing, man. I don't even know what that means. — Yeah. We're trying to ship product at the end of the day, right? We're trying to make an outcome happen, make customers satisfied, and everything.
If that's the result, then how you get there doesn't really have that big of an effect. — Yeah, it's not black and white, you know? — Okay. And then I separated summarizing research and documentation, because you made the point that this stuff is already disrupted. I guess summarizing makes sense — you just get it to summarize. — Yeah, I think summarizing and documentation are almost the same thing. You have some kind of input source. With research, it's: here are all the sources I found on the internet, or in some archive. With documentation, it's: you have a codebase, you have a whole bunch of opinions that people have, and decisions that were made, and things like that. You're just taking all of that and collating it into one spot — right, into one research report — into one document. — And then the next part, the hard part, is: something changed, and now you have to update it — right? Someone changed their mind about the direction we're going, so you've got to update the documentation. Or there's a new development, so the research is out of date, and you have to amend it with the new findings. And that part is, once again, not that hard to do. You need a little bit of intelligence to do it, but you need a whole lot of diligence: I'm just going to stare at all of the sources, and the moment one of them changes, I'm going to act really fast. And when we talked about why Linear doesn't hire a bunch of PMs — a lot of times people hire PMs because they want that dynamic. They want a bunch of eyeballs on all this stuff you have to think about.
And the moment something changes, they're like: "Oh, someone's got to be really diligent and update all of the documentation immediately." And we're like: look, I'm not going to hire someone because of that, because I think computers are going to do that for you 100% of the time, way better than any human could. And I'm okay, in the interim, with not having that benefit, because I want to force us to get our computers to do it at the earliest possible moment. — That makes sense. Yeah, even right now, when I get a bunch of stakeholder feedback, I just talk to AI: hey, here's my document — can you update it based on the stakeholder feedback, and maybe bold your edits so I can see what you changed and do a sanity check? — Yeah. — I'm not going to do it all by myself. — Yeah, exactly. — Okay. And then, new skills to build. I think these are pretty self-explanatory. I guess I put
The new skills PMs must build: context and agent management
context engineering here because it's more than just prompt engineering. You've got to think about what to load into the AI's context and how to manage it well, right, so it doesn't get overwhelmed. And then evals — I'm sure there are whole episodes on that — making sure the AI's responses overall make sense and are accurate. But AI workflow design and AI agent management — this is directly what Linear is also working on, right? — Yeah. Workflow design and context engineering are very closely related. Context engineering is understanding: okay, what information would this model need to perform better at the job I'm hiring it for? And workflow design is: okay, now how do I make sure it's always able to receive the right type of context at the right time? Those two things really build on each other. And part of workflow design includes: how do I want the agent to behave? If it doesn't have the right context, how does it know it needs to go gather more? And what tools do I give it to actually help itself gather the context it needs? So context engineering is really about what kind of context the model needs, and workflow design is: okay, what's the dynamic process by which it keeps getting the right context? — Okay. So, what tools can it use, or what steps should it take, to get the right context? — Yeah, exactly. And for some of that, we still have to write specific prompts: hey, if you're in this situation, then look here first, and all that kind of stuff.
So there's a bit of instruction the models still need, but it's getting better and better as time progresses. — Okay. And agent management — we all have this fantasy, or maybe it's becoming reality, of having five AI agents that just work for us, and we just manage them. Is that reality, or is that still kind of far off? — You know, I think these things are so unevenly distributed. It's reality in some corners, sometimes — right? There are cases where you can literally say a couple of words, dispatch an agent, it'll handle it for you, and you're done. But sometimes the number of words you need to say is a lot greater than you expect, and it's very easy to underspecify. That's one aspect of agent management: knowing how detailed you have to be. Do you need to really micromanage these agents, or are you perfectly fine just giving one a general direction, knowing it's going to do something good enough for you to accept? And once you're in that mode, it also becomes: okay, how many of these things do you juggle at the same time, and how asynchronous do you have to be, given the current state of the technology? But I think a lot of agent management is really about how under- or over-specified you actually need to make your instructions. — Yeah, that's a good point. So my take is that the PRD isn't dead. You're just getting AI to write it, and it's kind of merging a little bit with the prompt. — Yeah.
— Like, when I want Claude Code to build something, I ask it to make a pretty detailed PRD covering the tech stack and everything, right? The milestones. I'm just not writing it by myself, you know? — Yeah, I think that's really true. One of the things I try to tell people is: everything you write — the main audience is not really people anymore, because even the people who are going to read your stuff are reading it using an AI. So if I write an issue in Linear today, the first thing that reads that issue is our AI. It uses the issue to find all sorts of related issues and figure out who should work on it, things like that. So I know, in the back of my head, that when I write this issue, I need to give the AI enough information to do its job — which also happens to be really helpful for people. When someone else reads it, they go: okay, cool, it has all the right information — repro steps or whatever. Before, it was kind of a squishy thing: if I don't put in enough detail, someone's going to yell at me and be upset. But now, if I don't put in enough detail, all the automations won't work. So the motivation is actually quite different. — Yeah, that makes sense. That's why I keep all my documents to a half-pager of key points, plus maybe a bunch of appendices, because if my main doc is more than half a page, someone will just use AI to summarize it. — Yeah, exactly. — Okay, great. So this is pretty tactical. Hopefully people can use this table to figure out what skills
Craft vs speed: Linear's 10% rule for shipping fast
they need to work on. All right, man. So let's talk about Linear a little more. You know, this podcast is actually called Behind the Craft; this is why I love talking to you and working with Linear. But I think people have a misconception about craft. People think craft is what Apple does, where you keep working on something for a year and then you have a big bang: oh, this is super polished, super beautiful stuff. What is your point of view on craft versus speed? — Yeah, I think it's that software is different. Apple's a hardware company, right? All of their working processes originate in a hardware design and development space, where retooling a hardware production line is enormously expensive. So it just makes a lot of sense: if you want high quality in that universe, you have to make sure things are super polished before you go to mass production. Software you can deploy and undeploy and continuously integrate at basically no cost. So even though the end goals are the same, the ideas about how to get to that high-craft, high-quality final state are quite different. For us, it's not about whittling on something until it's perfect. It's about getting as many feedback loops as possible, because every iteration you go through makes the thing better, and it tells you how your product fits into the world way more than you could learn by just thinking very hard about it. So for us, the way to get to a really high-quality outcome is to get to a working version as absolutely fast as possible.
So the rule of thumb is: by the time about 10% of your time budget has elapsed, you should have something that gets the job done, something you can start applying taste to. You can start feeling: does this feel good? What would make it feel good in my workflow? Is it close? Are there moments where it feels great but it's uneven, or does it consistently feel bad? There are all these judgment calls you want to be able to make, but you can't make them until you're able to use the product in a real situation. — Yeah. And then how do you get feedback? You have beta groups, both internally and with real customers, right? — Yeah. We have what we call a Russian-doll release process, with nested audiences that get bigger and bigger as time progresses. The first audience is just the developers working on it; we use our own product, so they're able to play with it. The next audience is everyone who works at Linear, the internal audience. From there it escalates to a select beta, then a public beta, then pre-release, and then production. So there's a whole series of shells this thing goes through as it reaches a wider audience. We had a big release yesterday, and I asked, "Okay, anything we want to take care of before we go to GA?" And the engineers were like, "It's been in all these different beta levels. I have absolutely no fear of anything falling over, because it's just been battle-tested already." — Yeah, this is actually a no-brainer, man.
I don't know why more companies don't just roll out to more people progressively, right? And make the product better. Why would you risk it with a big launch? It's weird. — Yeah, I don't know either. I think it's pretty obvious. Everywhere I've gone, I've installed this kind of process, and I don't think anyone's really felt bad about it. So much of this stuff is path-dependent: someone just has to decide it's worth doing. — But let me push a little bit on that initial 10% version. If I want to ship something to Karri, say, he has very high standards, right? How do I make it actually not look like crap? What do you focus on in the 10%? Is it the core experience? — Yeah, the core experience. You might not cover all the edge cases; you kind of just want to make sure the golden path does the thing you expect it to do. — Got it. — And if someone is using it enough to run into edges, that's a good thing, right? Oh, you ran into an edge case? Great. That means you were using it way past the golden path. — Yeah, got it. And that is especially true for AI products, because there are so many edge cases. — Yeah. — Okay, let's talk about AI products real quick. I feel like with AI products, you've got to write evals, right? To make sure the product scores high on certain attributes. And that's also a balance between speed and craft, because if you're just starting to write a prompt, you probably shouldn't write 200 test cases; your prompt will change dramatically as you learn. So you have to balance that too. I'm not sure if you've seen that.
Yeah, for us a lot of our eval development starts with no evals at all. Then as we start deploying to bigger and bigger groups, people give us feedback: hey, I said this and expected this outcome, but it did this other weird thing instead. Then we can analyze what went wrong, what we might change about the prompt to cover this kind of case, and whether there's an eval we can build against the feedback we just got. So it's a very active, iterative development process. — Okay, yeah. That makes a lot more sense than trying to write 200 golden evals at the very beginning. — Okay. So when you're building in these concentric circles and people say, hey, this could be a lot better with this feature or that feature, how do you keep the product simple, man? Especially in B2B, people want all kinds of things. How do you make those trade-offs? — Yeah, I don't think there's one way to make these trade-offs. Where a lot of PMs fall into a trap is thinking there's some philosophy that just makes every decision for you, so you don't have to think anymore because you have a policy. The policy could be "the customer is always right, so we'll say yes to every request," or "no, we only do things one way, and we'll remain really rigid and not do anything people ask us for." I think a lot of it is about picking and choosing where you're going to be really opinionated and specific, right?
Where you're going to be more purposeful, with really sane defaults, while understanding there are some edges you have to cover. And then where you're going to be fairly unopinionated and make it a very configuration-first experience. In this space in particular, the trap is that everyone ends up fully configurable, configuration-first for every decision. That's not what you want, because then you're effectively handing the product design choices to administrators, and everyone gets a really weird experience, because their admin may not be the best product designer in the world. It becomes a very strange situation for users. So for us, of all the decisions a company makes, the highest-order decision is what industry to be in. Am I a steel mill? Am I a software company? Am I a restaurant? Look, we have no opinion about that. Whatever business you want to play in, that's 100,000% your decision. And the high-level strategic decisions, like whether to run OKRs or other strategic initiatives, we're not going to have a strong opinion about those either. But as it gets more and more specific, down to the individual actions software developers take, or are forced to take, in their minute-to-minute workflow, that's where we put a lot more guidance and a lot more defaults into the product. It gets more focused as you get into that layer. That's the general way we're able to expand to larger and larger companies,
retain the flexibility they want, and also not make the experience totally weird for ICs. Because ICs want a predictable experience that helps them get their job done day-to-day. Their job is to make designs, write code, talk to customers, whatever; their job is not to push buttons inside of Linear. So that's how we try to make those kinds of trade-offs. — So the IC is kind of the ideal customer profile? Because sometimes their needs are different from a manager's or an admin's, right? — Yeah, for us the ideal customer is a company, an organization. In terms of users, ICs are certainly our most ubiquitous user. If you just count the number of individual seats inside of Linear, the vast majority are going to be ICs. So they're who we have to make sure the experience is always stable, consistent, and not annoying for; that's the number one priority, literally because they're our most common user. And then as you move up the layers of management, we have to give those users flexibility without negatively impacting the IC experience. — Got it. That's a really good way to look at things; that makes a lot of sense. Another tip I've found is that when someone's asking for a feature, you can just ask, hey, what problem are you trying to solve? Maybe there's a better way to solve the problem than what they're asking for. — Yeah, that happens all the time. One thing for anyone out there who's building for a PM or project-manager kind of audience.
One of the things you might assume going into this sector is that PMs are going to come at you with really strong opinions: "Oh yeah, there's my process and my way of doing things." And that's not actually true. They have to project those strong opinions because they have to get everyone to go along for the ride, so that when they explain things to people, it sounds like they've really thought it through and everything's perfect. The reality is they're kind of figuring it out as they go along. So if you provide them with a framework they can lean on to get their job done, they'll be very happy about it. — Yeah, PMs always have strong opinions. But you've got to challenge them. — Yeah. Okay, so let's wrap up by having you demo some of the new AI agents and AI features in Linear, or how you use it with other AI products. Maybe you can demo some of your favorites. — Yeah, I have a few ready for you here. Let me share my screen; I'll start inside of Claude. What I'm going to show you is real tools that I use on basically a daily basis. These are things that happen all the time, and
Live demo: Using MCP to analyze customer feedback instantly
that work at production-grade quality today. This is not pie in the sky, like "oh, it would be cool if"; this is real stuff. So the first thing I'll show you is our MCP server. You can use most of the AI tools available today, like Cursor or, in this case, Claude, to access Linear through MCP. That means you can do a lot of your normal analysis or coding workflows in these tools without a whole bunch of weird setup. You just do a bit of OAuth to connect the MCP server, and now I'm going to do a quick analysis. We have this project we're developing right now called release management, and there's a bunch of customer requests and feature requests attached to it. I just want an analysis: what have people been saying? Get me started. If I'm a new PM on this project, I want the initial batch of what information we've collected already. So I'm just going to ask Claude to do this. Claude starts using the MCP tools to get all the project details, the issues inside it, and the customer requests, and starts thinking about it. This is where you get up and go have a cup of coffee. It's going to take a few minutes, which is fine, because if I was going to manually read all this stuff, it would take hours easily. — Provided that I could even find the stuff, right? — So it's doing all the automated lookup. So here we go. It found the project. It looked at some issues. It's looking at some others: hey, I found the main body of issues.
I'm going to look for other issues that maybe have different parameters. So it's looking for specific details on these other issues it found. And now it writes: hey, I found 28 customer requests against this, and here's my analysis of what people have been asking for around this subject. All this information is inside of Linear, so if I ever want to link back to it, I can ask for references. This is the first starting point, the basic analysis Claude is making. And this is the start of a conversation. At this point you go back and forth. I might say: what are the top three thematic areas within this? What are the three big rocks I'm looking for? So: release visibility. Yes, I think so; I've been thinking about this for a while, and I agree that a big part of release management is just visibility for people outside the development team. Then integrating directly with version-control platforms, so that when the binary goes out, Linear knows about it. And workflow stuff, like building the changelog. This very much jibes with all my user interviews.
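For anyone wanting to picture what Claude is doing during that coffee break, the MCP flow is essentially a chain of tool calls followed by a summary. Here's a minimal sketch in plain JavaScript; the tool names, data, and field shapes below are stubs invented for illustration, not Linear's actual MCP schema:

```javascript
// Hypothetical stand-ins for MCP tool calls (getProject, listIssues,
// listRequests); real names and shapes come from the server's schema.
const db = {
  project: { id: "proj_1", name: "Release management" },
  issues: [
    { id: "i1", title: "Release visibility dashboard" },
    { id: "i2", title: "Link releases to version control" },
  ],
  requests: [
    { issueId: "i1", text: "We need to see what shipped last week" },
    { issueId: "i1", text: "Stakeholders can't tell what's released" },
    { issueId: "i2", text: "Tag releases from GitHub automatically" },
  ],
};

const getProject = async (name) => db.project; // tool call 1
const listIssues = async (projectId) => db.issues; // tool call 2
const listRequests = async (issueId) => // tool call 3, once per issue
  db.requests.filter((r) => r.issueId === issueId);

// The gather-then-summarize loop an MCP client runs before the model
// writes its analysis: project -> issues -> requests per issue.
async function analyzeProject(name) {
  const project = await getProject(name);
  const issues = await listIssues(project.id);
  const perIssue = [];
  for (const issue of issues) {
    const requests = await listRequests(issue.id);
    perIssue.push({ title: issue.title, requestCount: requests.length });
  }
  // Sort so the most-requested themes surface first.
  perIssue.sort((a, b) => b.requestCount - a.requestCount);
  return { project: project.name, totalRequests: db.requests.length, perIssue };
}
```

With the stub data, `analyzeProject("Release management")` resolves to a per-issue request tally with "Release visibility dashboard" on top, which is roughly the shape of the "28 customer requests" summary in the demo.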
But also, for somebody who just wants to know the state of our research, what we've been learning from customers: anybody who has Claude installed and works at Linear can just do this. — Yeah, this is awesome. Without the MCP server, I'd have to do a bunch of copying and pasting into Claude, which is a pain. — Yeah. And when we talk about workflows, that's what this is. You could have done all this by dumping everything to a CSV, throwing it in here, and copy-pasting, but instead I issued two commands. That's where you want to be. — Okay. And can the MCP server actually make changes, or is it mostly reading information? — Right now it's reading. We're developing the write endpoints as well, but you have to be a little careful with those, because you don't want agents going completely wild and overwriting a bunch of your data. — Yeah, got it. Okay, cool. And what's the next use case you want to show me? — The next use case is something we actually just released yesterday, as of this taping. Let me find it for you here. This is something we've been using internally for months; we just did the full public launch yesterday. It's called product intelligence. I'll make it a little bigger. To give you basic framing: in Linear there's a concept called triage. Whenever someone reports a new bug or requests a new feature, it goes into this triage state.
— And originally we provided a very structured way to assign somebody to monitor triage, to make sure the right questions get asked and things get directed to the right people. That's good, you have a structured process, but the problem is you're relying on that person having enough institutional knowledge to know what to do with all of these incoming issues. If they didn't know, they'd have to ask somebody; it would be a whole research process for them. They could use the tools to search for stuff. At some point we asked: how much of this can we actually automate? Can we completely transform it from a tedious research process into a little mini-game where you go, oh yeah, okay, great, yep, yep, and just approve suggestions? So in this example, someone said, "Hey, I want to allow applying templates to existing projects," and immediately product intelligence found a few things: I think I found the right team, the right assignee, the right project, and here's a tag that might make sense. If you hover over a suggestion, like "why was Machek suggested for this," it says: well, he's the lead for the project templates project, and he's also been the assignee on several related issues, so I think he's the right person. So even if I don't assign Machek, I know exactly who to ask if I have a question. Is this sensible, whatever it is? There are 100 people in the company; who do I ask? Well, clearly I'm asking Machek this particular question.
It also found a duplicate. It said: hey, I think this is basically the same. It's not the same title, but I think it's asking for the same thing as this other issue that already exists. And here's the explanation: it looks like it directly addresses the same thing, and the one it found has a bunch of customer requests on it, so that's probably the main one. If you're going to do anything, you should mark this one as a duplicate of that one. So it has a little bit of opinion about what action I should take. I think that's correct, so I hit this check mark: mark as duplicate. Great, I've taken care of it. Now all of the information attached to this new customer request gets merged into that main issue, and we can move on with our lives. — Oh, that's great. So it both adds relevant metadata and finds other tickets that are similar. — Yeah. Because otherwise I'd have to do this manually, and trying to prune the backlog is a pain. — Yeah, I know. It's one of those things: oh, one of your jobs is to prune the backlog. Why did you hire a whole bunch of PMs? Well, we've got to have somebody prune the backlog. Okay, what if we made that a hundred times easier? — Yeah, this is awesome. And I feel like maybe there's also a way to have a public-facing version, where people can see the tickets being worked on, like how some companies share their public roadmap with their users. That could be interesting here too. — Yeah.
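Linear hasn't published how product intelligence matches duplicates, but the general technique, scoring a new report against existing issues by text similarity and suggesting the best match above a threshold, can be sketched in a few lines. This toy version uses word-overlap cosine similarity with naive stemming; a production system would likely use embeddings:

```javascript
// Turn a title into a bag-of-words frequency map.
function bagOfWords(text) {
  const counts = new Map();
  for (let w of text.toLowerCase().match(/[a-z]+/g) ?? []) {
    // Naive stemming so "applying"/"apply" and "templates"/"template" match.
    w = w.replace(/(ing|s)$/, "");
    counts.set(w, (counts.get(w) ?? 0) + 1);
  }
  return counts;
}

// Cosine similarity between two frequency maps (0 = disjoint, 1 = identical).
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (const [w, c] of a) { na += c * c; if (b.has(w)) dot += c * b.get(w); }
  for (const [, c] of b) nb += c * c;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// Suggest the most similar existing issue, or null if nothing clears the bar.
function suggestDuplicate(newTitle, existingIssues, threshold = 0.5) {
  const v = bagOfWords(newTitle);
  let best = null, bestScore = 0;
  for (const issue of existingIssues) {
    const score = cosine(v, bagOfWords(issue.title));
    if (score > bestScore) { best = issue; bestScore = score; }
  }
  return bestScore >= threshold ? { issue: best, score: bestScore } : null;
}
```

With this sketch, a new report titled "Apply templates to existing projects" would surface an existing issue titled "Allow applying a template to an existing project" even though the titles differ, which is roughly the behavior in the demo.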
— The nice part is, if you go to the issue I just merged into, you can see all of these customer requests attached to it. All of these are the results of other issues being duplicated into this one, because the system knows: as soon as another request comes in, it goes, I know what this is, it's a duplicate of this other big issue over here. You mark it, and it accumulates the data. So when a PM comes in and sees this big issue with a bunch of customer requests on it, they fire up Claude and go: help me analyze the customer requests and figure out what's being asked for here, and all the little nuances and details. The whole system works together. — So is there a way to summarize all of the things here? — Not in the app right now, but this is one of the uses for the MCP server: I could copy the issue ID and ask Claude to summarize it for me. — Okay, that makes sense. — Cool. — Yeah, the less reading the PM has to do, the better. — Yeah, that's the idea. — So another case: this is another one of these things in triage. And this one is the same sort of thing, a bunch of suggestions; it suggested this team. What's interesting here is that this product team owns the list-sorting behavior. We have very few teams, but a lot of times you go: I don't even know what team is meant to think about this. Forget which person; I don't even know which team to put this on. So this is one of those cases where at least I can maybe just hand it over to them.
Put it in their triage queue, and they can figure out what to do with it next. You don't have to fully take action on it; it's enough to move it forward. — Yeah. Without this feature, I've seen a lot of people just asking on Slack, hey, which team is responsible for this ticket? And it takes forever to figure out. — Yeah, it takes forever. You do that, no one answers, and a week later you're like, oh wait, no one answered my question and it's still sitting here. You don't have to deal with any of that anymore. — Okay. And last but not least, you have something else to show us. — Yeah, it's actually in two parts. The first bit I want to show you is actually down here. This is just another feature request from a customer. What they asked for is: hey, I want to sync my cycles with calendars. And in the discussion thread, we discovered: wait, do we already have this? This colleague of mine on the support team is like, yeah, I think we do; here's where we talk about it. Maybe he found some documentation. But I really want to make sure. So what I'm going to do is ask one of the agents living in our workspace. This is not an agent developed by us; it's a third-party coding agent that somebody else developed using our agent platform. I'm going to just ask this agent; its name happens to be Charlie. Do we sync cycles to calendars? I'm just going to ask. — Okay. — Just like I would a person, right?
So it's like what you said about going to Slack and @-mentioning somebody: hey, what do we do here? Charlie sees this and says, "Hey, I'm going to review this stuff." Then Charlie looks in the codebase. We've all been through this: you ask an engineer, "Is this how the system actually behaves?" and they dig through the code and go, "Yeah, okay, looks like it behaves as expected." We've all done this. — You don't have to do that anymore. You don't have to bother an engineer, take time out of their day, and have them drop what they're doing to go do a little research project just to tell you that the codebase works the way you think it does. You can just ask an agent. It's going to take a few minutes, but an engineer would take a couple of hours. So again, go have a cup of coffee, and when you come back there will be a comment here that tells you whether or not this is how the system works. So we're going to chill here for a second and wait for that to happen. — That's great. And this is pretty low-risk too, right? Because it's just reading the code; it's not making any major changes. — Yeah, exactly. And while we're waiting, I'll show you a different scenario. Okay, here's another question I have prepared, about tweaks. In this case I was like, hey, what's the system prompt we're using to do this? The system is behaving kind of weird; what's the actual system prompt? So I'm going to ask that, and we'll see what happens. Okay, so we waited for a while.
We had a coffee while Charlie did his work, and we came back, and Charlie says: yeah, cycles can sync to calendars, per team, via ICS. So this confirms what our support team member suspected based on the discussion he found. I just wanted to double-check; I'd never heard of this feature before, and I wanted to make sure it actually exists. And Charlie's like: yeah, I found it in the codebase, it's a hot code path, we're good. Normally I might go to an engineer to confirm this, and they'd have to drop what they're doing, think about it, I'd wait for them, and I'd owe them a favor. That's usually how these things go. — I don't have to do that anymore. For a question like this, as long as the context exists out there and you can reason through it, the agents can find an answer for you. — This is awesome. Yeah, this is great. — Cool. All right, so the last thing: Charlie and the other agents we have on the platform are all coding agents, so they can actually write code for you. I'll demonstrate that. But I'm going to switch workspaces to my personal workspace here, because I don't want to write any code in the production codebase by accident while I'm doing this. — And all right, to set the stage: I have a personal website, and it's based on a static site generator. I'll show you what it looks like. This is my personal site; here it is. — This is the development version of it. You can see it's in dark mode, because my operating system is in dark mode; it follows your OS.
And what I want to do, let's say, is add a little control so users can switch it to the light-mode version if they want. No problem. But I don't want to write that feature: I'd have to design a button, figure out the CSS and local storage and everything. So I'm just going to write the issue like I normally would: hey, right now it's controlled by a media query, but I want to give people a way to toggle this on and off and have it saved in local storage, so instead of the media query, the user has control. And I also have some opinions about styling: use a simple text link as the control, no buttons or emojis, no borders, just plain text. I'm very specific about the look I want, because I don't have any icons on my website. So if I wanted to do this, I could write the code, or I could hire an engineer, or I could just assign it to
Delegating entire backlogs to AI agents with one command
one of the coding agents that lives on the platform. Linear has an open platform: agents can be deployed into Linear, and any coding agent out there that's built support for Linear can do so. We have one installed in this workspace called Codegen; it's a really great company we work with. What I can do is just assign this issue to that agent. When I do, you'll see something interesting happen: I have responsibility for this ticket, and Codegen is doing the work. This is something we directly express inside of Linear, because when you're deploying agents into your production environment, you really want to make sure some human being has responsibility for the outcome. — Right. This is really important. You can't just say, "Oh, the robot did it," when it ships a bug or drops a table or whatever. — Or if it did something well: hey, this is a great feature, who built it? "Well, the robot built it." No, someone was working with the robot in a high-bandwidth way to build the feature. So I delegated it to Codegen, and Codegen's working. You can see the connected thread here. If I want to understand the chain of thought and what's going on, I can open this panel and see what Codegen is doing: the normal coding-agent thing, thinking about what to do first, making a plan, starting with the CSS, and going from there. Like any other coding agent, it's going to take a little while.
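For reference, the feature as specified in the issue is genuinely small, which is why it's a good agent delegation candidate. A hand-written sketch of roughly what the agent has to produce; the element id, class name, and storage key here are invented for illustration:

```javascript
// Decide the effective theme: an explicit user choice saved in
// localStorage wins; otherwise fall back to the OS media-query preference.
function resolveTheme(stored, prefersDark) {
  if (stored === "light" || stored === "dark") return stored;
  return prefersDark ? "dark" : "light";
}

const nextTheme = (current) => (current === "dark" ? "light" : "dark");

// Browser wiring, guarded so the pure logic above stays testable in Node.
if (typeof document !== "undefined") {
  const apply = (theme) =>
    document.documentElement.classList.toggle("dark", theme === "dark");

  const prefersDark = window.matchMedia("(prefers-color-scheme: dark)").matches;
  let theme = resolveTheme(localStorage.getItem("theme"), prefersDark);
  apply(theme);

  // A plain text link, no button, no icon, per the issue's styling notes.
  const link = document.getElementById("theme-toggle");
  link?.addEventListener("click", (e) => {
    e.preventDefault();
    theme = nextTheme(theme);
    localStorage.setItem("theme", theme); // persist the explicit choice
    apply(theme);
  });
}
```

The key design point the issue asks for is that the media query is only the default: once the user clicks the link, the stored value overrides the OS preference on every subsequent visit.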
So again, go have a coffee, and hopefully you'll come back to a pull request. — Okay. — The interesting part is that you can choose to babysit it, like I'm doing right now, or, if you're really efficient, you can work through a backlog. Every engineer has a whole bunch of tickets assigned to them, and some of them are low-hanging and probably achievable this way. You can literally hit Command-A to select all and delegate everything to Codegen or to any other agent you're using. This is a new pattern: you can spawn off a hundred of these simultaneously if that's what you want to do, and they're all going to eventually come back. Codegen has already replied here: "Hey, here's the PR," along with a small summary of what it did. If I want to, I can go to that PR, and I'll do that right now. Here it is. I'm going to copy the branch, check it out, and see what it did. This is live, so let's see what happened. Okay, I've pulled it down. Great, it added this little link up here, and when I click it, it switches to light mode. So it did exactly what I asked: "Give me a very small link in the corner, make it text only, don't make it a button with an icon, keep it really simple." And it did those things. — Yeah, that's great. And now you can accept the PR. — Yeah, I can accept the PR, merge it, or go tweak it manually. I can even go back into the issue, right?
And ask Codegen to make modifications if I want to. It's a very natural kind of interaction. — Okay, this is awesome. Thanks for showing all three use cases. It's great that it can both read the code and make changes to it, and you can even get multiple agents to submit multiple PRs at the same time. You should probably double-check their work even more closely, but you can do it. — Yeah. — So let's wrap up with one quick question, Nan. Linear started as a simple ticket-tracking tool. What's the future here? Is it to build a kind of operating system where agents and humans can talk to each other and work on a product together? — Yeah, I think that's what
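A rough sketch of the "select all, delegate to an agent" pattern from the demo, assuming Linear's public GraphQL API (`issueUpdate` with an `assigneeId`) and that an installed agent appears as an assignable user. The endpoint is real; the issue IDs and agent user ID below are hypothetical placeholders:

```python
import json

# Linear's public GraphQL endpoint (requests would need an API key header).
LINEAR_API = "https://api.linear.app/graphql"

def build_assign_payload(issue_id: str, agent_user_id: str) -> dict:
    """Build one GraphQL request body that reassigns an issue to an agent.

    Agents installed in a Linear workspace show up as assignable users,
    so bulk delegation is just one issueUpdate per selected issue.
    """
    mutation = """
    mutation AssignIssue($issueId: String!, $assigneeId: String!) {
      issueUpdate(id: $issueId, input: { assigneeId: $assigneeId }) {
        success
      }
    }"""
    return {
        "query": mutation,
        "variables": {"issueId": issue_id, "assigneeId": agent_user_id},
    }

def delegate_backlog(issue_ids: list[str], agent_user_id: str) -> list[dict]:
    # The Command-A "select all" step becomes a list of issue IDs here.
    return [build_assign_payload(i, agent_user_id) for i in issue_ids]

if __name__ == "__main__":
    payloads = delegate_backlog(["ISS-101", "ISS-102"], "agent-codegen-id")
    print(json.dumps(payloads[0]["variables"]))
```

The point of the pattern is that delegation is just assignment: the same primitive used for humans routes work to agents, which is why a hundred tickets can be fanned out at once.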
Linear's future as an operating system for human-AI collaboration
almost the very near future looks like. If you zoom out enough, Linear is ultimately a system for coordinating work across a lot of independently acting people, and agents are shaped a lot like that. They're different in the ways we talked about: you can't hold them accountable for outcomes, so you have to keep people in the mix. But they also act in a lot of the same ways: they make decisions, they can do work out of band, they can report back on what they did. So in a lot of ways they're a new kind of user of the Linear system, again with some special caveats. — All right. Hopefully I'll soon start a new Linear project where I onboard a bunch of agents to work for me and design for me, and life will be good. Cool. Well, where can people find you online and learn more? — I think I'm most visible on X (Twitter); my handle is @thenanyu. DM me, and take a look at the stuff I'm sharing; that's the best way to contact me. — Thanks so much, Nan. It's always great to connect. — Yeah, of course. Thanks, Peter.