Today, I want to share a new episode with Ben Erez.
In one of the toughest job markets out there, Ben is a former Meta PM who has helped hundreds of people land great jobs. One of the most unique things Ben has built is an AI co-pilot that his students say feels like having a "calibrated interviewer that's available 24/7." I got him to show me exactly how to build a simple AI co-pilot using his proven templates and prompts.
Ben and I talked about:
(00:00) The current state of the PM job market
(02:07) The exact structure of the product sense interview
(08:58) How to build your AI interview co-pilot step-by-step
(12:36) Live demo: AI co-pilot walking through a Meta interview question
(13:33) Why Ben designed the AI to be the candidate, not the interviewer
(25:23) How to prioritize the right user segments and problems to solve
(32:13) The biggest mistakes candidates make when challenged by interviewers
This episode covers how to build a simple AI co-pilot from scratch, but you should also check out Ben’s advanced interview AI co-pilot: https://www.benerez.com/copilot
Get the takeaways: https://creatoreconomy.so/p/how-to-build-an-ai-co-pilot-to-ace-your-pm-interviews-ben-erez
Where to find Ben:
LinkedIn: https://www.linkedin.com/in/benerez/
Website: www.benerez.com
📌 Subscribe to this channel – more interviews coming soon!
Well, spraying and praying doesn't work in this economy. I actually recently had someone land an L6 role at Meta, and they did 50 reps with the co-pilot to get ready for their interviews, and they said it was the most valuable part of their prep. You could ask it, "Hey, you're a PM at Meta. How would you design a product for volunteering?" So it maps ecosystem players, picks an ecosystem player, and then it checks in with me before it starts defining segments. Then it picks the first problem: time-matching friction, highest frequency and severity. I don't think it's ever been more competitive to try to land a PM job, especially a really good one. And the reason for that is companies have been extremely picky about who they want to hire. So the best things that candidates can be doing right now to stick out in this market is...
All right, welcome everyone. My guest today is Ben, a former Meta PM who has helped hundreds of people land their dream PM job in tech. Really excited to talk to Ben about the state of the PM job market, and to get him to show us exactly how to build an AI co-pilot to help you prepare for PM interviews, along with some real examples. So, welcome Ben. — Yeah, thanks for having me, Peter. Really excited. — So let's have a reality check first: how is the PM job market right now? Is it good or not so good? — I think there's this dramatic rise in the number of PM openings, and at the same time I don't think it's ever been more competitive to try to land a PM job, especially a really good one. And the reason for that is companies have been extremely picky about who they want to hire, and they're looking for people that have basically done that job before. So it's not uncommon for people to get to the finish line and be up against two or three other people, where one person's a 99-out-of-100 fit and another is a 98-out-of-100 fit.
You know, everyone is really close, and the companies have to make these margin calls. So the best things that candidates can be doing right now to stick out in this market are to get warm referrals, to position themselves as the best person for that job from the very first point of contact, and to tell that story the whole way through and be really consistent about it. — Okay, awesome. So now let's talk about the interview process, and maybe
The exact structure of the product sense interview
let's focus in on one type of interview, probably the most common type, which is the product thinking / product sense interview. You actually wrote a really great post for Lenny's newsletter about this. Maybe we can cover the structure of the interview at a high level first, and you can share what that looks like. — Yeah, totally. So at a high level, a product sense interview starts with an open-ended question: tell me about your favorite product and how you would improve it, or, if it's a Meta interview, it could be: you're a PM at Meta, build a product for volunteering or gardening, right? It's something that starts very open-ended about how you might improve or enhance something in some capacity. And the mistake a lot of people make is they jump right in. I go into a lot of detail on the pitfalls of approaching the interview in a way that is not conducive to giving the interviewers the signals that they need. What the interviewers really want to see, especially at the Meta scale, is a lot of structure and really succinct communication in how candidates present themselves in these conversations. They want to know that the candidate shows up with a very clear game plan for how they intend to get all the way from a super vague prompt to a very concrete solution that can get shipped, within a 35-minute time box. So the game plan we're looking at here is the framework that I came up with, including some recommendations for how much time to spend on every section. It really starts with laying the foundation for the motivation of the product that you're talking about, or this kind of enhancement. So with volunteering, you don't really know what the product's going to be yet, but you know it's going to be some Meta product for volunteering.
So it has to do something with Meta's mission as a company, Meta's motivations, and its user base. So just getting into the underlying reasons why volunteering might be interesting to Meta. And then you get into segmentation. Here's where we walk through how we're going to pick a target audience to focus on for the exercise, because you can't build for everyone at once, nor should you. They just want to hear how you think about narrowing in on a specific audience that is both meaningful in size and underserved in some key way relative to what exists in the market at the moment. So you pick a segment, and then you want to identify problems. I like to set a persona for the segment just to flesh them out really quickly, with a name, an age, where they live, what they do, and then identify a journey for that persona. Come up with some problems; I like to come up with a problem at the beginning, middle, and end of that journey, so three mutually exclusive, distinct problems. Then pick a problem based on which one comes up most frequently and has the highest severity score, so it hurts the most when it comes up. And then finish and really land the plane by brainstorming different solutions to that specific problem you picked, making sure each of those solutions really does solve it in a very different way, talking through why you want to pick one of those solutions over the other ones, and finally describing what the first version of that experience might look like. — Yeah. So I think a couple of callouts about this, right? I did a lot of this both as an interviewer and as an interviewee. Number one: it's not really a casual conversation. It's kind of like a game, and you have to be really structured, and there's very limited time to actually go through all this, right?
So you kind of have to approach it as a game. — It is a game. It's a game with clear rules. And, I mean, at this point I'm not embarrassed to say it, but earlier in my career, when I was just getting started as a PM, I did approach product sense interviews like they were real conversations. I remember a very vivid and painful interview where they asked me about my favorite product and how I would improve it. Back then it was Zipcar. I still love Zipcar. But I just dove right into things that frustrate me about the Zipcar experience and how I might solve them, and in hindsight that was completely the wrong approach. I didn't know I was playing a game that has very clear rules and expectations to it. And I don't think that the interviewing teams, despite their best efforts, manage to get through to the candidate and let them know that this is not a regular conversation. I was always surprised that people would show up and still behave like it was just a casual conversation. Did no one prep you for this, you know? — Yeah. And the other thing I noticed in your chart here is that the solution is less than 10 minutes of the interview, right? Most of the time you're actually talking about the users and the problem. — Yeah. — Correct. — Because what they're evaluating is not what you actually come up with at the end. I mean, that's part of the signal, but there's no right answer to any of this. What they want is to understand your thinking, how you do this. Because if you think about a PM, they're going to come into your company, you're going to throw them into some space, and the thing you're betting on is their ability to reason through the strategy, reason through the problems, and make really good prioritization decisions.
So I think this interview format evolved out of a need to create some proxy for how PMs could handle that kind of ambiguity on the job. — Yeah. And it seems like the core thinking they're trying to evaluate is: can you brainstorm, you know, three things, and then can you logically talk about how you prioritize those three things? That applies to the users, the problems, and also the solutions. — Yeah. If you think about solutions, it's like there's theoretically an infinite set of Lego blocks, but it's not really infinite, because we said we're going to build this within the Meta ecosystem. So we've got Instagram, we've got WhatsApp, we've got Facebook, we've got Quest. Given all these Lego blocks and capabilities: AI is really interesting these days, we have a strength in community, we've got four billion users. There are all these things you could leverage. What would you build, right? And so they want to hear your ability to come up with three completely different ways of trying to solve a problem. I think three is the magic number. Four is too many; it's going to eat up too much time. Two doesn't really show as much breadth. So I think three is a happy number for all of these, by the way: three segments, three problems, three solutions. And then you want to demonstrate that you're not the kind of person who just gets locked into one solution and cannot see anything outside of it, which is something I'm sure you've seen in some of the PMs you've worked with in your career, or just people in general. It can be challenging to collaborate with people who get really locked into one way to solve a problem. — Yeah, definitely.
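The "three of everything, then prioritize" move Ben describes, such as picking among three problems by how often each comes up and how much it hurts, can be sketched in a few lines of Python. The problem names and 1-to-5 scores below are invented for illustration, not taken from the episode.

```python
# Illustrative sketch of the "pick one of three" prioritization step.
# Problems and their 1-5 frequency/severity scores are made up.

def prioritize(problems):
    """Rank problems by frequency x severity, highest first."""
    return sorted(problems,
                  key=lambda p: p["frequency"] * p["severity"],
                  reverse=True)

problems = [
    {"name": "time-matching friction", "frequency": 5, "severity": 4},
    {"name": "commitment mismatch",    "frequency": 3, "severity": 3},
    {"name": "no feedback on impact",  "frequency": 4, "severity": 2},
]

best = prioritize(problems)[0]
print(best["name"])  # the problem that hurts the most, most often
```

The same shape works for segments and solutions: list exactly three options, score them on two or three explicit criteria, and be able to say out loud why the winner won.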
Like, one of my principles is you've got to seek the truth, right? If the data says your solution is not correct, just admit it. Okay, so now that we have this overall structure, let's talk about, you know, this podcast is about
How to build your AI interview co-pilot step-by-step
AI. So let's talk about how we can get AI to help us prep for and think through these product sense interviews. — Yeah. So there's a bunch of people exploring different applications of AI to almost everything right now. But the way that I've chosen to spend my time in the context of this interview prep stuff is to take stuff I've been talking about for years, including lectures and frameworks and templates and mocks I've done, and figure out a way to create a resource for people to get more reps with the materials. Because ultimately you could really grasp the frameworks, but if you don't get practice with them leading up to your interviews, you're not going to be in good, I think of it like, match-day condition. You'll understand the rules of soccer, but you're going to get tired after 5 minutes and you won't be able to play the game. So you want to get to fitness level for the game. One way to get fit for the game, I think, is to see as many examples as possible of working through these questions by following these frameworks. And the status quo, the problem I set out to solve with AI here, is this: think about a question bank like Lewis Lin's question bank, which we can link in the show notes. That's like the comprehensive bible of questions for these interviews, especially product sense questions. There are almost 3,000 rows in that spreadsheet at this point, which are real submissions from people going through interviews. So there's essentially an infinite number of questions you could work through, but there aren't actually examples floating around on the internet of what a good answer looks like to all those questions. So you don't really know how to calibrate yourself if you want to work through five or ten of them and there's no good YouTube video of someone who's worked through them.
So what I wanted to build is a way for you to see what a good response looks like, one that internalizes my frameworks, for any question you could pull from that question bank. Here's how I got there; I think I'm still screen sharing, and people can do this part at home if they want. I went much deeper than this, which we can talk about, with a co-pilot that I sell. But suppose someone just wanted to take this Lenny post, which is a free resource on the internet, and use it to train their own Claude project. Claude projects are really powerful for this, and you could do the same thing in ChatGPT; we have both of them pulled up and ready here. If you wanted to create a set of instructions that gets you the ability to see what a product sense response looks like for any of the questions in the question bank, you could start by saying: hey, I want to create a project that allows me to practice; write instructions that internalize the framework from this link. Then you get an initial response, and you just ask it to make it 10 times more specific so that the AI really follows the instructions, and you get another artifact. So we took these instructions and used them as project knowledge. The project knowledge is basically going to tell the AI how I want it to handle tackling a product sense question. These are different from the ones in my co-pilot. — This is the prompt the AI came up with based on your post. — Correct. You just ask it to format in markdown, because usually that's the best format for these. And then the other thing I gave as project knowledge: I took my publicly available, totally free product sense interview template, downloaded it as a PDF, and attached it to the project as a project knowledge file. So what that gets you to is a point where you
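The project setup Ben walks through (framework steps as instructions, a check-in after each step, markdown formatting, a time box) boils down to one structured instruction document. A minimal, hypothetical sketch of assembling such a document programmatically follows; the step wording and the `build_instructions` helper are my own illustration, not Ben's actual co-pilot instructions.

```python
# Sketch: assemble Claude/ChatGPT project instructions as markdown.
# The five framework steps mirror the episode; exact wording is illustrative.

FRAMEWORK_STEPS = [
    "State assumptions and a game plan, then check in before proceeding",
    "Motivation: tie the product to the company's mission",
    "Segmentation: map ecosystem players, pick one, propose 3 segments",
    "Problems: persona + journey, 3 distinct problems, pick by frequency x severity",
    "Solutions: 3 genuinely different solutions, pick one, describe a V1",
]

def build_instructions(steps, time_box_minutes=35):
    """Render the framework as a markdown instruction document."""
    lines = [
        f"# Product sense co-pilot ({time_box_minutes}-minute time box)",
        "You play the candidate. Pause for a check-in after each step.",
        "",
    ]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    return "\n".join(lines)

instructions = build_instructions(FRAMEWORK_STEPS)
```

You would then paste the resulting markdown into the project's custom instructions and attach the interview template PDF as a knowledge file, as Ben does in the demo.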
Live demo: AI co-pilot walking through a Meta interview question
could ask it, "Hey, you're a PM at Meta. How would you design a product for volunteering?" By the way, I really like Willow Voice; it's a really good voice dictation tool. — Got it. Yeah. — So you can see it's basically starting by stating some assumptions. It's walking us through its game plan for the exercise. We're not going to get too deep into critiquing the content itself, because I don't think we have the time for it, but the basic structure of "state assumptions and then walk me through a game plan before getting into any of it" is spot-on with what I recommend doing. And this is important for candidates to remember, okay? Every time you work through a question, being reminded that this is where you need to start is really helpful. And then all you have to say is yes, and it's going to get into talking about the product motivation. So, the mission.
Why Ben designed the AI to be the candidate, not the interviewer
So why did you make the AI play the role of the candidate instead of the interviewer? — So, looking back, when I was getting ready for my Facebook interviews in early 2020, the way I ended up preparing was going on YouTube and seeking out as many mock interviews as I could find. A lot of them were available through a company called Exponent, which publishes a lot of these interviews. And again, I had no course, I had no co-pilot. I knew one person who had been a PM at Meta, and they were gracious enough to do a mock with me, but I had literally no idea how to prepare. So I thought: I'm just going to watch these and try to almost reverse engineer in my head what the right framework is for approaching these. In hindsight, I was pattern matching. I didn't use the term at the time, but I was trying to figure out the patterns. So I would see a question get asked, I would pause the YouTube video, and I would work on it on my own on my legal pad. Then, either when I got stuck or when I was curious how they did it, I'd resume the video and compare what I came up with to what the candidate in the mock came up with. And presumably it's a good candidate, because they're doing a YouTube video, so they must be good. I kept doing that; I did it with probably five to seven interviews for product sense, and I did the same for analytical thinking, which back then were called execution questions, and ended up coming up with my framework. Now, what I wish I had was the ability to almost choose my own adventure while I was walking through those, and ask: why did you choose this segmentation, or those problems? Why that kind of mission statement? Why this north star metric and not that one?
Is there a different guardrail you could pick? Right? I found my curiosity being like: I wish I could just ask them more questions about this. So, to tie this back to your question, I built the co-pilot to simulate the candidate because that's what I wish I had when I was getting ready for the interviews: the ability to really push and pull the candidate in the mocks I was working through, in the directions where I was unclear. — So is the right process here: first you build the co-pilot, the project, then maybe you get some questions from Lewis Lin's question bank, then you try a question yourself, and then you see how the co-pilot answers it and you learn by comparing answers. Is that the idea? — That's exactly the idea. Yeah. And, we can link these in the show notes too, I have two free lectures on Maven on how to practice for product sense and analytical thinking interviews with AI. In each of them, I share a framework for the workflow of using AI to practice, which is: pick a question, work through it on your own, compare your approach to what the AI does, identify the deltas, throw away the ones that don't make sense to you. But if something actually does make sense to you, and you like what the AI did better than what you did, incorporate it into your framework. Then work through a new question and repeat. — Yeah, and his question bank is real questions. But actually, you can probably just flip the same project around and say: hey, now I want you to be the interviewer. You can use Willow or whatever voice app, and then have the AI interview you, right?
— Yeah, and I actually just shipped an update to my co-pilot materials that supports ChatGPT voice mode. You could ask ChatGPT to either be the candidate or the interviewer, and go for a walk or sit at your desk, go through an interview, and have a back and forth. I find that the conversational voice mode dynamics on ChatGPT are excellent, but if I was just working in written text, I think Claude is still better at following instructions than the written version of ChatGPT. — Yeah, that'll be interesting to see. You're on a walk and the neighbors are like: why is this guy talking about what kind of user segments he's targeting? — Yeah. But again, I tell people you have to know where you are in your preparation phase. If you're in the very beginning, like where I was when I was watching all those YouTube videos trying to figure out the patterns, then seeing as many examples as you can of someone working through these questions, or listening to them, in this case from the AI, will help you start to notice the patterns, internalize them, and sleep on them. Every day you'll be a little more comfortable with the frameworks. And at some point, when you're ready to practice, you can turn the tables: now you're mostly in the hot seat, generating the responses and getting evaluated. — Yeah, it's kind of like trying to build an AI thing where you're loading in context. You've got to load your own context first with a bunch of example interviews and stuff, right? — Totally.
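The practice workflow Ben described a moment ago, pick a question, work through it on your own, compare against the co-pilot, and fold in only the deltas you agree with, can be sketched as a small loop. Every name here is an illustrative stand-in; `keep` is a placeholder for your own judgment call, and the answers are toy lists of framework steps.

```python
# Sketch of one practice rep from the workflow described in the episode.
# "Answers" are toy lists of framework steps; keep() stands in for your judgment.

def compare(mine, ai):
    """Deltas = steps the AI took that you didn't."""
    return [step for step in ai if step not in mine]

def practice_round(my_framework, my_answer, ai_answer, keep):
    """One rep: compare answers, fold in the deltas you agree with."""
    for delta in compare(my_answer, ai_answer):
        if keep(delta):                      # your call: does this delta make sense?
            my_framework.append(delta)
    return my_framework

framework = ["state assumptions", "segment users"]
updated = practice_round(
    framework,
    my_answer=["state assumptions", "segment users"],
    ai_answer=["state assumptions", "check in with interviewer", "segment users"],
    keep=lambda delta: True,                 # accept everything in this toy run
)
print(updated)
```

Then pick a new question and run the loop again; over many reps the `framework` list converges on the approach you can actually defend under pressure.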
So I don't know if we want to keep working through this in detail, but if it's interesting, we can do a quick comparison of the exact same instructions, slightly modified for ChatGPT because of the 8,000-character limit. Which, by the way, Peter, I do not know for the life of me why OpenAI cannot extend the character limit on its project instructions. OpenAI is ahead of Claude in so many ways, but this one just made no sense to me. And by the way, the same limitation applies to custom GPTs, which is also frustrating, because I built a GPT. So yeah, here it's doing the same thing: stating some assumptions. It's interesting that both of them chose to pick four assumptions. Then a game plan, sounds good, and it's checking in with me. So you can see that it works for both. — You know, ChatGPT is great, but for writing I prefer Claude, as it feels more human in some ways, and in form; ChatGPT tends to do bullet points and be a little more robotic. I don't know if you've noticed that big difference. — Totally. — Yeah. — You know what I actually found, it's kind of interesting: I think the best experience I was able to get in written form on ChatGPT was in an incognito session, because it disregarded all of the memory from other chats and all of the saved instructions. So it's like the pure essence of ChatGPT following exactly what I wanted it to. And you could argue that because Claude doesn't have cross-chat memory, it's not distracted by what you've discussed in other chats, and it doesn't get distracted by memory because it doesn't have memory. So theoretically, Claude is always going to be more focused than ChatGPT. — Yeah, that's true.
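The 8,000-character cap Ben mentions for ChatGPT project and custom GPT instructions is easy to trip over when porting a longer Claude instruction set. A trivial guard like this can catch the overflow before you paste; the limit value is as stated in the episode, and the helper name is my own.

```python
# Guard against the instruction-length cap mentioned in the episode.
CHATGPT_INSTRUCTION_LIMIT = 8_000  # characters

def check_instructions(text, limit=CHATGPT_INSTRUCTION_LIMIT):
    """Return (fits, overflow): whether the text fits and how much to trim."""
    overflow = max(0, len(text) - limit)
    return overflow == 0, overflow

fits, over = check_instructions("x" * 8_500)
print(fits, over)  # False 500 -> trim 500 characters before pasting
```

Knowing the overflow up front lets you decide which framework sections to condense rather than letting the UI silently truncate or reject the paste.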
I do like the memory feature, but for this particular use case it might actually be bad. — Yeah, totally. — Okay. So it'll basically walk through the users and the problems, and you can go back and forth with it, right? — Yeah. And I actually think that in this case I like Claude's approach a bit better. We got into segmentation; we're a little ahead here. So it's starting to talk about the ecosystem players, and then it chooses the individual volunteers. And then it's actually not following my instructions as perfectly as I would like, because it starts to get into the segmentation before checking in with me. — So you see this part, — yeah, — where it says, "Does focusing make sense before I break them into segments?" This should be a natural break point in the conversation. So if we want to see what the Lamborghini version of this looks like, we can ask the same exact question of my co-pilot, which has way more detailed instructions that I spent many hours creating, and more project knowledge files. So it starts with assumptions as well, but they're more focused and, I think, well formulated. It walks through its game plan for the exercise and counts how much time it spends, because time management is really important. Then we can say: yes, this plan sounds good. And what you'll see is that as it gets into the motivation and the mission statement, it's going to be much closer to the way that I coach this, touching on the key points. I know we don't have time to talk through all the exact content, but I want to show, for example, the difference in how it handles segmentation. I think that should give you a good grasp of some of the deltas here.
So it maps ecosystem players, picks an ecosystem player, and then it checks in with me before it starts defining segments. So I say yes. And that's important, because if you pick the wrong ecosystem player in a real interview and get into segments, only to find out the interviewer isn't aligned with you on which ecosystem player you picked, that can really derail the interview. — And let's try to stress test it, because I know in a real Meta interview, I haven't done one for a while, but I remember the interviewers are pretty stone-faced, and they would actually push and probe you; they wouldn't just say yes all the time, right? So let's actually try to challenge one of the assumptions it's making and see how it does. — Yeah. — So I can say something like: these segments feel pretty generic; can you help me understand how we can make them a bit more realistic? Which is a nudge you could expect from an interviewer. — So now it's getting a bit more specific. Yeah. — Can you help me understand the fundamental difference between these segments and why there's no overlap between them? Ah, now it's getting there. So it's going toward their primary approach to volunteering, — yeah, — using that as the core for the segmentation. You always need a core for your segments, and then you want to flesh them out a bit more. — Yeah. — So yeah, that's an example. And yeah, go ahead. — And then, did you prioritize the segment? — Yeah. So: with all of that said, which one would you pick and why? It takes the score of the reach and the underserved degree, but also ties it to Meta's core strengths and behavior patterns, things Meta is uniquely positioned to do. — Yeah, this is pretty good. — And it also can create network effects, if we get it right. So it creates a persona:
Alex, a 29-year-old software engineer who wants to help the community but finds traditional volunteering sign-up processes too rigid and time-consuming. So: yes. Basically, now it's going to map out a user journey for Alex and then come up with some problems along the way. — It didn't even take a couple of minutes, did it? — It's just so fast. I'm telling you, I actually recently had someone land an L6 role at Meta, and they did 50 reps with the co-pilot to get ready for their interviews, and they said it was the most valuable part of their prep. He described it like this: I don't know if you play basketball, but if you're shooting hoops and you have to keep chasing down your own ball every time, that's really slow; you don't get as many shots off. But if there's someone waiting under the basket who just keeps giving you the ball back, you get way more shots. — Reps. Yeah, way more reps. — Let's try to get all the way down to actually coming up with solutions. — Yeah, the solution. Yeah. — Yeah, let's do it. So we can just say yes. So, just for those listening, the problems it evaluated: there's time-matching
How to prioritize the right user segments and problems to solve
friction: he struggles to find opportunities that fit his schedule. There's one where people expect a lot of commitment, which doesn't match his flexible approach. And then: Alex rarely learns about the concrete results of the volunteering, so he doesn't really know what happens after he volunteers. It picks the first problem, time-matching friction: highest frequency and severity. It prevents him from volunteering in the first place, which means we don't get to solve any of the other problems for Alex if we can't solve that one. And it aligns with the mission we stated at the beginning. Excellent mapping back to the mission, which is really important for an interview like this. And then the solutions it came up with: the first is an AI-powered system that learns his availability patterns, interests, and location preferences to proactively surface relevant opportunities in his feed. The second is a system for last-minute volunteering opportunities, where organizations can post urgent needs and users get real-time notifications for immediate impact opportunities. And the third: break down traditional volunteering experiences into smaller, more flexible chunks that can be completed individually or combined, with clear handoff systems between volunteers. — Yeah, that is pretty creative. I wouldn't have thought about solution three. — Yeah. And you know what's interesting? Sometimes people ask me how they can learn more ways to come up with solutions. I'd say: "Those solutions are pretty cool, but I think it would help me a lot if I saw three other completely different solutions for solving this exact problem. Can you come up with three new, distinct solutions?" — So you can almost find the bounds where it starts to get hard to get more creative. But here, it came up with a new one:
Create volunteer groups where Alex and his friends can indicate when they're free, and the system will match them to opportunities to go volunteer. They can bank volunteer hours by contributing when they're available, and organizations can withdraw from that time bank when they need help, so that creates some kind of buffer in the system. There's a direct integration with the calendar, where he can mark times as available for volunteering and organizations can book those directly. Right? So, I mean, I think these are six pretty different solutions to the problem. I'd have to take more time and map them back to the exact problem. — Yeah. — But I could say: out of the six solutions that you proposed, how many of them directly solve the problem that we prioritized? Sorry, go ahead, Peter. — I was going to say, because it actually lists the impact and cost, it should be pretty easy to prioritize one. — Yeah. So it says: okay, you're absolutely right to call this out. Only two out of the six directly address the core scheduling friction. So: which solution would you pick and why? And then you get to the end of the exercise. We'll see what it picks. Smart volunteering matching. And then: yes, I want a V1. I think that's great. So it outlines how the initial experience would work, and I also programmed it to come up with a couple of key risks and how it would mitigate them. — Yeah, that's like extra credit, to actually talk through all of that. — Yeah. You can also get asked what success would look like for the first version; that's another flavor of it. You're not supposed to get as deep into metrics as in an analytical thinking interview, but some verbalization of how you would define success for it would be helpful too. — Yeah. Awesome. This is awesome, and we'll put the link to this co-pilot in the show notes.
This is very thoughtful. So, Ben, we just talked about using AI to prepare for interviews. What are your thoughts about using AI during the interview? — Yeah, so I have a slide on this in the two lectures I mentioned about how to use AI to practice for product sense interviews. Basically, using AI during interviews is something I'm not comfortable with unless there's an explicit expectation with the employer that you will be using AI during the interview, because I think we're probably going to see more and more companies start to screen for AI fluency during interviews. And in that case, maybe there's going to be an expectation of screen sharing of some kind, in which case AI won't be a way to cheat on interviews. It'll just be part of the happy path of demonstrating competency in interviews. But most companies, at least for the kinds of interviews I'm coaching, are still expecting you to not be using any aids or assistance to get through the interview. So what I tell people is you don't want to cheat with AI during interviews because, one, it's unethical. It's cheating. Two, it can hallucinate, in which case you might say dumb things, and you need to not say dumb things during interviews; AI is not perfect. So you can't say things that you can't defend with your own thinking, otherwise it's going to go poorly. Another reason is that interviews are genuinely about finding a good match and a good fit. So if you cheat to get through an interview, even if you get away with it, you're getting a job that maybe you're not actually the right fit for, because you cheated to get there. That's a short-term strategy that's not going to serve you in the long term. And then the last thing I say is that relying on AI for interviews is not future-proof.
And what I mean by that is I don't think it's a sustainable strategy. There's a moment right now where there might be an arbitrage opportunity, where people can get away with using AI and companies won't know about it. But I think that arbitrage window is going to close in the not-so-distant future. So I just don't think it's a sustainable strategy. — Yeah. As an interviewer, I think it's pretty easy for me to tell if someone is using AI. You know, you're either reading something, or maybe I can just ask you a curveball question. It's pretty easy for me to tell if that's happening. I actually had one interviewer tell me that they knew a candidate was using AI because the candidate had glasses, and they saw in the reflection off of their glasses that ChatGPT was pulled up on their monitor. They were like, "I literally see you using ChatGPT right now." — Wow. Okay. Yeah. — Yeah. No, you have nothing to hide. It's fine. — Yeah. So let's wrap up with some closing tips. I mean, you've helped hundreds of people at this point, and we just talked about the structure and using AI, but what are some of the points during this process
The biggest mistakes candidates make when challenged by interviewers
where people struggle the most, right? Beyond just the standard tips about not jumping to the solution, what have you observed? — So I'll say that when it comes to specific sections of the exercise, every single person is different. Some people have a really hard time coming up with a mission statement for some reason, whereas for other people that's a breeze and they really struggle with segmentation. Some people are cool with both of those but have a very hard time coming up with unique problems. Some people are fine with all that but have a hard time coming up with solutions. So I think everyone has their own weak points that they have to work on. But if I were to find a theme of something that can destabilize people, or really change the trajectory of an interview from going okay to going poorly, it's this: interviewers will nudge. What do I mean by nudge? They will ask clarifying questions, or just make sure they follow your thinking, or, kind of like what we did, ask something like, "Yeah, but can you help me understand that a little bit better?" Some kind of challenge to the content. Strong candidates who can keep their composure, and who know that the interviewer is actually trying to help them rather than trip them up, will react really positively and use those nudges to strengthen their answer. Whereas for some candidates, it can almost cause them to choke up a little bit and get stressed, because they feel like things are not going well, and that creates a sense of anxiety. So, to zoom out: managing your own psychology, and just remembering that the interviewer wants you to succeed, that they're not there to trip you up but to help you, or at least to try to set you up for success, is really important as a candidate. — Yeah.
And why is that? Why does the interviewer root for you? Because they just want to hire the right person, right? — So I can tell you that if I were to score the level of enjoyment I got out of interviews at Meta, when I was an interviewer there, the most enjoyable interviews were the ones that were going well. Okay. And the reason they were more enjoyable is I could just take my notes. I knew exactly what I was going to do when I was done. I basically didn't have to pull out the content that I needed; it was just being given to me on a silver platter by the candidate. Okay, so that's fun. The less fun interviews were when it was painful to try to get the signal I needed, because the candidate was not prepared enough. They don't know what structure they want to follow; they don't know what they want to say. It's like I'm trying to do my job and they're not helping me do my job, so now I have to go into overdrive to do my job in the interview and get the signals I need. So that's why the interviewer is rooting for you, in a way: basically, the better you perform, the easier their job is. — Yeah, I think that's really good advice, especially for experienced PMs. Some people like me, with a lot of experience, might just be like, "Hey, I'm going to win this because I already know how to do a PM job," and that can totally backfire. No matter how experienced you are, you've got to prepare for this stuff, right? — And you can think about this for your colleagues too, your collaborators at work. If you've been working with a designer on a really big presentation and that designer gets in front of the room, you're rooting for them to crush it, because that's going to be good for you too, right?
You know, it's the same kind of thing: if you have an outcome in mind that is in your benefit, and there's someone who's got the ball and it's their move, you want them to do well so that you do well. — Awesome. So, where can people find your training and your co-pilot? — There's a link in my bio on LinkedIn. So if they want to just go to Ben Erez on LinkedIn, I replaced the link to my course with the co-pilot for the next couple of months, because my next cohort will be in October. So they can go and find the co-pilot there. But if they want to learn more about my course, they just have to go to the featured section of my LinkedIn and they'll see all the information there. And if they just want one place to go, it's benerez.com. The key links to the various things that I've been up to are there. — Great. All right, man. Well, thanks so much for sharing your knowledge, both for free and also with your course. I think preparing for interviews is very stressful, and people worry about their jobs and everything, and you're definitely doing a great service to the whole community. So thank you. — I appreciate that, Peter. Thanks for having me on, and you're doing a really good service for the community as well by doing this podcast. I'm a big fan. — Thanks. Yeah.