AI is ubiquitous on college campuses. We sat down with students to hear what's going well, what isn't, and how students, professors, and universities alike are navigating it in real time.
0:00 - Introduction
0:22 - Meet the panel
1:06 - Vibes on campus
6:28 - What are students building?
11:27 - AI as tool vs. crutch
16:44 - Are professors keeping up?
20:15 - Downsides
25:55 - AI and the job market
34:23 - Rapid-fire questions
- I think AI and especially how students use AI, it's very telling of those motivations. You know, there are some students who are using it to complete work for them, you know, to do it on their behalf. And there are some students who, you know, are staying away from AI, or using it proactively. They're using it in ways that reinforce their learning. It's our responsibility now as students to, you know, use this tool to, you know, achieve your own individual outcomes.
- Everyone is talking about how AI is changing education, but we figured, what better way to learn about these changes than by asking actual students? My name is Greg; I'm from Anthropic, and today I'm joined by four university students who are here to give us the inside scoop. So, why don't you all introduce yourselves? - Hey, my name is Zain. I'm a final year student at the London School of Economics, and I study accounting and finance. - Hi, my name is Chloe. I'm a junior at Princeton, studying psychology and computer science. - Hi, I'm Marcus. I'm a senior at UC Berkeley, studying econ and data science. - I'm Tino. I'm a second-year grad student at the Thunderbird School of Global Management at Arizona State University, and I'm studying a master's in digital transformation. - Amazing. Thank you for being here. - Thank you. - So, let's start by setting the scene.
What are the vibes like on campus these days with AI? How are people thinking about it? - Yeah, so I did a survey not too long ago on how students are using AI, and I saw that, you know, 90% of students are using AI in their day-to-day workflows, using it to summarize lectures, using it to answer problem sets, to help give feedback on assignments that they had written. And so, really a diverse sort of, like, set of use cases for AI among students. It's having an impact. Universities are having to manage that. We're seeing changes in rules and regulations. We're seeing, you know, some courses ban it, other courses encourage it. And so, students are in a bit of a gray zone right now where they may not know how to use AI. - Yeah, I also concur. There is a lot of chaos in understanding AI and what role it'll play in universities, but at the same time, there's a lot of energy surrounding it, especially being at Berkeley and being so close to AI hype in the Bay Area. I also agree that, like, over 90%, if not basically everyone, uses AI in some way or form, mostly in the form of chatbots, yeah, like, summarizing lectures, doing assignments, answering questions where or when teachers or TAs, like, can't answer them for you. I will say there is also a lot of confusion on, like, the administration's or, like, a professor's end on how AI can play a role in the classroom, and we're slowly seeing some changes around that. - As business students, I see even myself and my colleagues, like, we use AI for a lot of different things. We use AI to understand and analyze business cases, do market research, and come up with financial research as well. People also use AI to complete quizzes, you know, like, when you don't have time, 'cause when you're a grad student, sometimes you've got multiple jobs that you're working, and you don't always have time. So, sometimes, you can see someone who just, you know, quickly submits answers and everything.
So, that's the bad side of it that, when you're in grad school, you know it's supposed to be, like, a time for you to expand your critical thinking, be someone who is, like, more decisive, someone who has, like, substance in how you make decisions. And so, that's, I think, the bad side of it. - Yeah, I would say definitely the vibes are, like, really chaotic right now, both, I guess, in a good and bad way. The good, obviously, like Zain said, is there's a lot of exploration and cool projects and stuff popping up. The bad is that, because everything is such a gray area, it can be very difficult to stay resilient and hold yourself accountable. It's very easy to just be like, "I'm just gonna give up and feed this all to AI and not do any of the thinking." I've noticed that there's a lot of tension, as well, around, I guess, like, how much is over-relying on AI versus how much is it good to actually have, like, an actual cooperation between the two. And I have also noticed that some people are really into it, so they use it a lot in terms of, like, all of their different workflows, while others, like my humanities and maybe some social science friends, are a bit more hesitant and have a bit more concern. So, there seems to be a growing, like, identity-polarization effect that I think will be really interesting to see how it goes. - I'm curious, you say that they're hesitant. Are they hesitant but still using it pretty regularly, or is it a mix of some are using it a lot, some are not using it at all? - Yeah, great question. I think there's a spectrum.
A lot of, especially, like, the pure humanities students have just completely opted out, I think, because, often, in their classes and research, there's a lot more of just close reading, while for the social sciences, I think I've noticed a slow trend where they're trying it out more and just seeing AI being applied beyond just, like, pure computational or, like, machine learning, like, contexts, which has been cool. - And in a lot of computer science and also, like, other engineering classes, it's still kind of a taboo to use AI. I mean, in application these days, we're using a lot of, like, AI coding assistants to build actual projects outside the classroom. But in the classroom, we're still using, like, VS Code and blocking out these AI features because professors, at least at the moment, are still kind of discouraging it. But we might see a shift in the next few years. I mean, I know Stanford is beginning to have a course about learning to use AI tools in, like, software development and engineering. I think that's the number one, I guess, breakthrough with these AI tools: the accessibility and barrier into building something, like a project or software in general, has gone down a lot. And especially with a lot of courses, like, with, like, Claude and, like, the developer docs, for example, it's been really helpful in teaching folks who don't come from, like, a computer science background, like in political science or in, like, psychology, or even something like math, to be able to build their own projects on the side, from, like, ideation to, like, a working prototype that's on a website or some kind of app deployment, within the span of, like, a few days.
- Yeah, I've seen that a lot at my university, where students who, you know, don't typically have the confidence to go and build with, you know, raw code have now, you know, started using the terminal, for example, which is incredible to see. And, you know, Claude Code, for example, makes that so much more accessible, so much friendlier, which I think has been one of, like, the most crazy changes so far. Like, myself even, I don't have a computer science background, but I'm comfortable in the terminal now, which is crazy, and I've seen it within societies as well. So, we have a number of societies at LSE, and they each have, like, an Instagram page; pretty basic, easy to put together. But now, we're seeing societies have websites, and these websites have a load more information, and they're building them with Claude Code, because it's just so much easier now. - So, it seems like the AI transformation for students has already happened, and we have mixed feelings about it. One thing that you all share is that you all are Claude campus ambassadors, and you are each leading a student organization called the Claude Builder Club on your campus. So, first of all, maybe can one of you, like, give a quick summary of what it means to be a Claude campus ambassador, and then the club that you're leading? - Yeah, I mean, as Claude campus ambassadors, our number one job/role is to be the point of contact between what Anthropic and Claude are offering and students, and basically being a facilitator for that on campuses. - Cool. And since it's a club about builders, what are people building? What are you seeing happen at your clubs? - A lot of cool things have been built. I'll reference an example from a recent Vibe-a-thon I did. I think a lot of the most fun ideas are not the most technically-savvy ones, but the ones that really start with human emotion. So, one that was really cool was called the Princeton Prospect.
There's, like, kind of a bucket list of things people would like to do before they graduate, and it kind of gamifies that through a leaderboard. And the best part of it actually was, the winning team, they were just a bunch of freshmen, and they were all roommates, so they just came into this for fun, and with that human insight, they were able to build out something that resonated with everyone, and that was something really cool that I enjoyed seeing them build. - I think one cool tool that my friend and I built was this place where you could basically put in your lecture slides, and it gives you sort of, like, professor annotations down the side of each slide. So, it's so cool. I've been using it so much for, you know, just revising through content in preparation for end-of-term exams, and it's so good because it kind of preempts my questions, and so I've prompted it such that it knows that I want to know the definitions of certain things on the slides. The slides can sometimes be a bit abstract and missing context. So, it's adding in the context on the side. - Did you get a good grade in the class? - We'll see. - I think one of my favorite things that someone has built with AI is an app called Courseer. We have this challenge where, like, the most amazing, fun classes, like, when it's time to register for classes, they just run out so quickly, and you can, like, wait weeks and weeks to get, like, a seat in that class. So, what they did is they built this AI, and you can, like, just input the course that you want, and then it's gonna alert you the moment a seat is open in that class, so you can register for it. - Oh, I'd like that at our school. - Yeah, instead of you, like, going back and checking class search every day, "Is this class available?" - You get a notification that you jump on. - Yeah, you just jump on and you get a seat. Yeah, I love that. - I need that. - Your next project idea. - No, exactly.
It's actually funny, we have, like, a shortage of seats at my university. I'm talking about, like, actual seats, like in the library, for example. And so, again, my friend built this amazing tool that basically scans all of the data you can get on, you know, which classrooms are free. And so, it basically points out all the free classrooms and tells students, you know, if there are no seats in the library, then go to these ones. And again, a non-technical student building this, which is insane, unheard of, but, you know, these are some of the possibilities. - Yeah. - I've seen in the past few hackathons or entrepreneurship classes that a lot of students have been looking into, like, healthcare use cases, mixing computer vision with the Claude API to interpret a person's, like, emotions for, like, a mental health use case, like, signs of stroke via, like, a camera on, like, someone's phone or, like, a separate, like, medical device. Or even signs of, like, dementia, for example. And all of them have been really interesting. - It's so cool that people are spending their time doing that in school, 'cause that is kind of the magic of being a student: you do have time to just work on ideas and try new things and come up with projects that are just for fun. They're just the side projects. - Yeah, absolutely. - Yeah. Cool. So, let's talk about learning with AI.
I think one of the more tricky parts of this is that, you know, AI can be a tool to help you learn about anything you wanna learn, but it can also be used as a crutch that maybe prevents learning if you lean on it. So, I'm curious how you each personally balance that, how you see students balancing it, and if you see students at your university balancing it well. - I think initially what we noticed was that, even, like, amongst our classmates, at first it was just like, whatever the AI gives you, that's what you put. And then, over time, attitudes have started to change. We're like, "Let's just put a little more effort in," and not just effort into what we're putting together. Because let's say you have a group project and there are, like, four or five people on that group project; everyone gets a different part. And if everyone just does the first thing that AI gives them, that's not gonna produce a very good project at all. - I think one thing about AI and education is that it's very telling of students' motivations, like, why you're at university. I think you can typically group students' objectives for university into three. The first, I would say, is to learn, to, you know, deepen your understanding in your chosen topic. I would say a second objective is to, you know, position yourself for a career, you know, get a good job. And I think the third is the social element of university, where students are coming to network, to have fun, enjoy themselves. I think, like, those are the three broad objectives for students, and every student weights those differently. Like, some students, you know, they're coming to learn, and they don't really care about the social aspect of uni, and there are other students who, you know, they're coming because they want to get a good job, and they want to enjoy university, and they don't really care about the learning, really.
And I think AI, and especially how students use AI, is very telling of those motivations. You know, there are some students who are using it to complete work for them; you know, to do it on their behalf. And those are typically the students who want to save time and want to, you know, put their efforts and motivations towards other things, which is fine. And there are some students who, you know, are staying away from AI, or using it proactively. They're using it in ways that reinforce their learning, that make them better and make them stronger. And those are typically students that want to learn themselves, that want to, you know, have some depth to their knowledge. And so, I think that's what AI is revealing: like, why you're really at university, because we have the tools now, to be honest, to get through university without actually learning much. It's our responsibility now as students to, you know, use this tool to, you know, achieve your own individual outcomes. If you want to learn, you can. And if you want to bypass, you know, a lot of the exams and assignments, you can pretty much do that. And I don't think there's gonna be any sort of, like, rules or regulations that come in place that can change how students use AI, because, like, fundamentally, I don't see how that would be possible. And so, I think the responsibility is in the student's hands; it's like, you're in control. - Yeah, definitely. I actually agree, and I think a lot of how I use and approach AI is about, like, intention. I think, even before I actually start prompting or asking it to do stuff, I like to think about: am I asking it to, for example, directly complete a task for me? Or is it more of, like, something that I'm brainstorming and I'd like to think about from different perspectives? And I think that piece is something I'm starting to see a lot more of, 'cause I think AI is very good as, like, a catalyst, especially for implementing and building things.
But the intention, I think, really comes from the students themselves. - I really resonate with that. I think, when these AI chatbots started coming out a few years ago, either because of the technical limitations back then, or just how little we understood about AI at the time, the typical workflow was just: you ask the chatbot a question, you get an answer, and you do that maybe, like, 50 to a hundred times across different conversations. Now I think people are becoming smarter and, like you said, are becoming more intentional with how they're using it. We're starting to have, like, more extended conversations about one specific topic. I've started, like, when I'm studying, I'll have projects on Claude where I would have one for each class; upload, like, the syllabus and a bunch of different course content for each project, and have a bunch of conversations acting as, like, individual files in, like, a folder, for example. And with these chatbots being able to, in recent years, manage context better, manage memory better, be a much more helpful assistant and, I guess, conversationalist when, like, working with me on a specific task, you wonder how long it'll take before the societal aspect of things is gonna catch up to how fast the technology is evolving. One example right now is that in, like, CS classes, I know a few professors who do say, like, "Hey, if you do use AI, like, you can put a disclaimer in, like, your assignment and also describe, like, how you used it in, like, each homework or lab assignment." But there isn't really, like, an integrated, like, framework thinking about, like, using AI in the class as part of the curriculum. And I think we're still kind of waiting on integrations like that into, like, education that we may see in the next, like, five years.
- So, you feel like, in general, your professors and the administration might be a little bit behind the students in terms of AI literacy and adoption? - Not mine. - Yeah, I think they're still adapting to it, and I think, naturally, students are the fastest adopters because we're just reacting to, like, what's out there, and we access information a lot quicker because we're, like, native to the internet. - Yeah. - I have to say, I've seen some, like, pretty cool advancements in some of the courses at my university. So, we have a course called LSE 100, and every first-year student has to take it. And when I did it, two years ago now, there was no, I mean, we had AI, but there was no guidance on how it should be used for this course. My brother actually is in first year now, and he's doing the course at LSE, and he's told me it's completely changed. So, they basically give you guidance on how to use Claude. They say you should have a conversation with Claude, give it a persona. So, they're giving guidance to students on how to actually use Claude in ways that aren't just direct outputs, you know, like getting the answers for your problems, but actually a conversation with it. And then, they ask for the conversation log because they wanna see, you know, how are you interacting with it? Are you asking, you know, good questions back, and is it a good conversation? And then, you film a video instead of putting an essay together. So, now it's a video of yourself. And so, you are encouraged to use AI, but now, in terms of, like, the marking, you know, you can't use it irresponsibly. - I have also noticed that for some of my classes. Like, the machine learning class I was taking this semester, they have their own chatbot, actually, that they built specifically to answer student questions, and if you wanna refer to lecture notes specifically, it's pretty helpful for that.
I do think, however, that this is more of a bandaid approach, because it doesn't really prevent students from just going to other AI tools that are not the school one to just ask for answers and advice. - Yeah. - University is a one-size-fits-all route at the moment where, you know, you have one lecturer for potentially 200, 300 students in a class, and those students all learn, you know, in different ways. And so, AI is acting more as a personalized tutor if you prompt it in the right way and if you, you know, encourage it to do so. And I've seen the learning mode from Claude where, you know, it's asking questions back to you. It's more of, like, a progressive development of understanding, which is good, and there are students that are using it. But I think, you know, it's about finding the students that, you know, want to learn and want to progress, because there are many students that, you know, if one AI tool goes away from, like, giving direct output or giving direct answers, we're gonna see just a shift of students to the other. - Tino, were you gonna say something about this, by the way? - Yeah, I was gonna piggyback on what Zain said, 'cause at my school, Arizona State University, we are super pro-AI. Our career management center built, like, a prompt bank for us, with prompts that we can use to, you know, work through different scenarios and roles. Also, for our sustainability class, the professor built her own bot as well, and there's actually a new class they introduced called Artificial Intelligence Chip Strategy and the Future of Work. And it was taught, like, for one semester, but people were like, "Yo, we need this class," and now it's taught, like, the whole fall and spring. - This is all very positive, which is great,
but I know that it's not all positive, it's not all roses. So, I'm curious, what are things that you are seeing that are not on the right track, or things that you're afraid of, or things that scare you? - I mean, cheating is, like, a top-three use case, if not, like, top one, in universities without a doubt. It just comes from, like, what we discussed. You put in a prompt or some input, and the chatbot gives out an output. And a lot of students, what they started off doing, and a lot of them are still doing, is just taking that output and, you know, submitting it, in a cycle. - I think, I mean, if you look at the interface, it's waiting for a question. We are given the questions from the university. It's never been easier to take that question and put it into the chatbot, and get the mark scheme, pretty much. And so, it's just so easy to get the answer, and you really have to be strong as a student to go and work on that problem by yourself and do it yourself. - Yeah, I have a bit more of a nuanced take. I have also noticed that, even for students who are using AI to build their own projects and, for example, to try out different types of, I guess, technical implementations, there's been a really strong sense of ownership shame whenever AI even gets mentioned: "Oh, when I was building this project, I used AI a little bit." Just because, like I said, I think the line between how much the human is using the AI versus how much the AI is actually just controlling the whole project is very blurry right now. So, especially at the vibe-a-thon, when I was, for example, asking the winners, like, "How did you use Claude in your projects?" I had seen a lot of them build out, brainstorm, think through, and, like, really iterate with Claude.
But when I asked them that question, a lot of them just defaulted to, "Oh, Claude was just, like, very helpful, and it did everything." Which I think shows that, right now, like, there's a lack of vocabulary and frameworks to, like, describe these types of AI usage, which I also think is what's causing a lot of this polarization effect, where schools are just completely banning it, but students are still using it regardless; hence a lot of the cheating and just, like, not really being intentional or using their brains when they're interacting with AI. I am a bit skeptical about the direction of this, just because I think students are now required to be the resilient ones in the age of AI, where they really need to be skeptical every single time they use it without guidance from schools and institutions. So, I feel like, if institutions or schools can't really adapt to this quickly enough, there is a danger in it just kind of skewing and going in a more polarized direction. - I will say, though, the sentiment and, like, how we interact with AI among students is changing. I think, like, as university students, we naturally do want to use our brains and use them for something that's interesting to us. In the past couple years, yes, people have just been pasting in questions as, like, prompts and taking the outputs to submit as, like, deliverables or assignments. But people are beginning to be more interested in, like, doing something more than that; like, taking more ownership of maybe their assignments, but even more importantly, like, I guess, projects on the side or things they want to make or explore. And I think a lot of students just kind of need that little push to see what's available and what's out there.
And back to the point about cheating, I think a lot of students are also realizing that AI is pretty bad at cheating in context, because there are all these patterns that start to come up, like, "Oh, there's, like, a lot of em dashes," or AI has a specific voice or tone, or it doesn't actually understand to the level of what you know about the class, which could be a whole conversation about how students actually know more than they think they do. - Yep. - Okay. Yeah. - Yeah, I agree. And I think students are evolving, you know, with AI. I think, when it first came out, everyone was very excited. Students, you know, were using the outputs directly, but now, like Marcus said, you know, students are being more, you know, intentional with their prompts, so potentially, you know, writing a little bit longer prompts; you know, directing Claude a little bit better than before. And I think that's just because we're getting more used to it. Like, myself, as a student, I must have spent, like, a thousand plus hours, like, talking to Claude now. Like, I know, you know, how it responds, and I'm learning more about the tool, and as a result, my interactions with it are getting better. And like you said, we're students. You know, we want to use our brains. The majority of us, you know, want to be intellectually stimulated. And so, I think we're moving to a time where students do genuinely use AI tools to benefit themselves and to actually, you know, go further, rather than kind of limit themselves, I guess, by just relying on its output. - Yeah. - I think, when it comes to, like, cheating, for example, you know, you've got that first level of: you ask a question, you get your output. But in my instance, the final boss is: can you present to us what you think? You gotta put together a presentation, 10 minutes, 15 minutes, defend your position, and the AI is not gonna be there, you know, at that time to speak for you or to give your ideas.
So, in that way, I feel that there's that, like, first level of, like, using it like you mentioned, maybe. But then, you get to a level where you need to, in our case, explain what you mean and everything. So, yes, there's that level of, like, people cheating, like, doing just, like, small quizzes. But then, in our instance as well, you actually have to always defend your position, so you have to know what you're talking about.
- Let's talk about, after college, entering the job market. First of all, maybe we can do, like, a thumbs up, down, middle. How does everyone feel about getting a job after graduation? - Like, constantly just like this. - Okay. Okay. Tell me more. - Okay, well, I guess, like, the good part, I think, is just having AI to be, like, a better, like, companion for, like, practicing for interviews, brainstorming, tailoring resumes, et cetera. Unfortunately, the downside is that companies are obviously also using AI a lot more, which involves a lot of HireVues. I've basically been talking to, like, a screen this entire recruiting cycle, which is great, but also can feel a little less human, because I feel like there's, like, no chemistry, like, talking to a screen. - Are you doing interviews where, like, you're talking to a robot? - Not explicitly, but it's just, like, kind of a question on a screen for me, and then I'm just, like, talking to myself. And I have also heard just a lot of anxiety about companies also using AI just to screen candidates. And I think this also has just not been great for, I guess, like, both my self-worth and also just, like, trying to figure out the best, like, interviewing strategy or even, like, what jobs to apply to, 'cause now it just feels so much more random than before. I'm curious, what do you guys think? - I agree with you, especially, like, on the screening of job candidates. It's so painful, because you realize, like, the entire process, from, "Hi, I would like to invite you to apply for this job," right up until you submit your CV. You've put time together, tailored your application, everything, and then 15 minutes later, "Sorry, we regret to inform you." When did you have time, you know, to review? - Yeah, exactly, the AI-generated email. - The AI-generated email. So, that's, like, I think the really, like, big downside of that.
The upsides really are that AI fluency has become a major asset. Like, for example, consulting firms now, I know the top four consulting firms, they used to hire generalist MBAs, but now they're looking for MBAs who've got AI fluency. So, if you understand, like, how to apply AI to different industries, then you're, like, their number one candidate. - Actually, back to, like, Chloe's point, I have had an AI, like, interview me before. - Really? - Wow. - And it was so nice. It would give me responses like, "Your response was super invigorating and informative and exciting." And then, "Let's move on to the next question." - Did you get the job? - No, but it was because I didn't qualify. I think they were looking for, like, rising juniors, and I was a rising senior. So, I still got auto-screened. But it wasn't as bad as I thought, I guess. Traditionally, like Chloe said, there's HireVues right now where, like, they take a recording rather than, like, an interactive conversation. I actually kind of enjoyed having a nice interviewer as an AI. - I agree. Okay, speaking of, you know, interesting uses of AI, Merriam-Webster named "slop" the word of the year, so I'm curious what AI slop means to you all, and how do you see it impacting the people around you on campus? - I think, like, AI slop for me is when I receive an output from Claude or any other AI tool that I know that, if I had just used my own brain, like, I could have come up with something better than that. Like, that's kind of slop for me. So, going back to job applications, when I'm asking it to, you know, help me write a cover letter, for example, which is a major use case for a lot of students, and it gives me a cover letter which is so generic; like, every other student is applying with this, and it's like, "This is not gonna get me the job." Like, that's, you know, the AI slop. - I think it's really funny that, like, AI responses can be so generic that it's its own voice at this point.
That's, like, a common, like, meme, I guess, for AI to have a lot of em dashes and certain sound bites, like, "You're absolutely right," or, like, "Let me think about that." Or it has this, like, two-sentence structure that it keeps giving me whenever I try to write, like, letters or scripts, for example, where it's like, "You're not reinventing the wheel. Like, you're building the next Tesla." - Yeah. - Honestly, it's everything you guys have said. Yeah, and then, like, you get the feedback, you get the output, and then it's up to you. You know, some people, if you work with them in a group, sadly they'll just paste that, and you could see that at the end, "Would you like Claude to keep..." you know. - Oh. - Yeah, that's- "Retry." - "Claude can make mistakes." Yeah, that's my definition of AI slop. - So, you mentioned group projects, and I think this is a big thing, right? When you have a group of four or five at university, and you have maybe a 5,000-word report due, how do you guys go about it? Because at my university, there are sometimes some students who, like, don't want to use it, and I remember, like, one student was saying, like, "I'm gonna do this project before you guys get your grubby AI hands on it." And I was like, "Okay." But, like, some students really- - Did he use that term? - Grubby AI hands. - Oh, man. - Oh my god. - Some students feel really strongly against it, and when you're working in a group, you know, you have to take into consideration other people's thoughts. - True. - Yeah, what are your guys' thoughts on that? - I can go first, 'cause we do a lot of those 5,000-word kind of projects, like, maybe create a business case out of this business dilemma. And how we do it, like, how we've recently started working on it, is we'll take the paper or the question and we'll, like, create an outline. We'll maybe ask AI, "Can you create an outline for this paper for me?" Like, "What should be in this paper?" and stuff.
And then, we divide it amongst ourselves. - One thing I like doing a lot is, yes, using that outline, and for, like, this example of a 5,000-word report amongst, like, four people, splitting it into different sections, and then for each person covering each section, it's up to you and how you want to use AI, whether you use it or not at all. And what I like to do personally is have a lot of, like, bullet points or just, like, thought-dumping into Claude and working with it to kind of structure my thoughts. So, going from random, like, bullet points or one-off phrases into more of an outline, and then into paragraphs that I can kind of manually edit the wording of so it's more like my voice and tone. And then, one thing I really like asking Claude, actually, is to give it the context of who is usually reviewing my work. For a job application, for example, it's, like, this VP or, like, recruiter, and then, like, in a class it's, like, a professor or a TA, and I ask, like, "Hey, here are some criteria. Rate my work, score out of 10." And I would do that maybe, like, two to three times, and it would always give me reasons about, like, why it gave me a certain score. - That's a good idea. - And what I could work on and improve. A lot of times, I like the feedback. Sometimes I think some of the feedback is a bit overzealous or ridiculous. And in newer models, like Sonnet and Opus 4.5, they're starting to show, like, a bit of urgency whenever I ask them to evaluate my work too much, almost like they're calling me out for overthinking. After maybe, like, the third try of, like, this, like, evaluation, they'll be like, "It's ready to ship." - Yeah. - Nice. Are there AI slackers in group projects? Like, people who you can tell are not turning on their brains for the project? - With their grubby AI hands. I mean, definitely.
I think something that is most helpful for me is, like, obviously besides alignment and just, like, being very intentional when you're using AI, a lot of face-to-face time, actually. So, what I like to do when I work on a group project is just block out a time chunk, sit down with my group, and we just talk about it as we work through it. I think, very often, it's easy to feel like you're alone when you're just working on a group project by yourself, which is what makes AI so tempting, 'cause you're just like, "Oh, what if I just had someone write it for me?" But if we were all forced to, let's say, sit down and talk together, like if someone had a problem, and work through it, I think that definitely helps a lot with the more human piece of working together. - Yeah. - I agree.
- Okay, I'm gonna shift us to some rapid-fire questions. So, each should be, like, one to two sentences maximum. So, my first question is, what is a tip that you have for students right now who are navigating this whole world of AI in education? - Learn it. Learn how to use it. It's only to your advantage if you understand how it can optimize your career, or if you decide to be an entrepreneur, how it can optimize your business. - If you're trying to learn new concepts or revising for an exam, start a new project for every class you're taking in university. Try and paste in all the relevant files, and perhaps you already have existing conversations where you've worked with Claude to go through certain assignments, and set the writing style to concise mode. That's been most helpful for me to get a quick rundown, in an efficient manner, of, like, every concept I need to cover for an exam. - Substack and open-source materials. There are so many cool people out there who know the best or newest ways to use different types of AI tools, and what I've found most helpful is just soaking that up like a sponge, and then applying it to my own projects. - Nate Jones on Substack. He's pretty good. My tip would be use the styles. So, you mentioned it briefly, the concise mode. The learning mode is fantastic. If you want to augment your own brain and augment your own skills, use the learning mode. It will ask you questions back. Be confident in your replies back, and you genuinely will get a better output than just leaning on Claude by itself. - All right. Next question. How do you personally draw the line, in one sentence, between using AI as a tool and using it as a crutch? Where do you find that balance? - If I was in a room like this, and I can't explain or defend what I've built, even if someone asks, like, a super critical or specific question, I think that's the line where you kind of don't really understand what's going on. - I totally resonate with that.
It's a mix of, like, the ownership and intentionality. If you can't really explain what you've done along with also including what AI's role was in your work or what you're doing, then that's a line for me. - Yeah. That's another line for me as well. Like, I should be able to explain it like I'm explaining to someone in fifth grade, whatever the output is, and I should be able to present it as well, even at a graduate level, anything that I prepared. So, that's my line. Anything I create with AI, I should be able to give that lower level and that upper level explanation. - Yeah, I agree with all of you. I think, if you're not comfortable with the content that you've produced, at the end of the day, like, is that really yours or are you just stealing that content from Claude? And so, just feeling comfortable, having some sort of, like, feeling of ownership that I've produced this work, that's the line for me. You know, there have been times where I've submitted pieces which are, like, fully AI, and it's just like, this is not gonna take me anywhere at the end of the day. But you learn that, and I think that's the biggest thing with students is that it takes time to learn those feelings, and you kind of have to give it that time. Like, a student might have to submit something 100% AI to realize that, actually, this was not beneficial for me. And I think universities need to be conscious of the fact that students will learn, and you've gotta trust the students, right? At the end of the day, they live their own lives, and, you know, you wanna set yourself up. You have that equality between students, and we'll figure it out. Like, we'll figure out what works, where it's good, where it's not. - Yeah, like holding space. - Exactly. - I feel like that's a fantastic place to end. I just wanted to say, you know, that "We'll figure it out" mentality. This whole time, I kind of expected this conversation to shift into doomerism, and it never quite did. 
I think, like, all of you are, you know, thoughtfully positive about the future in a way that, I think, is really exciting and really encouraging. So, thank you all for being here, for being honest, and yeah, I really appreciated this conversation. - Thank you. - Thank you for having us.