“AI and especially how students use AI, is very telling of those motivations.” This insight, offered by Zain Mirza, an undergraduate at the London School of Economics, cuts directly to the core challenge facing higher education today. The arrival of generative artificial intelligence has not merely introduced a new technological tool; it has acted as a mirror reflecting the true intentions of students navigating their academic and career paths. In a recent panel discussion hosted by Anthropic, Mirza, alongside fellow university students Chloe Lau (Princeton), Marcus Wong (UC Berkeley), and Tino Mavunga (Arizona State University), spoke with Anthropic’s Greg Feingold about the chaotic, exciting, and often ethically gray landscape of AI adoption on campus.
The current atmosphere in universities is defined by rapid, often unregulated adoption colliding with slow institutional response. Mirza noted that surveys show over 90% of students already integrate AI into their daily workflows, using it for everything from summarizing lecture notes to drafting assignments and problem sets. This widespread use, however, is not uniform. The panel highlighted a stark polarization: students in technical fields like computer science and data science are actively using AI to build and innovate, seeing it as a necessary extension of their skill set, while students in the humanities and some social sciences remain hesitant, deterred by a lack of guidance or outright bans from their professors.
The core tension the students identified is whether AI serves as a tool for augmentation or a crutch for evasion. As Tino Mavunga, a graduate student specializing in digital transformation, pointed out, when workloads are heavy it is tempting to simply offload tasks to AI. She noted that some students use it to "complete quizzes, you know, when you don't have time," treating it as a substitute for critical thinking. Chloe Lau reinforced this concern, stating that in the current "gray area" it is "very easy to just be like, I’m just going to give up and feed this all to AI and not do any of the thinking." This tendency produces what Marcus Wong later termed "AI Slop," generic and uninspired AI output, and it points to a fundamental crisis in assessment and intellectual honesty.
However, for students focused on leveraging technology for genuine learning and creation, AI has been transformative, lowering the barrier to entry for building complex projects. Marcus Wong shared that the accessibility of tools like Claude allows students without deep computer science backgrounds to "be able to build their own projects on the side from like ideation to like a working prototype... within the span of like a few days." Mirza recounted how his non-technical friends are now comfortable using command-line interfaces and building functional websites, a feat previously reserved for dedicated programmers. The key differentiator, according to Lau, is intentionality. When students approach AI not just for direct answers but for brainstorming, iterating, and exploring different perspectives, the tool enhances their intellectual output, acting as a personalized tutor rather than a cheat sheet.
Universities are struggling to catch up, but some are beginning to shift their pedagogical focus from evaluating final output to assessing the learning process itself. Mirza cited an example from the London School of Economics where a core course now requires students to engage with Claude and to submit the full conversation log and a video defense of their work in place of a traditional essay. This approach forces students to demonstrate how they interacted with the AI, shifting the assessment toward critical engagement and strategic use.
This shift mirrors the evolving demands of the job market. The panelists noted that companies increasingly rely on AI for candidate screening, producing a sense of disconnection that Lau summed up as "talking to a screen this entire recruiting cycle." Yet this reliance on AI in hiring has also created a new requirement: AI fluency. Mavunga observed that top consulting firms are no longer just hiring generalist MBAs; they are seeking candidates who "understand how to apply AI to different industries." Students who treat AI purely as a crutch will therefore find themselves disadvantaged when their superficial knowledge is tested in real-world scenarios or interviews that demand deep, defensible expertise.
The consensus among these young leaders is that generative AI forces a reckoning: students must now determine their true commitment to learning. The technology itself is neutral; its impact is determined by the user's motivation. If the student seeks depth and intellectual challenge, AI provides powerful augmentation; if the student seeks only a credential, AI offers a quick, but ultimately shallow, shortcut. The responsibility now lies squarely with the individual student to choose genuine engagement over "AI Slop."