Set a high-stakes, online exam on “Natural Language Processing & ChatGPT”

Justin O'Brien

Originally posted on LinkedIn on November 11, 2025.

Set a high-stakes, online exam on “Natural Language Processing & ChatGPT” for 600 students and you have not designed an assessment; you have announced a public challenge to the very people best equipped to out-think it with AI. 🤔

At Yonsei University in Seoul, a large number of students are now under investigation for using AI and assorted “creative methods” during a remote mid-term. 👀 The official response: scan hours of surveillance footage of faces, hands and screens, invite confessions with the promise of a zero grade, and threaten suspension for anyone caught later.

Students, meanwhile, are caught between three pressures: ⚖️

1️⃣ First, get the grades. This is not optional. Scholarships, visas, jobs, family expectations: all wired directly into a numerical output on a transcript.

2️⃣ Second, work out the rules. Is AI banned, allowed, encouraged, allowed-but-only-if-we-don’t-notice, or “use it unless you get caught, in which case it was obviously cheating”? Policies live in a grey zone of vague institutional statements and fiercely idiosyncratic module outlines.

3️⃣ Third, navigate a system that outsources clarity to “academic autonomy”. In theory, autonomy protects good teaching. In practice, it also protects: “Let’s invigilate via webcam and retroactively negotiate the ethics on social media.”

Seriously: if you sat down and tried to design the least educational, least trustworthy way to examine AI-literate students in 2025, you would get very close to “zoom-panopticon MCQ with mass confession offer”.

This is not just a Yonsei problem. It is a global HE failure mode. 🚨 If your students can use AI, design as if they will. If your assessment only works when everyone pretends not to know what a language model is, the problem is not the students.
We have options that do not involve turning bedrooms into test centres:

- in-person problem-solving;
- open-book tasks where AI use is assumed and must be documented;
- code and workflow notebooks;
- short orals and vivas;
- staged submissions that show thinking, not just answers;
- collaborative work assessed on process and reflection, not who typed the final sentence.

The irony is obvious: the students in this case were accused of using the very tools the course exists to teach. 🔄 The question for the rest of us is less “How do we catch them?” and more “Why are we still setting exams that make cheating the only rational interpretation of the brief?”

❓ What’s one change you are making - or wish you could make - to create assessments that are AI-inclusive, not just AI-proof? ❓

Share your strategies and thoughts in the comments. https://lnkd.in/dvgbnrmT

#HigherEducation #Assessment #AI #ChatGPT #AcademicIntegrity #EdTech #FutureOfEducation #AssessmentDesign #Pedagogy #YonseiUniversity #GenerativeAI
