The death of higher education has been greatly exaggerated, again.

Originally posted on LinkedIn on November 13, 2025.
Ian Richardson’s recent piece in Times Higher Education argues that higher education has “dismally failed” to engage with AI and risks being displaced by nimbler players. His provocation may be useful, but his diagnosis is not.

First, the easy bit: yes, a lot of institutional AI activity is performative 🎭. A few licences, a detection tool, a hastily written AI policy, an ethics paragraph, and a hope that someone in IT will sort the rest. That is not strategy. That is wishful procurement 💸.

But the bigger claim, that “the sector” has uniquely failed to engage, is analytically empty 🧩. There is no single sector. A small, underfunded regional university 🏫 operating under punitive regulation ⚖️ and a staffing crisis does not have the same room for manoeuvre as a well-capitalised research giant 🏛️. Treating them as one homogeneous laggard is a category error, not insight.

There is also the awkward fact that many universities are already doing serious work: assessment redesign, internal copilots 🧩, sector principles, home-grown tools 🛠️, staff development, data governance frameworks 🔐. Imperfect, patchy, slow, yes. But not the cartoon of aristocratic indifference.

The comparisons with banking, consulting and healthcare miss the point. Banks can close branches and nudge customers onto apps 💳. Universities cannot simply “close seminars” and ship everyone to chatbots without colliding with their public missions, professional standards, labour agreements and (quite reasonably) suspicious students. Slowness is not always complacency. Sometimes it is the cost of remembering that education is not just another tech product 🧭.

If there is a real risk of irrelevance, it does not come from failing to “embrace AI” with enough enthusiasm. It comes from outsourcing the thinking. If your institution’s AI response is primarily detection tools and marketing copy, then you have a problem.
If, instead, it is redesigning assessment so that AI is assumed, not feared; putting data protection, academic freedom and labour conditions at the centre; and investing in evidence, not just hype 📊, then you are at least in the right game.

AI is not a salvation narrative or a stick to beat academics with. It is an infrastructure question: who sets the rules, and how that aligns with what a university is for.

By all means read pieces that shout “engage or die” ☠️. Then ask a duller, better question: “Show me, in writing, how our AI choices improve learning 📚, protect people 🧍‍♀️, and strengthen our public role 🌍. If you cannot, then the risk is not that we are too slow on AI. The risk is that we are no longer doing our job.”

🔗 Times Higher link: https://lnkd.in/df8shFWY

#HigherEducation #AIinEducation #DigitalStrategy #UniversityLeadership #Ethics #AcademicFreedom #AIIntegration