Computer says yes: why I’d prefer to be fired by AI

Originally posted on LinkedIn on September 29, 2025.
I was made redundant by a panel so mysterious it made the REF panel look like Eurovision. No minutes. No rationale. When I asked for the reasoning, the collective response was administrative amnesia: everyone present could recall nothing and apparently wrote down even less. I left with a letter and a shrug.

Here’s the heresy: I would rather have been fired by AI. Not because I want a robot to decide my livelihood, but because an AI-assisted process (done properly) would have forced the humans to be accountable. Computers don’t get tired, forget to take notes, or lose the rubric halfway through. Used as a clerk with a perfect memory, AI could have given me what I didn’t get: a neutral, detailed, auditable explanation.

What that looks like in the real world:

📏 Criteria frozen up front. Role requirements, weightings, and acceptable evidence published before anyone reads a self-assessment. If the ruler changes after you’ve seen the answers, it isn’t a ruler.

🧾 Evidence in, waffle out. The system compiles the relevant data and maps it to the criteria. No vibes, no prose gloss.

🪟 Transparent summaries, human decisions. AI produces side-by-side summaries with links to source documents; a named human panel makes the call, on record.

⚖️ Bias and adverse-impact checks. Automatically flagged and reviewed before anything is final.

🗣️ Reasons you can read. Point-by-point explanations tied to evidence, written in plain English and sent to the person affected.

🗃️ An audit trail that exists. Model and version, prompts used, who reviewed what, and when. Boring? Yes. Also the difference between “we can show our workings” and “we, er, can’t find them.”

✍️ A human signature that means something. A single accountable sign-off that says, in effect, “I’ve read this, I stand by it.”

Notice what AI is not doing here: it isn’t choosing who goes. It’s the tireless administrator, sorting, cross-referencing, and ensuring that when a difficult judgement is made, it can be explained without theatrics or telepathy.

In my case, the process produced no reasons and no record. If your system can’t explain itself, it’s not a system; it’s a shrug with stationery.

So yes, computer says yes: to criteria, reasons, and a paper trail. Then a human says the hard bit out loud and owns it.

If you’re designing or defending these processes, ask yourself: could you explain every step to the person across the table, and feel comfortable doing it again in a year? If not, you don’t need fewer tools; you need more daylight.

#HigherEducation #UKHE #Ethics #AIinHE #Accountability #Transparency #GoodGovernance #HR #Leadership