AI system flaws push SHA into chaos - MPs
Lawmakers say an AI tool used by SHA to screen admissions is flagging genuine applications as fraudulent. The system runs with thin human oversight, creating delays, appeals backlogs, and avoidable stress for schools and families.
If you work in education, this isn't just a tech hiccup. Trust in admissions is fragile. One faulty model can derail enrollment targets, funding forecasts, and the student experience in a single term.
What likely went wrong
- Training data didn't reflect real applicant patterns, so the model learned the wrong signals.
- Risk thresholds were set too aggressively, pushing false positives through the roof (a quick sketch after this list shows the effect).
- No "human-in-the-loop" checkpoint before rejecting applications.
- Weak documentation and explainability, making reviews slow and inconsistent.
- Model drift: data changed, but the system wasn't recalibrated.
- Identity checks and document verification weren't integrated cleanly, triggering mismatches.
- Vendor opacity: limited visibility into features, metrics, or failure modes.
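To make the threshold point concrete, here is a minimal sketch on synthetic data; the score distributions and the ~2% fraud rate are illustrative assumptions, not SHA's actual numbers:

```python
# Minimal sketch, assuming synthetic scores: how an aggressive fraud
# threshold inflates false positives. Distributions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
is_fraud = rng.random(n) < 0.02               # ~2% of applications are fraud
scores = np.where(is_fraud,
                  rng.beta(5, 2, n),          # fraud scores skew high
                  rng.beta(2, 5, n))          # genuine scores skew low

for threshold in (0.3, 0.5, 0.7):
    flagged = scores >= threshold
    false_pos = int(np.sum(flagged & ~is_fraud))
    fpr = false_pos / int(np.sum(~is_fraud))
    print(f"threshold={threshold}: {flagged.sum()} flags, "
          f"{false_pos} genuine applicants caught (FPR {fpr:.1%})")
```

At the most aggressive threshold, most flags land on genuine applicants, which is exactly the failure mode MPs are describing.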
Immediate steps for admissions teams
- Pause auto-rejections. Route all "fraud" flags to a human review queue.
- Publish a simple, time-bound appeals process with a clear contact point.
- Create a manual override protocol with audit logs for every decision (a minimal logging sketch follows this list).
- Notify affected applicants and set expectations on timelines and next steps.
- Appoint an incident lead. Track metrics daily: flagged apps, false positives, time to resolution.
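For the override protocol, a minimal sketch follows, assuming an append-only JSONL file is acceptable as the audit record; all field names are illustrative:

```python
# Minimal sketch of a manual-override audit log. Field names and the JSONL
# store are assumptions, not a prescribed schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    application_id: str
    model_decision: str   # e.g. "fraud_flag"
    human_decision: str   # e.g. "approved"
    reviewer: str
    rationale: str
    timestamp: str

def log_override(record: OverrideRecord, path: str = "override_audit.jsonl") -> None:
    """Append every human override so each decision stays reviewable later."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_override(OverrideRecord(
    application_id="APP-1042",
    model_decision="fraud_flag",
    human_decision="approved",
    reviewer="reviewer.a",
    rationale="Documents verified directly with the issuing school.",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```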
Fixes for the next 30-90 days
- Audit your data. Re-label samples, remove noisy fields, and document known edge cases.
- Run fairness tests across demographics and applicant types. Publish the results internally.
- Tune thresholds using recent cycles. Validate with a confusion matrix, not vague accuracy claims.
- Operate the model in "shadow mode" before re-enabling auto decisions (see the sketch after this list).
- Red-team the system: simulate fraud, spoof documents, and test adversarial inputs.
- Set review SLAs. If the model flags a case, humans have X hours to decide.
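A minimal shadow-mode sketch, assuming you can log the model's call alongside the human decision without acting on it; the record shape and names are illustrative:

```python
# Minimal sketch of shadow-mode evaluation: the model scores each case,
# humans decide, and you only compare. ShadowResult is an assumed shape.
from dataclasses import dataclass

@dataclass
class ShadowResult:
    application_id: str
    model_flagged: bool
    human_flagged: bool

def disagreement_rate(results: list[ShadowResult]) -> float:
    """Share of cases where the model and the human reviewer disagreed."""
    if not results:
        return 0.0
    return sum(r.model_flagged != r.human_flagged for r in results) / len(results)

batch = [
    ShadowResult("APP-001", model_flagged=True, human_flagged=False),
    ShadowResult("APP-002", model_flagged=False, human_flagged=False),
    ShadowResult("APP-003", model_flagged=True, human_flagged=True),
]
print(f"Model-human disagreement: {disagreement_rate(batch):.0%}")  # 33%
```

Re-enable auto decisions only once disagreement stays consistently low on cases humans have already cleared.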
Governance you can put in place now
- Define RACI for admissions AI: who owns the model, data, decisions, and appeals.
- Update vendor contracts: audit access, explanation depth, uptime, rollback rights, and liability caps.
- Maintain a risk register and change log for every model update.
- Complete a privacy impact assessment and document lawful bases for data use.
- Stand up a lightweight review board to evaluate high-risk features before launch.
- Train staff on reading model explanations and spotting common failure patterns.
- Monitor core KPIs: false positive rate, appeal win rate, average decision time, and applicant satisfaction (a computation sketch follows this list).
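A minimal KPI sketch, assuming decisions land in a table like the pandas DataFrame below; the column names are stand-ins for your own records:

```python
# Minimal sketch of core KPI computation. The DataFrame columns are
# illustrative stand-ins for real admissions decision records.
import pandas as pd

decisions = pd.DataFrame({
    "flagged":        [True, True, True, False, True],
    "was_fraud":      [False, True, False, False, False],
    "appealed":       [True, False, True, False, True],
    "appeal_upheld":  [True, False, True, False, False],
    "decision_hours": [12, 48, 30, 6, 72],
})

genuine = decisions[~decisions["was_fraud"]]
flagged = decisions[decisions["flagged"]]
kpis = {
    # share of genuine applications wrongly flagged
    "false_positive_rate": float(genuine["flagged"].mean()),
    # share of appeals that overturned the original flag
    "appeal_win_rate": float(flagged["appeal_upheld"].sum()
                             / max(int(flagged["appealed"].sum()), 1)),
    "avg_decision_hours": float(decisions["decision_hours"].mean()),
}
print(kpis)
```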
Questions to press your vendor on
- What are the training data sources and known limitations?
- Which metrics do you optimize, and can we see confusion matrices by cohort?
- What are the top failure modes and how are they detected in production?
- How often is the model updated, and how do you prevent regressions?
- What explanations are available at decision time, and can we export them?
- How does the handoff to human review work, and what's the SLA?
How to communicate with students and parents
- Use plain language: what the system does, why a flag occurred, what happens next.
- Share timelines for review and a single channel for status updates.
- Offer an escalation path for time-sensitive cases (scholarships, visas, housing).
- Close the loop with a short survey to spot friction you missed internally.
Helpful frameworks (free and practical)
- NIST AI Risk Management Framework - clear guidance for risk, measurement, and governance.
- UNESCO: AI in Education - policy notes and ethics guidance specific to schools and universities.
Upskill your team
Admissions leaders don't need to become data scientists, but they do need literacy in model basics, thresholds, fairness, and incident response. A focused learning path saves months of avoidable mistakes.
- Courses by job role - pick learning paths relevant to admissions and student services.
- Latest AI courses - keep your team current on practical, policy-aware AI use.
The bottom line
AI can speed up admissions, but only if people stay in control. Start with clear governance, tighter thresholds, and a fast human review loop. Measure what matters, fix what breaks, and keep applicants informed at every step.