"There's no blueprint": A live survey on AI education for vascular surgeons seeks clear answers
The debate isn't whether vascular surgeons should learn AI. It's how, when, who pays, and who delivers. That's the focus of a new Europe-wide survey led by Amun Hofmann, who says plainly: "There's no blueprint."
Why this matters for education leaders
AI is already showing value in outcome prediction and image recognition. If education leaders don't set guardrails now, informal training will outpace policy. Education teams have a short window to define competencies, standards, and assessment before ad-hoc workshops become the default.
What the survey is asking
- Should AI training be mandatory in vascular surgery? If so, at which level: residency, fellowship, or CME?
- Where should it sit: national curricula, society-endorsed programs, or separate certifications?
- Who funds it: hospitals, national societies, ministries, industry (with safeguards), or mixed models?
- Who delivers it: national bodies, a supranational group, universities, or accredited providers?
- What competencies matter most: data basics, imaging workflows, model limits, bias, safety, and regulation?
- How should proficiency be assessed: OSCE-style stations, case logs, scenario simulations, or micro-credentials?
Early participation snapshot
So far, about 130 respondents from 17 European countries have completed the SurveyMonkey questionnaire, with a target of 300. Broader input across regions will raise the quality of the final recommendations.
Learning from the endovascular playbook
Two decades ago, endovascular methods scaled through workshops, online modules, and proctoring. AI education likely needs a similar stack: short, repeated practice, hands-on cases, and mentorship, plus clear ethics and safety guardrails that didn't feature as heavily back then.
A practical framework you can adopt now
- Foundations: Data literacy, model basics, clinical validation, limits of prediction, bias, and safety. See the EU AI Act overview for risk classes and obligations (European Commission) and ethics guidance (WHO).
- Clinical applications: Imaging triage, outcome models, perioperative planning, documentation support, audit pipelines.
- Data and tools: DICOM, EHR extracts, de-identification, dataset QA, prompt writing for clinical contexts, model selection, result interpretation (a minimal de-identification sketch follows this list).
- Governance: Safety checks, human oversight, incident reporting, version control, vendor due diligence, local data boards.
- Delivery: 60-90 minute modules, case-based workshops, sandbox environments, simulated call scenarios, faculty office hours.
- Assessment: Mini-CEX for AI-supported decisions, image triage drills, reflective logs on false positives/negatives, portfolio review.
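To make the "Data and tools" item concrete, here is a minimal sketch of a de-identification step using the open-source pydicom library. The file names and the three-tag list are illustrative assumptions only; a real pipeline must cover the full DICOM confidentiality profile (PS3.15), not just these fields.

```python
# Minimal de-identification sketch, assuming pydicom is installed
# ("pip install pydicom") and "scan.dcm" is a hypothetical local file.
# Real pipelines must cover the full DICOM confidentiality profile,
# not just the three example tags below.
import pydicom

EXAMPLE_IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate"]

ds = pydicom.dcmread("scan.dcm")

# Blank direct identifiers before the image enters a teaching dataset.
for keyword in EXAMPLE_IDENTIFYING_TAGS:
    if keyword in ds:  # pydicom Datasets support lookup by tag keyword
        setattr(ds, keyword, "")

# Drop vendor-specific private tags, which often carry hidden identifiers.
ds.remove_private_tags()

ds.save_as("scan_deid.dcm")
```

In a case-based workshop, trainees can run a step like this on a sample dataset and then do a QA pass to verify no identifying tags remain, which exercises both the tooling and the governance habits in one session.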
Funding and delivery options to consider
- Society-led accreditation with shared curricula and common assessment standards.
- University partnerships for stackable credits and micro-credentials.
- Industry support with strict firewalls: content independence, disclosure, and audit trails.
- Supranational coordination to avoid fragmented standards across Europe.
How education teams can help right now
- Circulate the survey to program directors, training committees, and national societies.
- Add a 10-minute AI education item to your next curriculum meeting or grand round.
- Start a pilot: one imaging workshop, one governance session, one assessment exercise.
- Map current training to the framework above; note gaps, owners, and a 90-day plan.
The opportunity
This survey is a chance for educators to set practical standards before practices solidify on their own. With no blueprint, clear priorities from the field will decide what becomes core training versus optional extras.
If you need curated AI course lists by job to support early pilots, you can browse Complete AI Training: Courses by Job.