Deloitte AI report controversy is a wake-up call for HR: Set vendor AI rules, demand audit trails
Canada's federal government awarded Deloitte up to $1.1 million to help Employment and Social Development Canada design a reusable process for developing and deploying AI solutions. Soon after, Deloitte faced public scrutiny over reports in Australia and in Newfoundland and Labrador that included fabricated citations.
Ebrahim Bagheri, professor of information and responsible AI at the University of Toronto, says employers should start from a realistic premise: "There should be an assumption for anyone who's entering into any contractual agreement that Gen AI will be one way or the other used in the process." Even if a company restricts AI use on paper, enforcement is hard - and buyers have little visibility into how consultant teams actually produce content.
Bagheri cautions against assuming this kind of AI use is knowingly approved by senior leadership. "I don't think that the company as a whole knew that one of the people in the project...were actually going to use Gen AI... And I don't think realistically that the company actually knew that those citations were hallucinated."
Assume AI is in the workflow - then govern it
Blanket bans don't work. HR and procurement should assume vendors will use LLMs somewhere in their process and set enforceable rules around that use - in writing. That starts with contract language that defines how AI can be used, by whom, for which tasks, and how outputs are verified.
Put a "permissible procedure" into every vendor contract
Bagheri's core advice: make AI use explicit and auditable. "Contracts should outline what is it that's permissible." He adds that rules should cover who can use AI-assisted outputs, how content from LLMs is reviewed, who is informed, and to what extent AI-generated content is allowed in different circumstances.
Don't accept a glossy final report - require audit trails
Newfoundland and Labrador reportedly paid $1.6 million for a report that cited sources that didn't exist. Australia flagged similar issues and received a partial refund. To reduce this risk, change what you ask for.
"Anything that's produced as a deliverable within a contract should have 'audit trails' of the production of the content," says Bagheri. Instead of one polished PDF, require version histories, draft iterations, review notes, and source validations.
Embed oversight and increase sign-offs
LLMs can produce credible-looking text fast. That makes real-time oversight essential. "You want the authorities who are giving their signature on the final deliverable to actually be engaged with the content," Bagheri says.
Have internal subject-matter experts attend working sessions, challenge assumptions, and sign off at key milestones. This boosts accountability and builds in-house expertise.
HR and procurement checklist: build this into your next SOW
- Permissible procedure for AI use: Define allowed tasks (e.g., drafting vs. data analysis), approved tools, who may use them, data that is off-limits, required human review, and fact-checking steps.
- Deliverable transparency: Require versioned drafts, change logs, reviewer comments, prompts or prompt summaries where relevant, model/tool names and versions, and a source register with working URLs/DOIs.
- Verification gates: Stage sign-offs by named vendor and client approvers. Include fact-check samples and citation validation at each gate.
- Warranties and remedies: Vendor warrants no fabricated citations and no unverified AI outputs. Include rework at vendor expense, fee reductions/refunds, and indemnities for plagiarism or IP misuse.
- Data protection: Prohibit entering sensitive data into public chatbots. Require anonymization, API-based use when feasible, data residency commitments, retention limits, and security attestations (e.g., SOC 2 or ISO 27001 if appropriate).
- Tool governance: Maintain an approved AI tool list, ban auto-citation generators without verification, and require disclosure of plugins or third-party services.
- Source quality: Require primary sources where possible, with links, access dates, and validation notes. No "citation to nowhere" - a simple automated spot-check is sketched after this checklist.
- IP and ownership: Clarify ownership of deliverables, prompts, fine-tuned artifacts, and analysis code. Ensure you receive a license to all work products needed to operate and audit.
- Performance metrics: Track fact-check pass rates, time-to-correction, and defect density across drafts, not just in the final report.
- Right to audit: Reserve the right to review process artifacts, tool logs, and quality checks during and after the engagement.
- Training and attestation: Require vendor staff training on responsible AI and periodic attestations that the permissible procedure was followed.
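Two checklist items - citation validation and the fact-check pass rate - lend themselves to simple automation. Here is a minimal sketch, assuming the vendor delivers the source register as a CSV with citation and url_or_doi columns (both the file name and the column names are hypothetical).

```python
# Spot-check that every entry in a vendor's source register resolves.
# Assumes a hypothetical source_register.csv with 'citation' and 'url_or_doi' columns.
import csv
import urllib.request

def resolves(link: str, timeout: int = 10) -> bool:
    """True if the URL (or a bare DOI, resolved via doi.org) answers with HTTP < 400."""
    url = f"https://doi.org/{link}" if link.startswith("10.") else link
    try:
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "citation-check/0.1"}
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:  # 4xx/5xx errors, timeouts, and malformed URLs all count as failures
        return False

with open("source_register.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

failures = [r["citation"] for r in rows if not resolves(r["url_or_doi"])]

# Feed the result into the performance-metrics gate above.
pass_rate = (len(rows) - len(failures)) / len(rows) if rows else 0.0
print(f"Fact-check pass rate: {pass_rate:.0%}")
print("Unresolved citations:", failures or "none")
```

A link that resolves is only the first gate - some servers also reject HEAD requests, so expect a few false failures - and a reviewer still has to confirm each source actually supports the claim it is cited for.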
Shift deliverables from "final-only" to "final + process"
Bagheri's framing is simple: "I want to see the iterations or the versions...so I can see how the report or the final deliverable was produced." Insist on that visibility. It lets you spot issues before rollout, not after.
Align with recognized frameworks
- NIST AI Risk Management Framework for risk controls and governance patterns.
- Government of Canada's Directive on Automated Decision-Making for impact assessments and transparency expectations.
90-day action plan for HR and procurement
- Amend current master agreements with a "permissible procedure" addendum and audit-trail requirements.
- Update SOW templates with stage gates, version history, and citation validation checklists.
- Launch a vendor AI disclosure form (tools used, data flows, review steps, model versions) - a machine-readable sketch follows this list.
- Embed one SME in each active engagement; schedule weekly working sessions and draft reviews.
- Stand up a secure repository for version histories and evidence logs tied to each deliverable.
- Define remedies and fee adjustments for fabricated citations or unverifiable content.
- Run a pilot on one live project; measure defect detection pre- vs. post-implementation.
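The disclosure form in the third item is most useful when it is machine-readable, so responses can be compared across vendors and engagements. A minimal sketch using Python dataclasses; every field name here is an assumption to adapt to your own template.

```python
# Hypothetical structure for a vendor AI disclosure form (all field names are assumptions).
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    engagement_id: str
    tools_used: list[str]       # named tools/services, not a generic "an LLM"
    model_versions: list[str]   # pinned versions, not "latest"
    tasks: list[str]            # what AI was used for (drafting, analysis, ...)
    data_shared: list[str]      # categories of client data sent to each tool
    review_steps: list[str]     # who verified outputs, and how
    attested_by: str            # named vendor lead signing the attestation
    attestation_date: str       # ISO date, e.g. "2025-11-03"

disclosure = AIDisclosure(
    engagement_id="SOW-2025-014",
    tools_used=["vendor-internal drafting assistant"],
    model_versions=["model-x-2025-06"],
    tasks=["first drafts of sections 2-4"],
    data_shared=["anonymized survey aggregates only"],
    review_steps=["SME fact-check", "citation validation", "partner sign-off"],
    attested_by="J. Doe, engagement lead",
    attestation_date="2025-11-03",
)
```

Pinning named tools and model versions matters: "we used an LLM" is unverifiable, while a specific tool and version can be checked against your approved-tool list.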
Why this level of control is reasonable now
"A 200-page report no longer represents intellectual engagement with the subject," Bagheri notes. You're not overreaching by asking for versions, validations, and sign-offs - you're restoring the standard of care that AI has quietly eroded.
Upskill your team
If your HR and procurement teams need a faster path to practical AI literacy and vendor oversight, explore focused learning tracks by job role here: Complete AI Training - Courses by Job.
Bottom line: assume AI is in play, codify how it's allowed, and demand visibility into the work. That's how you protect your organization - and get better results from your vendors.