Building AI in Healthcare with Indigenous Knowledge at the Core
AI is showing up in clinics, dashboards, and decision support. But for Aboriginal and Torres Strait Islander peoples, most systems miss the mark because they ignore cultural context and Indigenous data. That gap leads to poor recommendations, mistrust, and wasted investment.
As Dr Andrew Goodman puts it, "AI isn't intelligent. It's totally reliant on the data it's trained on and the algorithmic lens that's applied to it." If models are built on Eurocentric, deficit-focused data, they will produce deficit-focused outputs. Healthcare leaders can fix this, but only by putting Indigenous voices at the centre.
What the research found
Australia's national science agency and Indigenous partner organisations spoke with 53 people across executive leadership, service management, research, administration, and IT. The message was consistent: build responsible AI with Indigenous knowledge, data, and control embedded from the start.
The project was co-led by CSIRO's Australian e-Health Research Centre (AEHRC) with partners including VACCHO, ATSICHS Brisbane, the Centre of Excellence for Aboriginal Digital in Health, and the Australian Indigenous HealthInfoNet. Early findings form a clear starting point for safer, more effective AI in care.
Three priorities to make AI safe and useful
- 1) AI health literacy and appropriateness: Equip teams to make informed decisions about where AI fits, what problems it should solve, and how it aligns with community values.
- 2) Indigenous data sovereignty and governance: Train models with an Indigenous lens, reduce Eurocentric bias, and ensure Indigenous-controlled access, review, and evaluation of data and algorithms.
- 3) Self-determination in design and deployment: Indigenous partners lead development, set guardrails, and approve implementation, so AI is culturally safe, relevant, and genuinely useful.
Practical steps for healthcare leaders
- Stand up an Indigenous-led AI governance group: Include Elders, community-controlled health representatives, clinicians, data stewards, and legal/ethics advisors.
- Audit your data pipelines: Identify where deficit framing exists, where community permissions are unclear, and where bias enters (collection, labeling, sampling, feature engineering). A representation check is sketched after this list.
- Shift consent from "extractive" to "relational": Put community-agreed purpose, benefit sharing, and opt-out rights in writing. Review consent on a schedule, not just once.
- Co-design use cases: Start with problems that matter to local services (e.g., recall systems, care coordination, language-inclusive triage). Validate priorities with community before you write a line of code.
- Build evaluation that reflects culture and care: Beyond accuracy, track cultural safety indicators, subgroup performance, community satisfaction, and unintended consequences.
- Procure with accountability: Require vendors to disclose training data sources, bias mitigation, subgroup testing results, and pathways for Indigenous oversight.
- Pilot slowly, in context: Run small trials with Indigenous governance approval. Pair every pilot with a post-implementation review and a rollback plan.
- Invest in skills: Train clinicians, managers, and data teams on AI basics, risk, and cultural safety, then refresh annually as models and policies change.
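As a concrete illustration of the pipeline audit step above, here is a minimal sketch (Python with pandas) that flags under-represented groups in a training extract. The column names, groupings, and the 5% threshold are placeholders, not a prescribed standard: which variables to audit, and what counts as adequate representation, should be set by the Indigenous-led governance group.

```python
import pandas as pd

# Hypothetical grouping columns; the real variables and any data access
# must be agreed with the Indigenous-led governance group beforehand.
GROUPING_COLUMNS = ["language_group", "region", "age_band"]
MIN_SHARE = 0.05  # illustrative threshold: flag groups below 5% of the extract


def representation_audit(df: pd.DataFrame) -> pd.DataFrame:
    """Summarise how each subgroup is represented in a training extract."""
    rows = []
    for col in GROUPING_COLUMNS:
        counts = df[col].value_counts(dropna=False)
        for group, n in counts.items():
            share = n / len(df)
            rows.append({
                "grouping": col,
                "group": group,
                "n": int(n),
                "share": round(float(share), 3),
                "underrepresented": share < MIN_SHARE,
            })
    return pd.DataFrame(rows)


# Synthetic example only; never copy real community data into a demo.
df = pd.DataFrame({
    "language_group": ["A", "A", "B", "C", "A", "B"],
    "region": ["remote", "urban", "urban", "remote", "urban", "urban"],
    "age_band": ["0-14", "15-39", "40-64", "65+", "15-39", "40-64"],
})
print(representation_audit(df))
```

The output is a table the governance group can review before any model training begins: which groups appear, in what numbers, and where the data is too thin to support safe use.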
Guardrails that prevent harm
- Data sovereignty by default: Indigenous-controlled data access, storage location, sharing rules, and termination rights.
- Bias checks that matter: Test performance by language group, geography, age, and comorbidity. Document gaps before deployment.
- Human-in-the-loop for clinical impact: AI suggests; clinicians and community context decide. No auto-accept of recommendations in high-stakes settings (see the sketch after this list).
- Transparent feedback channels: Simple reporting for issues, plus response SLAs and public change logs.
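To make the human-in-the-loop guardrail concrete, here is a minimal sketch of a review gate: every model suggestion sits in a "pending review" state until a named clinician records a decision, and nothing is applied automatically. The `Recommendation` structure and field names are illustrative assumptions, not a reference to any particular product.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Recommendation:
    """An AI suggestion awaiting human review; fields are illustrative."""
    patient_id: str
    suggestion: str
    model_version: str
    status: str = "pending_review"   # never "accepted" without a reviewer
    reviewer: Optional[str] = None
    decision_note: str = ""


def record_decision(rec: Recommendation, reviewer: str, accept: bool, note: str) -> Recommendation:
    """Clinicians (with community context) decide; the model only suggests."""
    rec.reviewer = reviewer
    rec.status = "accepted" if accept else "declined"
    rec.decision_note = note
    return rec


# Usage: a suggestion stays pending until a named clinician records a decision.
rec = Recommendation(patient_id="demo-001", suggestion="recall for follow-up", model_version="v0.1")
assert rec.status == "pending_review"
record_decision(rec, reviewer="Dr Example", accept=False, note="Not appropriate for this client right now.")
```

The design point is that the default state is "not applied": the system has no code path that turns a model output into an action without a human decision attached to it.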
How to measure progress
- Improved care access metrics (timely follow-up, reduced missed appointments) in Indigenous communities.
- Fairness metrics: minimal performance gaps across Indigenous and non-Indigenous cohorts (a minimal sketch follows this list).
- Documented community approvals, periodic reviews, and benefit-sharing outcomes.
- Workforce confidence: rising AI literacy scores and appropriate use rates.
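One way to operationalise the fairness metric above is to compare a key performance measure across cohorts and report the gap. The sketch below uses recall on a hypothetical follow-up prediction task with synthetic data; which measures matter, how cohorts are defined, and what counts as an acceptable gap should be decided with Indigenous governance, not by the data team alone.

```python
from collections import defaultdict


def recall_by_cohort(records):
    """records: iterable of (cohort, y_true, y_pred) with binary labels."""
    tp, fn = defaultdict(int), defaultdict(int)
    for cohort, y_true, y_pred in records:
        if y_true == 1:
            if y_pred == 1:
                tp[cohort] += 1
            else:
                fn[cohort] += 1
    return {c: tp[c] / (tp[c] + fn[c])
            for c in set(tp) | set(fn) if (tp[c] + fn[c]) > 0}


# Synthetic example only: (cohort, follow-up actually needed, model prediction).
records = [
    ("cohort_a", 1, 1), ("cohort_a", 1, 0), ("cohort_a", 1, 1), ("cohort_a", 0, 0),
    ("cohort_b", 1, 1), ("cohort_b", 1, 1), ("cohort_b", 1, 1), ("cohort_b", 0, 1),
]
recalls = recall_by_cohort(records)
gap = max(recalls.values()) - min(recalls.values())
print(recalls, f"recall gap: {gap:.2f}")  # document the gap before and after deployment
```

Tracking this gap over time, alongside community satisfaction and cultural safety indicators, gives governance groups a simple, repeatable number to hold deployments to.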
Dr Goodman and collaborators are clear: if AI is to benefit "our mob," it must reflect Indigenous voices, data, and ways of knowing, with Indigenous-led governance at every step. Anything less risks repeating past harms.
Where to learn more
Build AI literacy across your team
If your service is setting up governance, pilots, or procurement frameworks, structured learning helps. See curated options here: Latest AI courses.