AI adoption hinges on data quality: inside Clinical Architecture's PIQI framework

Bigger pipes won't fix bad data. PIQI offers a common, open framework to score and improve clinical data quality so AI, payers, and providers can trust what flows.

Published on: Oct 08, 2025

Healthcare AI Needs Better Data, Not Bigger Pipes

Interoperability is advancing through initiatives like TEFCA and the growth of QHINs. The pipes are getting wider. The problem is the water. If the data flowing through those pipes is incomplete, inconsistent, or implausible, it drives bad decisions, wasted spend, and worse outcomes.

For AI tools to be trusted in care delivery, they need reliable inputs. If adverse events trace back to faulty conclusions from poor data, adoption stalls. The stakes could not be higher.

What's Failing: Data Quality, Not Transport

Healthcare produces massive amounts of structured and unstructured data. Even so, key fields are missing, labels don't line up, and semantics get lost as data moves across systems. Within a single EMR, data may be "good enough" for local use, but it often breaks when shared externally.

Each EMR has its own dictionary and workflows. Even with ICD-10 and FHIR, real differences remain. Notes reflect each clinician's intent and shorthand, which is hard to translate. Provider organizations sit at the center of this challenge.

PIQI: A Common Rubric for Clinical Data Quality

The Patient Information Quality Improvement (PIQI) Framework offers a uniform, objective way to assess clinical data. Think of it as a standardized test for data. It scores data at a granular level across availability, accuracy, conformity, and plausibility, then pinpoints root causes so teams can fix issues at the source.
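PIQI's actual rubric and scoring mechanics are defined by the framework itself and are not reproduced here. As a rough illustration of the idea, the following hypothetical Python sketch scores a batch of records against two of the named dimensions, availability and conformity. All field names, values, and checks are invented for illustration and do not reflect PIQI's schema.

```python
# Hypothetical sketch of dimension-based scoring (not PIQI's real rubric).
# Each check inspects one data element and reports pass/fail; the score is
# the pass rate per dimension across all records.

RECORDS = [
    {"element": "birth_sex", "value": "F", "code_system": "SNOMED"},
    {"element": "birth_sex", "value": "female", "code_system": None},  # uncoded
    {"element": "birth_sex", "value": None, "code_system": None},      # missing
]

def check_availability(rec):
    # Availability: the element carries a value at all.
    return rec["value"] is not None

def check_conformity(rec):
    # Conformity: the value is drawn from the expected code system.
    return rec["code_system"] == "SNOMED"

def score(records, checks):
    """Return the percentage pass rate per dimension across all records."""
    results = {}
    for name, check in checks.items():
        passed = sum(1 for r in records if check(r))
        results[name] = round(100 * passed / len(records))
    return results

print(score(RECORDS, {
    "availability": check_availability,
    "conformity": check_conformity,
}))
# prints {'availability': 67, 'conformity': 33}
```

Scoring per dimension, rather than as one blended number, is what lets a drill-down say not just that a feed is weak but why: present-but-uncoded values fail conformity while still passing availability.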

Developed by Clinical Architecture in collaboration with Leavitt Partners, the PIQI Alliance brings together payers, government stakeholders, and providers to shape the open-source framework. PIQI is currently moving through the HL7 balloting process, and includes a rubric aligned to USCDI version 3 expectations.

Why This Matters Now

"We are at a watershed moment for healthcare," noted Clinical Architecture Founder and CEO Charlie Harp. Aging populations, rising multimorbidity, more complex medication regimens, and growing use of genomics demand precision. At the same time, provider capacity is shrinking and time with patients is limited.

We need technology to scale care. Without higher-quality data, that effort stalls, and patients pay the price.

Who Relies on This Data

Data quality is not just a provider problem. Payers depend on clinical data to support HEDIS measures and STAR ratings. Government agencies, including Social Security and the CDC, require accurate clinical signals for surveillance and program integrity.

Researchers, investors, HIEs, and analytics vendors all sit in the same value chain. If inputs are wrong, the conclusions are wrong. Everyone downstream loses trust.

PIQI in Practice: What the Scores Reveal

PIQI highlights which data elements are usable and why. In one example, an organization's allergy data scores 41% and conditions 52%, while demographics hit 75% and immunizations 77%. A drill-down exposes the specifics: demographics fall short because birth sex values aren't coded in SNOMED in 2,761 messages, making them invalid under the selected rubric.

Medication data often scores poorly because the indication is missing in the majority of cases. USCDI v3 requires a medication indication. If you're searching for patients on metformin for Type 2 diabetes, a missing indication blocks verification. Add the indication, and scores climb from the 70% range to the mid-90s, which changes the utility of the data for value-based care, analytics, and decision support.
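To make the indication example concrete, here is a hypothetical sketch of how capturing a missing indication moves a simple completeness score. The field names are invented for illustration; this is not PIQI's schema or any EMR's API.

```python
# Hypothetical sketch: a required element (medication indication) missing in
# half the records caps the score; capturing it at the source lifts it.

meds = [
    {"drug": "metformin", "indication": "type 2 diabetes"},
    {"drug": "metformin", "indication": None},
    {"drug": "lisinopril", "indication": "hypertension"},
    {"drug": "atorvastatin", "indication": None},
]

def indication_score(records):
    """Percentage of medication records that carry an indication."""
    present = sum(1 for r in records if r["indication"])
    return round(100 * present / len(records))

print(indication_score(meds))  # prints 50

# Remediation at the source: capture the indication at order entry
# rather than patching it downstream.
for r in meds:
    if r["indication"] is None:
        r["indication"] = "captured at order entry"

print(indication_score(meds))  # prints 100
```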

The Hard Part: Fixing the Source

Some errors come from mapping or transformation and are straightforward to correct. Others stem from how data is captured at the point of care: templates, defaults, free text, and coding choices. Those require workflow change, clinician engagement, and terminology governance.

PIQI doesn't just grade; it shows where to intervene. That's how quality shifts from a report to a roadmap.

What Healthcare Leaders Can Do Now

  • Set minimum data quality thresholds in contracts with data suppliers and trading partners.
  • Pilot PIQI with a subset of feeds (e.g., ADT, allergies, meds) and publish scorecards internally.
  • Align to USCDI v3 priorities; ensure required elements (like medication indication) are captured and mapped.
  • Standardize terminology use (SNOMED CT, RxNorm, LOINC) and fix known dictionary mismatches across EMRs.
  • Address workflow capture issues at the source, not just in downstream normalization.
  • Measure, remediate, and re-measure; treat data quality as an ongoing program, not a one-time project.
  • Join cross-industry efforts like the PIQI Alliance to share patterns, rubrics, and results.

Adoption Will Follow Incentives

Open-source access lowers the barrier to testing PIQI in real-world settings. Clinical Architecture is onboarding early partners, including HIEs, to refine the framework and rubrics. Broad adoption will accelerate if payers and CMS require transparent quality assessments and tie them to programs and reimbursement.

Set clear expectations, measure objectively, and make improvements visible. That's how the industry builds trust in shared data and clears the path for responsible AI.