Dallas Healthcare Center Scales AI Models With Real-Time Monitoring Layer
The Parkland Center for Clinical Innovation has deployed 19 AI models that generate 34.9 million patient predictions annually. As the organization scaled from proof of concept to production, it faced a problem: monitoring all those models consumed so much staff time that engineers stopped building new ones.
The solution was unconventional. PCCI built an AI system to monitor the AI systems, an approach CEO Steve Miff calls "AI on top of AI." This monitoring layer watches each model's performance in real time and flags deviations that warrant human review.
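The article doesn't describe PCCI's internal implementation, but the general pattern, rolling performance tracking with deviation flags, can be sketched briefly. All names, thresholds, and metrics below are hypothetical illustrations, not PCCI's system:

```python
from collections import deque

class ModelMonitor:
    """Tracks a model's rolling performance and flags drops below its
    established baseline for human review. (Hypothetical sketch.)"""

    def __init__(self, name, baseline_score, tolerance=0.05, window=500):
        self.name = name
        self.baseline_score = baseline_score  # e.g., accuracy at validation time
        self.tolerance = tolerance            # allowed drop before flagging
        self.recent = deque(maxlen=window)    # rolling window of outcomes

    def record(self, prediction, actual):
        """Record whether a prediction matched the observed outcome."""
        self.recent.append(1.0 if prediction == actual else 0.0)

    def check(self):
        """Return an alert string if rolling accuracy drifts below baseline."""
        if len(self.recent) < self.recent.maxlen:
            return None  # not enough outcomes yet to judge
        score = sum(self.recent) / len(self.recent)
        if score < self.baseline_score - self.tolerance:
            return (f"{self.name}: rolling accuracy {score:.3f} below "
                    f"baseline {self.baseline_score:.3f}; flag for human review")
        return None

# Usage: one monitor per deployed model, checked as labeled outcomes arrive.
monitor = ModelMonitor("sepsis-risk-v3", baseline_score=0.91)
```

The key design point is that the automated layer only decides what deserves human attention; people still make the call on flagged models.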
Transparency as the Foundation
Miff presented the approach at Convergence AI Dallas, a conference focused on AI trends across the Dallas-Fort Worth region. His core argument was straightforward: transparency drives trust, and trust is what clinicians need before they'll actually use these tools.
One example illustrates the point. PCCI's mortality prediction model for trauma patients runs directly in the emergency department's electronic medical record. Rather than showing only a risk score, it displays the five factors contributing most to that score. Doctors and nurses see the reasoning behind the prediction at the moment they make care decisions.
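For a linear risk model, surfacing the top contributing factors is mechanically simple: each feature's weight times its value is its contribution to the score, and the display shows the largest ones. A minimal sketch of that idea follows; the feature names, weights, and patient values are invented for illustration and are not PCCI's model:

```python
def top_factors(weights, values, k=5):
    """Return the k features contributing most to a linear risk score.
    (Illustrative only; weights and features are invented.)"""
    contributions = {name: weights[name] * values[name] for name in weights}
    return sorted(contributions.items(),
                  key=lambda item: abs(item[1]), reverse=True)[:k]

# Hypothetical trauma-mortality features, model weights, and one patient.
weights = {"age": 0.04, "systolic_bp": -0.02, "gcs_score": -0.30,
           "lactate": 0.25, "heart_rate": 0.01, "injury_severity": 0.15}
patient = {"age": 67, "systolic_bp": 88, "gcs_score": 9,
           "lactate": 4.2, "heart_rate": 122, "injury_severity": 25}

for factor, contribution in top_factors(weights, patient):
    print(f"{factor}: {contribution:+.2f}")
```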
This approach proved useful both for compliance and for adoption. Clinicians were more likely to trust and act on predictions they could understand.
The Staffing Trade-Off
PCCI's models address serious conditions: pediatric asthma, HIV, sepsis, colorectal cancer. The organization identified 2.8 million people as candidates for early intervention based on high-risk predictions.
But each new model added monitoring work. "I'm going to lose all my team because they came to innovate," Miff said. "They didn't come to monitor a model."
The AI monitoring layer freed staff to focus on development rather than maintenance.
Responsibility Stays With the Organization
Mirna Abyad Baloul, an IP strategist and lawyer, offered a legal perspective during the session. She emphasized that responsibility for AI outputs cannot be outsourced or transferred to the technology itself.
"If you have a lawsuit, I can't sue AI," she said. The company that deploys an AI system remains liable for what it produces.
Baloul recommends combining automated quality checks with human oversight until organizations are confident that AI outputs meet their standards. Government regulation of AI lags years behind the technology, she noted, placing the burden on companies to govern themselves.
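One common way to operationalize that advice is a confidence gate: automated checks pass high-confidence outputs through and route everything else to a person. The sketch below is an assumption about how such a gate might look, not anything Baloul specified; the threshold and helper function are illustrative:

```python
def passes_quality_checks(output):
    """Placeholder for automated validation (schema, range, policy checks)."""
    return output is not None

def review_gate(output, confidence, threshold=0.90):
    """Route an AI output based on automated checks plus a confidence cutoff.
    Threshold is illustrative, not a recommended standard."""
    if confidence >= threshold and passes_quality_checks(output):
        return ("auto-approve", output)
    return ("human-review", output)  # a person verifies before the output is used

decision, payload = review_gate({"risk": 0.72}, confidence=0.84)
print(decision)  # -> "human-review"
```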
Scale Without Sacrificing Safety
For organizations deploying AI in healthcare, the lesson from PCCI is that transparency and monitoring must be built in from the start, not bolted on later.
Miff emphasized that end users need to trust the models will perform as intended. That trust doesn't come from the AI itself; it comes from the organization's ability to explain, verify, and continuously monitor what the system does.
PCCI is a 40-employee nonprofit affiliated with Parkland Health, Dallas County's publicly owned hospital system. That structure gives it the agility to innovate quickly while retaining access to real-world deployment at scale.