Behind the Blue: Sen. Amanda Mays Bledsoe on Senate Bill 4, AI guardrails, and central Kentucky's future

Sen. Amanda Mays Bledsoe joins Behind the Blue to discuss a framework for responsible AI across Kentucky's state services. Clarity, oversight, and trust take center stage.

Categorized in: AI News, IT and Development
Published on: Feb 17, 2026

'Behind the Blue': Sen. Amanda Mays Bledsoe on UK, Kentucky and responsible AI development

LEXINGTON, Ky. (Feb. 16, 2026) - Artificial intelligence is moving fast. Kentucky lawmakers want to use what works, protect what matters, and keep the public's trust intact.

On the latest episode of "Behind the Blue," Kentucky state Sen. Amanda Mays Bledsoe - a Lexington native and University of Kentucky alum - joins host Kody Kiser to discuss her path into public service, what she's hearing from Senate District 12, and how she sees UK's land-grant mission serving communities across the Commonwealth.

Bledsoe represents parts of Fayette County along with Woodford, Mercer and Boyle counties. She calls out infrastructure as a top concern - roads and aging water and wastewater systems - while underscoring how higher education, signature industries and health care set the pace for central Kentucky's future.

Responsible AI, not reckless AI

Bledsoe outlines her focus on technology policy, centered on Kentucky Senate Bill 4 - a framework for responsible AI governance within state government. The aim isn't to regulate every spreadsheet or cloud tool. It's to set standards for higher-risk, decision-making systems: disclose where AI is used, require oversight, and make accountability non-negotiable.

"AI is not spellcheck," Bledsoe said, pushing for tougher scrutiny when systems generate new outputs or influence decisions that affect people. She also flags the risk of deceptive AI-generated political content, especially in the final days before an election, where trust in what voters see is critical.

What this means for IT and dev teams

  • Risk-tier your systems. If a model influences eligibility, benefits, hiring, healthcare or public safety, expect extra scrutiny, human review and auditability.
  • Document the stack. Keep a model registry with versioning, training data provenance (to the extent possible), fine-tuning specs, prompt policies and evaluation results.
  • Disclose AI use clearly. Public-facing apps and notices should explain when AI assists or decides, what data is used, and how to get human help.
  • Build for oversight. Add decision logs, traceable outputs, feature flags to disable models fast, and routing to human review for edge cases and appeals.
  • Test for errors and unfair outcomes. Define metrics, run pre-deployment bias checks, monitor drift, and re-evaluate after model or data changes.
  • Respect privacy. Minimize PII, apply retention limits, scrub sensitive inputs from prompts, and keep data isolated across dev/test/prod.
  • Secure the pipeline. Threat-model prompts and tools, defend against prompt injection and data exfiltration, and vet third-party models and APIs.
  • Plan incidents. Set playbooks for model misbehavior, misleading content, or data leaks - including rollback, user notification and post-mortems.
  • Mark synthetic media. Use provenance standards (e.g., C2PA) or watermarks where feasible and label AI-generated political content clearly.
  • Procure with intent. Bake evaluation criteria, transparency requirements and service-level expectations into RFPs and contracts.
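
The risk-tiering and human-review bullets above can be sketched in a few lines. This is a minimal illustration, not anything prescribed by Senate Bill 4: the domain names, confidence threshold, and `AIDecision` type are all assumptions for the example.

```python
from dataclasses import dataclass

# Hypothetical high-risk domains, echoing the categories named in the list above.
HIGH_RISK_DOMAINS = {"eligibility", "benefits", "hiring", "healthcare", "public_safety"}

@dataclass
class AIDecision:
    system: str        # which application produced the output
    domain: str        # what kind of decision it touches
    output: str        # the model's proposed result
    confidence: float  # model-reported confidence, 0.0-1.0

def route(decision: AIDecision, confidence_floor: float = 0.9) -> str:
    """Return 'human_review' or 'auto' under a simple risk-tier policy."""
    if decision.domain in HIGH_RISK_DOMAINS:
        return "human_review"   # high-risk tier: always reviewed by a person
    if decision.confidence < confidence_floor:
        return "human_review"   # low confidence: escalate edge cases
    return "auto"
```

A real policy would also log every routed decision for audit, but even a sketch like this makes "expect extra scrutiny" concrete: the tier, not the model, decides whether a human is in the loop.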
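
The "document the stack" bullet can likewise be sketched as a minimal model-registry record. The field names here are illustrative assumptions, not a standard schema; the point is that versioning, provenance notes, and evaluation results travel together and get a stable identifier.

```python
import datetime
import hashlib
import json

def register_model(registry: list, name: str, version: str,
                   training_data_note: str, eval_results: dict) -> dict:
    """Append a minimal, content-addressed model-registry record."""
    record = {
        "name": name,
        "version": version,
        "training_data_provenance": training_data_note,  # to the extent possible
        "eval_results": eval_results,
        "registered_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # Short content hash so the record can be cited in decision logs and audits.
    record["record_id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()[:12]
    registry.append(record)
    return record
```

In practice the registry would live in a database or an MLOps platform rather than a Python list, but the record shape is the part that matters for auditability.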
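
For the privacy bullet, scrubbing sensitive inputs from prompts before they leave the application can be sketched with simple pattern replacement. The patterns below are deliberately narrow examples (email, SSN-style, phone-style); a production system would use a vetted PII-detection library rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; real PII detection needs broader, vetted coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace obvious PII with typed placeholders before a prompt is sent out."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Typed placeholders (rather than blank redaction) keep the prompt usable for the model while the raw values stay inside the trust boundary.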

For teams seeking structure, the NIST AI Risk Management Framework (AI RMF) is a practical reference for lifecycles, controls and measurement. For content authenticity, the open C2PA standard is worth adopting.

Policy outlook and UK's role

Looking ahead, Bledsoe points to work on consumer protection, privacy and safeguarding minors online. The state will keep refining its approach as the tech shifts, focusing first where risk to people and services is higher.

She sees institutions like UK driving value on three fronts: research that informs policy and practice, workforce preparation for AI-literate roles, and teaching students to be critical, responsible users of these tools. That mix sets a strong foundation for Kentucky's next decade of digital services.

Listen to the episode

"Behind the Blue" is available on Apple Podcasts, YouTube and Spotify. New episodes drop weekly, covering UK research, medical advances, creative work and university news. Transcripts are available in many podcast apps during playback; older episodes have transcripts on the show's blog.

