Deadline: Oct. 27 - OSTP wants your input on federal rules blocking AI
Developers, ML engineers, security leads, and health IT teams: the Office of Science and Technology Policy is asking for concrete examples of federal regulations that slow or stop AI development, deployment, or adoption. Health care is specifically called out, but feedback is requested from all sectors.
If a requirement, assumption, or compliance framework adds friction without improving outcomes, this is your chance to document it and propose fixes.
What OSTP is requesting
- Examples of regulations, guidance, or legacy standards that add unnecessary cost, time, or risk to AI projects.
- Technical requirements that block modern ML ops, continuous delivery, or model iteration.
- Compliance frameworks that create conflicting or outdated obligations for AI systems.
How to submit by Oct. 27, 2025
- Go to regulations.gov.
- Search for and select docket ID: OSTP-TECH-2025-0067.
- Upload your comment and any attachments, and include specific citations where possible. (A sketch for checking the docket programmatically follows this list.)
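If you want to confirm the docket is live or watch for new filings before you submit, the public regulations.gov v4 API can be queried directly. This is a minimal sketch, assuming the v4 documents endpoint and its filter[docketId] parameter; the docket ID is the one from this notice, and you would substitute your own API key (free keys are issued via api.data.gov; DEMO_KEY works for light testing).

```python
import requests

API_KEY = "DEMO_KEY"  # replace with your own key from api.data.gov
DOCKET_ID = "OSTP-TECH-2025-0067"  # docket ID from this notice

# Query the regulations.gov v4 API for documents filed under the docket.
resp = requests.get(
    "https://api.regulations.gov/v4/documents",
    params={"filter[docketId]": DOCKET_ID},
    headers={"X-Api-Key": API_KEY},
    timeout=30,
)
resp.raise_for_status()

# The API returns JSON:API-style payloads: a "data" list of documents,
# each with an "attributes" object holding title, posted date, and so on.
for doc in resp.json().get("data", []):
    attrs = doc["attributes"]
    print(f'{attrs["postedDate"]}  {attrs["title"]}')
```

An empty result here does not necessarily mean the docket is wrong; new dockets can take time to appear in the API, so treat regulations.gov itself as the source of truth for submission.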
What to include in a high-impact comment
- Rule and citation: Name the regulation, guidance, or control (e.g., exact CFR section, OMB memo, certification rule, or audit requirement).
- Technical impact: Describe the blocker (e.g., bans modern encryption, forbids managed services, disallows continuous model updates, or requires outdated formats).
- Operational cost: Quantify delays, compliance hours, rework, incident risk, or missed outcomes (e.g., "40 extra compliance hours and a three-week delay per model release").
- Proposed fix: Offer a concrete change, such as revised wording, a risk-based exception, an updated standard, a pilot program, or a mapping to current NIST/ISO controls.
- Safeguards: Show how the fix keeps privacy, safety, and security intact (testing protocols, monitoring, human oversight, audit trails).
Common blockers IT and dev teams report
- Model updates: Rules that treat every ML update like a full redeployment instead of supporting safe change control and performance monitoring (a sketch of such a gate follows this list).
- Security baselines: Controls written for static apps that conflict with cloud-native, GPU-heavy, or containerized AI workloads.
- Transparency requirements: Vague "explainability" language without accepted methods, creating moving targets for approval.
- Data access and sharing: Ambiguity that stalls de-identified data use, synthetic data, and privacy-preserving learning methods.
- Procurement and ATO: Multi-month processes that do not account for AI lifecycle speed, sandboxing, or phased risk gates.
- Health care specifics: Conflicts between clinical safety validation and real-world model drift management; unclear expectations for post-deployment monitoring.
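To make the first bullet above concrete, here is a hypothetical promotion gate of the kind "safe change control" implies: a candidate model is compared against the production baseline on agreed metrics and promoted only if it stays within tolerance. The metric names, values, and thresholds are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical change-control gate: promote a candidate model only if it
# matches or beats the production baseline within agreed tolerances.

BASELINE = {"auroc": 0.91, "calibration_error": 0.040}   # production model
CANDIDATE = {"auroc": 0.92, "calibration_error": 0.045}  # proposed update

# Per-metric rules: higher-is-better metrics may not regress by more than
# the tolerance; lower-is-better metrics may not worsen by more than it.
TOLERANCES = {"auroc": 0.005, "calibration_error": 0.01}
HIGHER_IS_BETTER = {"auroc": True, "calibration_error": False}


def passes_gate(baseline: dict, candidate: dict) -> bool:
    """Return True if the candidate clears every metric gate."""
    for metric, tol in TOLERANCES.items():
        delta = candidate[metric] - baseline[metric]
        if HIGHER_IS_BETTER[metric]:
            if delta < -tol:  # regressed beyond tolerance
                return False
        else:
            if delta > tol:   # worsened beyond tolerance
                return False
    return True


if __name__ == "__main__":
    verdict = "promote" if passes_gate(BASELINE, CANDIDATE) else "hold for review"
    print(f"Change-control decision: {verdict}")
```

In a comment to OSTP, a gate like this can anchor a proposed risk-based exception: updates that pass go through expedited change control, while failures trigger the full review the current rule applies to everything.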
Tips for stronger submissions
- Attach short case studies with before/after metrics: time-to-deploy, incident rate, model performance, or patient outcome deltas.
- Map proposed changes to existing frameworks you already use (e.g., modern SDLC, MLOps, incident response, data governance).
- Offer a phased or pilot approach with guardrails instead of a blanket exemption.
- Coordinate with legal, privacy, security, and clinical leadership where applicable.
Related policy activity to track
- Senate HELP Committee is exploring potential uses of AI in care delivery.
- HHS doubled funding for childhood cancer research to accelerate AI-based projects.
- FDA requested feedback on how to measure AI-enabled medical device performance.
- OSTP's request focuses on laws and regulations that may hinder AI development and adoption across sectors, including health care.
Next steps for your team this week
- List your top two regulatory blockers and gather evidence from recent projects.
- Draft a concise comment using the structure above; keep it specific and actionable.
- Review internally, then submit via regulations.gov under docket OSTP-TECH-2025-0067 by Oct. 27.
Skill up your team
If you need team-ready resources to build practical AI, MLOps, and governance skills, browse role-based programs here: AI courses by job.