New Framework Helps Companies Build Secure AI Systems
Why It Matters
Artificial intelligence offers clear business benefits—improving customer experience, increasing efficiency, and enhancing risk management. Yet, many organizations lag in securing AI systems from the start. “People are trying to figure out how best to use AI, but few are thinking about the security risks that come with it from day one,” says Keri Pearlson, a senior lecturer and principal research scientist at MIT Sloan.
To address this, Pearlson and Nelson Novaes Neto, CTO of Brazil-based C6 Bank, created a framework that guides technical leaders through key security questions early in AI development. Their report, “An Executive Guide to Secure-by-Design AI,” condenses hundreds of technical considerations into 10 strategic questions. These questions help identify risks early and align AI projects with business goals, ethical standards, and cybersecurity needs.
Why AI Risk Demands a Different Approach
AI systems differ from traditional software. Their reliance on data, continuous learning, and probabilistic outputs opens new cyber threat vectors, including:
- Evasion and poisoning attacks: Malicious inputs that skew outputs or corrupt training data.
- Model theft and inversion: Stealing proprietary models or reconstructing sensitive data.
- Prompt injections: Manipulating inputs to force harmful or data-leaking behavior.
- Privacy attacks: Exploiting AI vulnerabilities to access confidential information.
- Hallucinations: AI confidently producing false outputs that damage trust.
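To make one of these threats concrete, consider prompt injection. A minimal, deliberately naive input-screening sketch in Python is shown below; the patterns and function name are illustrative assumptions, not part of the framework, and real defenses layer input and output validation, privilege separation, and model-level guardrails.

```python
import re

# Illustrative only: a keyword-based heuristic for catching common
# prompt-injection phrasings. The pattern list is a hypothetical example
# and would miss paraphrased or obfuscated attacks (high false-negative rate).
SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"reveal (the )?system prompt",
    r"disregard your (rules|guidelines)",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS_PATTERNS)

# Example: flag a classic injection attempt, pass a benign query through.
print(looks_like_injection("Please ignore previous instructions"))  # True
print(looks_like_injection("What are your branch opening hours?"))  # False
```

A filter like this would sit in front of the model as one control among many; the framework's point is that such controls are chosen at design time, not bolted on after an incident.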
These risks can’t simply be patched in later. Existing security frameworks cover parts of the problem but don’t fully address the intersection of AI, security, and design. The new AI Secure-by-Design Executive Framework fills this gap by building the management of these AI-specific risks into the design process itself.
10 Questions for Executive AI Readiness
The framework breaks down AI security into 10 strategic questions that executives can use early in the development process to steer projects toward safer outcomes:
- Strategic alignment: How can AI initiatives align with organizational objectives, budget, values, and ethics?
- Risk management: What methods will identify, assess, and prioritize AI-specific risks?
- Control implementation: Which controls and tools will mitigate identified AI risks?
- Policy, standards, and procedures: What policies ensure data quality, privacy, ethics, and cybersecurity?
- Governance structure: Who oversees AI development, deployment, security, and operations?
- Technical feasibility: Is the AI architecture compatible with existing infrastructure?
- Resource allocation: What security effort is needed, and how will resources be assigned?
- Performance and security monitoring: What metrics track AI effectiveness and security?
- Continuous improvement: How will ongoing monitoring and adaptation be supported?
- Stakeholder engagement: How will AI security, privacy, and ethics be communicated to build shared responsibility?
These straightforward questions help uncover deep issues early, changing the trajectory of AI projects before costly mistakes occur.
Applying the Framework: The C6 Bank Case
C6 Bank, a digital-only bank with over 30 million customers, applied this framework to balance innovation with security. By addressing each question, C6 identified 19 critical design considerations, including the need for model-agnostic infrastructure and new governance models for AI risks.
They developed a platform that separates experimental AI projects from those directly serving customers, allowing innovation without compromising trust. The framework also improved resource planning and guided the creation of internal tools, governance processes, and best-practice guides. Legal and compliance teams used the framework to draft an AI-specific manual outlining regulatory expectations, which helped build stakeholder confidence.
A Smarter Starting Point for AI
As AI becomes central to operations, designing secure systems from the outset is crucial. This framework doesn’t eliminate all risks but offers a practical foundation for better questions, clearer decisions, and more resilient AI systems. “The most powerful thing about these 10 questions is that they force you to think ahead,” Pearlson notes.
Executives ready to integrate AI securely can view this framework as a tool to prevent security from becoming an afterthought—helping protect trust and value in AI-driven initiatives.