The State of AI Security in 2025: Key Insights from the Cisco Report
As businesses increasingly adopt AI, understanding its security risks has become critical. AI is reshaping industries and workflows, but it also brings new security challenges that organizations must tackle. Protecting AI systems is essential to maintain trust, safeguard privacy, and ensure smooth operations.
This article summarizes key points from Cisco’s recent “State of AI Security in 2025” report, providing an overview of AI security today and what companies should prepare for in the future.
A Growing Security Threat to AI
2024 showed that AI adoption is happening faster than many organizations can secure it. According to Cisco, around 72% of organizations use AI in their business functions, yet only 13% feel fully prepared to leverage it safely. This gap is mainly due to security concerns, which remain the biggest barrier to broader AI use across enterprises.
AI introduces threats that traditional cybersecurity isn’t fully equipped to handle. Unlike fixed systems, AI systems are dynamic and adaptive, making threats harder to predict and defend against. Cisco’s report highlights several emerging threats:
- Infrastructure Attacks: AI infrastructure has become a major attack target. For example, a vulnerability in NVIDIA's Container Toolkit allowed attackers to access the host file system, execute code, and escalate privileges. Similarly, Ray, an open-source framework for managing AI workloads across GPU clusters, was exploited in a real-world attack. These incidents show how a single weakness in shared AI infrastructure can impact many users at once.
- Supply Chain Risks: Approximately 60% of organizations rely on open-source AI components. This reliance exposes them to supply chain risks where attackers can tamper with widely used tools. A technique called “Sleepy Pickle” lets adversaries modify AI models even after distribution, making detection very difficult.
- AI-Specific Attacks: New attack methods like prompt injection, jailbreaking, and training data extraction are evolving quickly. These allow attackers to bypass safety controls and access sensitive training data.
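Supply-chain tampering like the "Sleepy Pickle" technique above is possible partly because pickle-based model files can execute arbitrary code the moment they are loaded. One basic (and by itself insufficient) mitigation is to verify a downloaded artifact's checksum against a digest published by a trusted source before deserializing it. The sketch below assumes the provider publishes SHA-256 digests; the function name is illustrative, not from the report:

```python
import hashlib

def verify_model_file(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded model artifact's SHA-256 digest against a
    trusted value published out-of-band by the model provider."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large model files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

Checksum verification only detects tampering that happens after the digest was published; it cannot flag a malicious model shipped with a valid digest, which is why the report also stresses securing the supply chain itself.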
Attack Vectors Targeting AI Systems
Attacks can target AI systems at different stages—from data collection and model training to deployment and inference. The aim is often to manipulate AI outputs, leak private data, or cause harm. These attacks have grown more sophisticated and difficult to spot. Key attack types include:
- Jailbreaking: Crafting adversarial prompts that bypass AI safety measures. Despite advances, even simple jailbreaking techniques still work against advanced models like DeepSeek R1.
- Indirect Prompt Injection: Manipulating input data or context indirectly by supplying compromised source materials such as malicious PDFs or web pages. This causes AI to generate harmful outputs without direct system access, bypassing many defenses.
- Training Data Extraction and Poisoning: Chatbots can be tricked into revealing fragments of their training data, risking privacy and intellectual property. Attackers can also poison training data by injecting malicious inputs: corrupting as little as 0.01% of large datasets like LAION-400M or COYO-700M can influence model behavior, and can cost as little as $60. Separately, Cisco's jailbreak testing achieved a 100% success rate against models such as DeepSeek R1 and Llama 2, and new threats like voice-based jailbreaks targeting multimodal AI models are also emerging.
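To put the poisoning figures above in perspective, 0.01% of a web-scale dataset is still tens of thousands of samples, yet remains a tiny, hard-to-audit slice. A quick back-of-the-envelope calculation (dataset sizes taken from the dataset names; the helper function is illustrative):

```python
def poisoned_sample_count(dataset_size: int, fraction: float) -> int:
    """How many samples an attacker must control at a given poison rate."""
    return int(dataset_size * fraction)

# 0.01% (= 0.0001) of the two web-scale datasets named in the report
laion_poisoned = poisoned_sample_count(400_000_000, 0.0001)  # LAION-400M -> 40,000
coyo_poisoned = poisoned_sample_count(700_000_000, 0.0001)   # COYO-700M  -> 70,000
print(laion_poisoned, coyo_poisoned)
```

At roughly $60 to influence a slice that size, the barrier to entry is negligible compared to the cost of training the models themselves.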
Findings from Cisco’s AI Security Research
Cisco’s research uncovered several alarming findings:
- Algorithmic Jailbreaking: Even top AI models like GPT-4 and Llama 2 can be automatically tricked using a method called Tree of Attacks with Pruning (TAP).
- Risks in Fine-Tuning: Fine-tuning foundation models for specific domains can weaken safety guardrails. Fine-tuned models were over three times more vulnerable to jailbreaking and 22 times more likely to produce harmful content compared to original models.
- Training Data Extraction: Researchers used a decomposition method to make chatbots reproduce news article fragments, exposing sensitive or proprietary data.
- Data Poisoning: Poisoning a small fraction of datasets is inexpensive and effective at changing model behavior. Around $60 can poison 0.01% of large datasets, demonstrating low barriers for attackers.
The Role of AI in Cybercrime
AI is not just a target; it is also a tool for cybercriminals. Automation and AI-driven social engineering have increased attack effectiveness and stealth. From phishing to voice cloning, AI helps criminals craft personalized attacks.
The report highlights malicious AI tools like “DarkGPT,” which generate phishing emails and exploit vulnerabilities. These tools are accessible even to low-skilled criminals, allowing them to launch sophisticated attacks that evade standard defenses.
Best Practices for Securing AI
Cisco recommends practical steps to improve AI security:
- Manage Risk Across the AI Lifecycle: Identify and reduce risks at every stage, from data sourcing and model training through deployment and monitoring. Secure third-party components, apply strong guardrails, and control access tightly.
- Use Established Cybersecurity Practices: Traditional security practices like access control, permission management, and data loss prevention remain crucial for AI security.
- Focus on Vulnerable Areas: Prioritize defenses for supply chains and third-party AI applications where vulnerabilities are most common.
- Educate and Train Employees: Train users on responsible AI use and risk awareness to minimize accidental exposure and misuse.
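As a concrete, deliberately simplistic illustration of the "strong guardrails" idea, a pipeline that feeds retrieved documents into a model might screen them for common injection phrasing before they ever reach the prompt. This heuristic is a sketch of the concept only: the phrase list is hypothetical, and production guardrails rely on trained classifiers and policy engines rather than keyword matching:

```python
# Illustrative phrases only; real injection attempts are far more varied.
SUSPECT_PHRASES = (
    "ignore previous instructions",
    "disregard the system prompt",
    "you are now",
)

def flag_untrusted_text(text: str) -> bool:
    """Return True when retrieved content contains common injection phrasing,
    so the pipeline can quarantine it for review instead of prompting with it."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in SUSPECT_PHRASES)
```

Even a crude screen like this shows where a guardrail belongs: between untrusted inputs and the model, not only on the model's outputs.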
Looking Ahead
AI adoption will continue to grow alongside evolving security risks. Governments and organizations worldwide are beginning to develop policies and regulations to improve AI safety. The balance between security and innovation will shape the next phase of AI development.
Organizations that prioritize securing AI systems while innovating will be best positioned to manage risks and seize new opportunities.
For those interested in expanding their AI skills and security knowledge, exploring Complete AI Training offers up-to-date courses that cover AI fundamentals, security practices, and practical applications.