Government Research Finds Cyber Red Teams ‘Deeply Sceptical’ of AI
A recent study commissioned by the Department for Science, Innovation and Technology reveals that cyber red teams remain cautious about the role of artificial intelligence in enhancing cyber defence strategies. These teams, responsible for simulating attacker techniques, are sceptical of AI’s current capabilities and its practical value in offensive cyber operations.
Limited Impact of AI on Offensive Cybersecurity
The research, conducted by cyber consultancy Prism Infosec, explored how emerging technologies are being integrated into commercial offensive cyber services. The findings indicate that AI is expected to have only a marginal effect on red teams’ ability to test and probe organisations’ security defences.
Interviewees consistently described the hype around AI as overblown, with many commercial products exaggerating what the technology can actually do. This has created confusion about AI’s genuine capabilities within the offensive cyber sector.
Current AI Use and Barriers
The most common use of AI by threat actors today appears to be in crafting sophisticated social engineering attacks. For professional red teams, however, ethical concerns, data privacy issues, cost, and the security vulnerabilities of public AI models limit wider adoption.
Despite these barriers, there is cautious optimism that AI will eventually earn a place in the offensive cyber toolkit. For now, teams rely largely on expert human skill rather than automation:
- Private, tunable AI models hosted by cybersecurity firms could in future enhance services such as attack surface monitoring and vulnerability prioritisation (a toy sketch of prioritisation logic follows this list).
- Until such technologies mature, manual, specialised human efforts remain central to offensive cyber operations.
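To make “vulnerability prioritisation” concrete, here is a minimal sketch in Python of the kind of triage logic such a service automates, whether through hand-written rules or, eventually, AI-assisted scoring. The hosts, CVE identifiers, and weights below are invented placeholders for illustration, not taken from the study:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One vulnerability finding from an attack surface scan (all values invented)."""
    host: str
    cve: str
    cvss: float            # base severity score, 0.0-10.0
    internet_facing: bool  # is the affected asset exposed to the internet?
    exploit_public: bool   # is a public exploit known to exist?

def priority(f: Finding) -> float:
    """Toy priority score: severity amplified by exposure and exploitability."""
    score = f.cvss
    if f.internet_facing:
        score *= 1.5   # illustrative weight: exposed assets are reachable by attackers
    if f.exploit_public:
        score *= 1.3   # illustrative weight: weaponised flaws tend to be attacked first
    return score

# Hypothetical findings; host names and CVE identifiers are placeholders.
findings = [
    Finding("hr-database", "CVE-0000-00001", 9.8, internet_facing=False, exploit_public=False),
    Finding("vpn-gateway", "CVE-0000-00002", 8.1, internet_facing=True, exploit_public=True),
    Finding("mail-relay",  "CVE-0000-00003", 7.5, internet_facing=True, exploit_public=False),
]

# Triage queue: highest-priority findings first.
for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.host:12s} {f.cve}  priority={priority(f):.2f}")
```

The design point is simply that raw severity alone is a poor triage signal: context such as exposure and exploit availability can push a lower-severity flaw ahead of a higher-severity one, which is exactly the judgement such services aim to automate.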
Shifts in Technology and Focus Areas
The study also notes a surprising lack of attention to blockchain and cryptocurrencies among red teams. Instead, the migration to cloud-based architectures, accelerated by the COVID-19 pandemic, has had the more significant impact, driving new tooling and practices as client environments evolve.
Another noteworthy finding is the sector’s lag in developing offensive capabilities for non-Windows platforms such as macOS, Linux, Unix, Android, and iOS. Because AI models draw heavily on published research and tooling, the scarcity of both for these platforms also limits the scope for applying AI to them.
Balance Between Offensive and Defensive Cybersecurity
Red teams perceive the balance between offensive and defensive cybersecurity as more even than in the past. The heightened focus on defensive measures has made offensive operations more challenging, encouraging a cautious approach to sharing knowledge so as not to hasten defensive countermeasures.
Offensive professionals report that the bar for effectiveness is rising, demanding deeper coding expertise, automation skills, and adaptability. With traditional techniques losing their edge, red teams increasingly have to discover novel, short-lived vulnerabilities rather than rely on known exploits.
As defences become more sophisticated, information about new offensive tools and methods may remain restricted until these are neutralised by defensive measures.
Government’s Own Red Team Initiatives
The civil service operates the Government Security Red Team, known as OPEN WATER, which mimics attacker tactics to test departmental defences. This group, alongside external cyber firms engaged by government bodies such as the Ministry of Defence and the Government Digital Service, plays a critical role in identifying vulnerabilities through simulated attacks and reconnaissance.
These efforts reflect the ongoing need for robust, expert-led offensive cyber testing amid evolving technological and threat landscapes.