How GitLab Strengthens AI Data Centre Security and Resilient Software Supply Chains
GitLab secures AI data centre operations by combining AI-driven automation with human expertise for threat detection and compliance. Teams are trained to counter AI-specific attacks and manage supply chain risks effectively.

How GitLab Enhances Security for AI Data Centre Operations
Software drives the digital economy, supporting everything from everyday apps to critical infrastructure in industries and governments. For data centre operators, ensuring software is secure and resilient at scale is essential. GitLab plays a central role by offering collaborative development tools trusted by enterprises and public organisations worldwide.
Julie Davila, GitLab’s Vice President of Product Security, leads efforts to secure the platform and the software supply chains it supports. Drawing from experience with NASA, Sophos, Ansible, and Red Hat, she applies practical solutions to complex security challenges. Her team uses GitLab daily, continuously improving processes and resilience.
Collaborating with AI to Identify and Respond to Security Threats
Security teams should view AI as a force multiplier—not a replacement for human expertise. AI is best suited for handling high-volume, low-context tasks such as automated triage of vulnerability reports, initial classification of incidents, and pattern recognition in security data.
Using AI to generate initial security release notes or perform early bug bounty triage can speed response times while keeping critical decisions in human hands. Clear boundaries are essential: AI processes data and performs preliminary analysis; security professionals provide context, validate results, and make strategic choices.
Implement feedback loops to train AI systems based on human corrections. This approach scales operations without losing the nuanced judgment only experienced practitioners offer.
Securing AI Deployments While Meeting Regulatory Requirements
Most organisations consume rather than build AI models, yet they face regulatory scrutiny from frameworks like NIST’s AI Risk Management Framework, ISO/IEC 23053, and the EU AI Act. The first step is to inventory all AI touchpoints—from third-party models to embedded AI features—and map them against compliance needs.
Establish governance for AI integration by documenting the models used, their purposes, and maintaining audit logs of AI-assisted decisions. These records are crucial when regulators inquire about AI’s influence on products or customer outcomes.
Conduct AI-specific incident tabletops because AI systems behave differently from traditional deterministic systems. Practice scenarios such as model drift impacting operations, prompt injection exposing sensitive data, or AI-generated content violating rules. These exercises reveal gaps in detection and response that only surface through hands-on experience.
Upskilling Teams to Counter AI-Driven Social Engineering Attacks
Teaching security teams prompt engineering goes beyond AI usage—it helps them recognize attack vectors. Hands-on exercises where teams attempt prompt injection against sandboxed AI systems expose common manipulation techniques.
Attackers can exploit vulnerabilities like remote prompt injection, compromising AI assistants via external data sources. Awareness of these methods is key to building better defences.
Running ‘purple team’ exercises where defenders use AI to simulate phishing campaigns helps identify markers of AI-generated social engineering: subtle tone inconsistencies, unnaturally perfect grammar, or templated responses disguised as personalised messages.
Encouraging a culture of questioning AI output strengthens overall security posture.
Balancing Innovation with Supply Chain Risk Management
Agentic AI can boost developer productivity, but security practices must keep pace. Pragmatic governance enables innovation without compromising safety.
Implement controls aligned with Supply-chain Levels for Software Artifacts (SLSA) for AI components. Track the provenance of models and training data, ensure build integrity of AI pipelines, and verify AI agent behaviour before deployment.
At GitLab, AI agents are treated as privileged identities linked to human operators for accountability. Establish ‘paved roads’ for AI adoption featuring pre-approved models, secure integration patterns, and applying the same security controls to AI-generated code as to human-written code—just earlier in the workflow.
This approach prevents security issues like AI assistants suggesting insecure code or exposing sensitive credentials. Security teams who provide clear, fast paths for safe AI adoption become enablers rather than blockers of innovation.
For teams looking to sharpen AI and security skills, exploring targeted courses on prompt engineering and AI security fundamentals can be valuable. Resources like Complete AI Training’s prompt engineering courses offer practical guidance for building these capabilities.