Artificial Intelligence Industry Steps Up to Make AI Safe and Ethical
As AI-enabled products enter the physical security space, it’s crucial for product development teams, security integrators, MSPs, and consultants to recognize which companies are addressing the critical issues of AI safety and ethics. Solution providers and associations are actively guiding responsible AI adoption, setting standards, and building trust.
What Information Does IT Expect About AI in Physical Security Systems?
IT teams need details on how AI impacts privacy on-site, data security, compliance with regulations, cybersecurity risks, and safeguards to prevent AI errors from undermining security. They also want clarity on who oversees AI use. These factors fall under the broader umbrella of ethical AI use.
Key Considerations for AI in Physical Security
- Respecting privacy rights of individuals on the premises
- Securing personally identifiable information (PII) against breaches
- Meeting regulatory compliance related to privacy and security
- Implementing safeguards to prevent AI errors from affecting system performance
- Mitigating cybersecurity risks, especially in AI-capable hardware like cameras and system software
- Ensuring operational safeguards to maintain security integrity despite AI issues
- Assigning clear responsibility for AI oversight and regular testing
While leading vendors focus on privacy, transparency, fairness, and compliance, some aspects, such as cybersecurity risks, may be managed as part of broader security engineering rather than as strictly ethical AI issues. Operational safeguards and oversight responsibility typically rest with the deploying organization, making these critical topics for discussions between customers, consultants, and service providers.
Leaders in Ethical AI Use in Physical Security
Several companies set benchmarks for ethical AI in security systems by aligning with emerging regulations and embedding principles like privacy and fairness into their offerings. Yet, their definitions of ethical AI use may not cover every concern, particularly cybersecurity vulnerabilities. Understanding this helps product developers ask the right questions and take ownership of AI governance.
Axis Communications
Axis Communications integrates responsible AI into its core strategy, emphasizing human rights, privacy, and transparency from the start of product development. The company actively monitors regulations such as the EU AI Act to maintain compliance and promote best practices.
Mats Thulin, Director of AI & Analytics Solutions at Axis Communications, states: “Our approach to AI is rooted in one overarching principle: That AI technology, just like all technologies, should leverage and augment human intelligence, build on respect for human rights and should benefit people and society.”
i-PRO
Formerly Panasonic Security, i-PRO has formalized ethical AI through its Ethical Principles for AI. Certified with ISO/IEC 42001 for AI management systems, the company focuses on transparency, privacy, and continuous improvement, aiming to balance innovation with social responsibility.
Milestone Systems
Milestone Systems was the first video management software company to adopt the G7 Code of Conduct on Artificial Intelligence, committing to trustworthy AI. Their ethical AI considerations cover the entire development lifecycle, prioritizing transparency and human oversight. Milestone also leads efforts to establish industry-wide AI standards.
Thomas Jensen, CEO of Milestone Systems, emphasizes: “We need rules to ensure AI is being developed to serve humanity. But companies should not wait for regulations. They must take steps to identify and resolve the weaknesses and pitfalls of the AI they develop.”
Security Industry Association (SIA) Taking Leadership
The Security Industry Association has grown more active in addressing AI-related challenges such as privacy, cybersecurity, and governance. With input from industry leaders, SIA advocates for privacy-focused frameworks, PII protection, regulatory alignment with standards such as the EU AI Act and the NIST AI Risk Management Framework, enhanced cybersecurity for AI devices, continuous AI performance monitoring, and clear governance structures.
For product developers, these developments highlight the need to engage deeply with AI ethics and security. Asking informed questions and establishing strong governance policies are essential steps to ensure AI-enabled physical security systems are trusted and reliable.