
New Coalition Formed to Establish AI Cybersecurity Protocols




A coalition of leading tech companies has unveiled a new initiative to develop cybersecurity and safety standards for artificial intelligence tools. The collaboration aims to establish rigorous security protocols across the industry to guard against malicious attacks on AI systems.


Historically, companies have varied widely in their approach to cybersecurity during product development, with some adopting weaker security practices than others. The newly formed Coalition for Secure AI, announced by Google at the Aspen Security Forum in Colorado, seeks to unify these efforts.


The coalition will initially focus on creating software supply chain security standards for AI systems, compiling resources to assess AI risks, and developing a framework to guide the best use cases for AI in cybersecurity.


Founding members include Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, Nvidia, OpenAI, PayPal, and Wiz. The coalition operates under OASIS Open, an international standards and open-source consortium.


Heather Adkins, Google's vice president of security engineering, emphasized the coalition's collaborative nature, noting that it is not driven by executive orders or regulations but by industry initiatives. She highlighted the coalition's potential to make significant progress in cybersecurity.


Many participating companies have already been developing their own AI security standards. For instance, Google released a framework for securing AI last summer, encouraging companies to review their AI security measures and incorporate automation into their defences.


This coalition marks the first time the industry has collectively addressed AI security issues. It plans to share open-source methodologies, frameworks, and tools to help companies securely implement AI into their workflows.


As Adkins mentioned at the Aspen event, the coalition is open to new members and aims to expand its collaborative efforts in AI security.

